
AI Literacy

There are multiple levels of bias affecting AI output

[Figure: An iceberg with a section labelled "computational bias" showing above the waterline, while sections labelled "human bias" and "systemic bias" are hidden below. Graphic credit: N. Hanacek/NIST]

AI-generated content regularly shows evidence of bias, sometimes obvious, sometimes subtle and difficult to detect. These layers of bias compound: they begin with data collection and labeling, extend through the training phase, and continue into deployment. Bias is an ethical concern for the creators of AI, but responsible users must be aware of it as well. ALWAYS evaluate answers with a critical eye, and use the built-in reporting feature when you encounter biased output.

Recall that in both the classroom and your career, you are responsible for your work, so YOU must critically examine AI-generated content for bias before using it.

Examples of AI Bias and Its Impact

"Biased AI can give consistently different outputs for certain groups compared to others. Biased outputs can discriminate based on race, gender, biological sex, nationality, social class, or many other factors. Human beings choose the data that algorithms use, and even if these humans make conscious efforts to eschew bias, it can still be baked into the data they select. Extensive testing and diverse teams can act as effective safeguards, but even with these measures in place, bias can still enter machine-learning processes. AI systems then automate and perpetuate biased models."

Rutgers: Battling Bias in AI
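The quotation above notes that biased AI "can give consistently different outputs for certain groups compared to others." One simple way auditors check for this is to compare the rate of favorable outputs across groups, a measure often called demographic parity. The sketch below illustrates the idea with made-up data; the group labels, outputs, and numbers are purely hypothetical, not results from any real system.

```python
# A minimal sketch of one common bias check: comparing a model's
# rate of favorable outputs across groups ("demographic parity").
# All data below is illustrative, not from a real AI system.

def selection_rates(outputs, groups):
    """Return the fraction of favorable (1) outputs for each group."""
    rates = {}
    for g in set(groups):
        picks = [o for o, grp in zip(outputs, groups) if grp == g]
        rates[g] = sum(picks) / len(picks)
    return rates

def parity_gap(rates):
    """Largest difference in favorable-output rate between any two groups."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Hypothetical model decisions (1 = favorable outcome) and group labels.
outputs = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0]
groups  = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(outputs, groups)
print(rates)              # group A is favored far more often than group B
print(parity_gap(rates))  # a large gap is a red flag worth investigating
```

A large gap does not prove discrimination by itself, but it is exactly the kind of "consistently different output" the quotation describes, and it signals that the system deserves closer scrutiny.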

GenAI outputs reflect biases in the prompt

The wording used to create a prompt will influence the answer provided. See the example on the page linked below.

A Bias & Fairness Audit Toolkit