
AI Literacy

There are multiple levels of bias affecting AI output

Figure: An iceberg with a section labelled "computational bias" showing above the waterline, while sections labelled "human bias" and "systemic bias" are hidden below.

Graphic credit: N. Hanacek/NIST

AI-generated content regularly shows evidence of bias. Sometimes it is obvious; sometimes it is subtle and difficult to detect, and multiple layers of bias compound one another. This is an ethical concern for the creators of AI, but responsible users must also be aware of it. ALWAYS evaluate answers with a critical eye and use the built-in reporting feature when you encounter these problems.

Recall that in both the classroom and your career, you are responsible for your work, so YOU must critically examine AI-generated responses for bias before using them.


Examples of AI Bias and Its Impact

In October 2023, the IBM Data and AI Team published "Shedding light on AI bias with real world examples" on the official IBM Blog.

In October 2023, Scientific American published the article "Humans Absorb Bias from AI—And Keep It after They Stop Using the Algorithm" by Lauren Leffer.

Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: A systematic review. Smart Learning Environments, 11(1), 28. https://doi.org/10.1186/s40561-024-00316-7

"Biased AI can give consistently different outputs for certain groups compared to others. Biased outputs can discriminate based on race, gender, biological sex, nationality, social class, or many other factors. Human beings choose the data that algorithms use, and even if these humans make conscious efforts to eschew bias, it can still be baked into the data they select. Extensive testing and diverse teams can act as effective safeguards, but even with these measures in place, bias can still enter machine-learning processes. AI systems then automate and perpetuate biased models."

Rutgers, "Battling Bias in AI"