
Library Research Level 3

Librarians & "AI"

Librarians focus on accuracy in information, equitable access to information, intellectual freedom, the right to privacy and confidentiality, intellectual property rights, and knowledge systems. These commitments are why many librarians have concerns about generative AI. The widespread use of generative AI also poses serious environmental threats and drives labor abuses globally.

This page has some ideas and issues to consider as you build your library research skills. 

"AI" & Risks to the Library Research Process

Generative AI products should not be confused with search engines. They cannot be trusted to generate accurate writing or research, and users must painstakingly fact-check for misinformation and bias.

The tools also harm the knowledge systems that researchers rely on. Here are a few of the problems, according to technology scholar Helen Beetham:

  • Degradation of search and news publishing, introducing biases, inaccuracies, and fakery as data and content.
  • Plagiarism of content while driving traffic away from the organizations that create it. (Scholarly publishers, libraries, archives, heritage resources, and public knowledge projects are key targets.)
  • Disruption of long-established norms such as transparency and replicability of scientific discovery, open peer review, and accountability in authorship.
  • Support from companies like OpenAI for rolling back copyright laws, which threatens creative and cultural work. 

LLM "research assistants" harm student researchers. According to Rowan University professor Tiffany DeRewal, "LLM research assistants purport to 'streamline'—or bypass—the challenging, recursive, and often messy work of identifying, evaluating, analyzing, and synthesizing the existing research on a given topic." However, these processes are integral to building a researcher's intellectual abilities and skills. From the standpoint of learning, "such process is more valuable than the final product."

Detecting Generative AI in Scholarly Articles

Unfortunately, some scholars irresponsibly use AI in their research. Sage Publications identifies red flags to look for as you evaluate scholarly information.

Writing Style Red Flags

AI writing often has a distinct "texture" or pattern that can be a giveaway. Here are some common signs:

  • Repetitive Phrasing or Over-Explanation: AI tends to restate things, summarize what it's just said, or over-explain obvious points. For example, "It is important to note that peer review is important because it ensures quality, which is important." If you find yourself reading the same point multiple times, it might be AI at work.
  • Generic or Vague Language: AI sometimes avoids specifics and relies on filler phrases like, "This is an important area of research," "More research is needed," or "Studies have shown…" without citing real studies. This lack of detail can be a red flag.
  • Flawless but Empty Prose: AI-generated text may read smoothly yet feel oddly superficial, as if it says a lot without really saying anything. The issue isn't always poor grammar; sometimes it's "lifeless prose," words stitched together that carry no real meaning.

Note, however, that poor grammar or writing does not necessarily point to the use of AI.
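As a rough illustration of the repetitive-phrasing red flag above, a short script can count repeated multi-word phrases in a passage. This is only a sketch of a simple heuristic, not a reliable AI detector; the function name and thresholds are our own choices, and human writers repeat themselves too.

```python
from collections import Counter
import re


def repeated_phrases(text: str, n: int = 5, min_count: int = 2) -> dict:
    """Return n-word phrases that occur at least min_count times.

    Heavy repetition of the same phrase is worth a second look,
    though it proves nothing on its own.
    """
    # Lowercase and keep only word characters so minor punctuation
    # differences don't hide a repeat.
    words = re.findall(r"[a-z']+", text.lower())
    # Slide an n-word window across the text.
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return {g: c for g, c in Counter(grams).items() if c >= min_count}
```

Running it on a sentence like the example in the first bullet ("It is important to note that peer review is important. It is important to note that it ensures quality.") flags the phrase "it is important to note" as occurring twice.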

Citation Red Flags

AI often fabricates or mishandles citations. Here’s what to look out for:

  • Non-existent References: Often called “phantom references,” these look real at first glance but don't exist in databases like PubMed, Scopus, or Google Scholar. Always verify references to ensure they are legitimate.
  • Incorrect DOIs or Journal Titles: AI might assign the wrong journal to a given article or provide incorrect volume/issue/page numbers. Double-check these details to catch any discrepancies.
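One way to act on this advice is to check whether a citation's DOI is actually registered. The sketch below uses the public CrossRef REST API (api.crossref.org), which answers HTTP 404 for DOIs it has no record of. The helper names and the simple prefix pattern are illustrative assumptions, and a registered DOI can still be paired with the wrong title or authors, so compare the returned metadata against the citation as well.

```python
import re
import urllib.error
import urllib.parse
import urllib.request

# Rough syntactic shape of a DOI: "10.", a numeric registrant
# prefix, a slash, then a suffix chosen by the publisher.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")


def looks_like_doi(doi: str) -> bool:
    """Cheap offline check that a string is even shaped like a DOI."""
    return bool(DOI_PATTERN.match(doi))


def doi_is_registered(doi: str, timeout: float = 10.0) -> bool:
    """Ask the CrossRef REST API whether this DOI is registered.

    A 404 response is a strong hint that the citation may be a
    "phantom reference". Requires network access.
    """
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False
```

The syntactic check catches strings that could never be DOIs before any network request is made; only `doi_is_registered` actually contacts CrossRef.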

Content/Logic Red Flags

AI can struggle with nuanced argumentation. Here are some signs:

  • Contradictions Within the Text: Saying one thing in one paragraph and the opposite later can indicate AI-generated content. Consistency is key in academic writing.
  • Misuse of Technical Terms: Using jargon incorrectly or awkwardly is a common AI mistake. If the technical language feels off, it might be worth a closer look.
  • Overly Uniform Structure: Every paragraph starts the same way or follows an oddly rigid pattern. This lack of variation can be a sign of AI involvement.

Generative AI & Your Rights as a Learner

Kathryn Conrad, University of Kansas professor and author of Blueprint for an AI Bill of Rights, has articulated rights for students that numerous institutions have adopted. Here is a summary of her "blueprint":

  • Safe and Effective Systems: You should be protected from unsafe or ineffective systems.  
  • Algorithmic Discrimination Protections: You should not face discrimination by algorithms and systems should be used and designed in an equitable way.  
  • Data Privacy: You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.  
  • Notice and Explanation: You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.  
  • Human Alternatives, Consideration, and Fallback: You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter. 

[Image: an error-filled picture of a tandem bike created with GPT-5, captioned "Don't ride this bike!"]