
AI Literacy


Ethics Recommendations

Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529

Privacy in an AI Era

Three suggestions for mitigating the risks to data privacy posed by the development and adoption of AI:

1. Denormalize data collection by default by shifting away from opt-out to opt-in data collection. Data collectors must facilitate true data minimization through “privacy by default” strategies and adopt technical standards and infrastructure for meaningful consent mechanisms. (A brief code sketch of this opt-in default appears after the citation below.)

2. Focus on the AI data supply chain to improve privacy and data protection. Ensuring dataset transparency and accountability across the entire life cycle must be a focus of any regulatory system that addresses data privacy.

3. Flip the script on the creation and management of personal data. Policymakers should support the development of new governance mechanisms and technical infrastructure (e.g., data intermediaries and data permissioning infrastructure) to support and automate the exercise of individual data rights and preferences.


King, J., & Meinhardt, C. (2024). Rethinking privacy in the AI era: Policy provocations for a data-centric world. Stanford University Human-Centered Artificial Intelligence.
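The “privacy by default” idea in the first suggestion can be made concrete in a few lines of code. The sketch below is purely illustrative and assumes nothing about any real consent platform: every class and field name is hypothetical. The essential design choice is the defaults, where every data-collection flag starts disabled and data is gathered only after an explicit, purpose-specific opt-in.

    from dataclasses import dataclass

    # Illustrative only: "privacy by default" means every data-collection
    # flag starts disabled, so nothing is gathered without an explicit,
    # purpose-specific opt-in. All names here are hypothetical.
    @dataclass
    class ConsentSettings:
        analytics: bool = False          # opt-in, not opt-out
        personalization: bool = False
        third_party_sharing: bool = False

        def grant(self, purpose: str) -> None:
            """Record an explicit opt-in for one named purpose."""
            if purpose not in self.__dataclass_fields__:
                raise ValueError(f"unknown purpose: {purpose}")
            setattr(self, purpose, True)

    def may_collect(settings: ConsentSettings, purpose: str) -> bool:
        # Data minimization: collect only what was affirmatively allowed.
        return getattr(settings, purpose, False)

    settings = ConsentSettings()
    assert not may_collect(settings, "analytics")  # nothing collected by default
    settings.grant("analytics")                    # explicit, recorded consent
    assert may_collect(settings, "analytics")

An opt-out system would simply invert those booleans, which is exactly the default the report argues against.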

Whistleblower Protections Needed

The dangers of speaking out about AI were highlighted in an open letter published June 4, 2024, by a group of current and former employees of OpenAI, Google DeepMind and Anthropic. Whistleblower protections remain out of date and patchwork.

“I fear it will take a catastrophe to drive both greater oversight and stronger whistleblower protections for tech-sector whistleblowers,” said Dana Gold, senior counsel and director of advocacy and strategy at the Government Accountability Project, a whistleblower protection and advocacy organization. “We should be very grateful to the AI employees who are speaking out now to prevent one, and we should all decry any reprisal they suffer,” she said.

Given the unlikelihood of a legislative fix, Gold said, "the tech industry can lead in implementing professional ethics standards like engineers, lawyers and doctors have to regulate the tech industry as a profession."

As part of that ethics standard, tech employers can make "contractual commitments to zero-tolerance for retaliation," Gold said (Thibodeau, 2024).

The authors of the open letter asked AI companies to commit to the following principles:

We therefore call upon advanced AI companies to commit to these principles:

  1. That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
  2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
  3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
  4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.


Thibodeau, P. (2024, June 6). Catastrophic AI risks highlight need for whistleblower laws. TechTarget. https://www.techtarget.com/searchhrsoftware/news/366588073/Catastrophic-AI-risks-highlight-need-for-whistleblower-laws

Water Use Impacts

A single ChatGPT conversation uses about 16 ounces (roughly 500 millilitres) of water (C. Gordon, Forbes).

"Air pollution and carbon emissions are well-known environmental costs of AI. But, a much lesser-known fact is that AI models are also water guzzlers. They consume fresh water in two ways: onsite server cooling (scope 1) and offsite electricity generation (scope 2)."

Figure caption: An example of a data centre’s operational water usage: on-site scope-1 water for server cooling (via cooling towers in the example) and off-site scope-2 water usage for electricity generation.

Ren, S. (2023, November 30). How much water does AI consume? The public deserves to know. OECD.AI Policy Observatory, Organisation for Economic Co-operation and Development.
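Ren’s scope-1/scope-2 framing can be combined with the per-conversation figure above for a rough back-of-envelope estimate. In the sketch below, the 500-millilitre-per-conversation and 200-million-requests-per-day figures come from the sources cited in this guide; the queries-per-conversation value is an assumption often quoted alongside the 500 ml estimate, not a measured constant.

    # Back-of-envelope water arithmetic; all inputs are rough estimates.
    ML_PER_CONVERSATION = 500         # ~16 oz of fresh water per conversation
    QUERIES_PER_CONVERSATION = 25     # assumed; estimates often say 20-50
    REQUESTS_PER_DAY = 200_000_000    # daily requests (see the energy section)

    ml_per_query = ML_PER_CONVERSATION / QUERIES_PER_CONVERSATION
    litres_per_day = ml_per_query * REQUESTS_PER_DAY / 1_000

    print(f"~{ml_per_query:.0f} ml of water per query")   # ~20 ml
    print(f"~{litres_per_day:,.0f} litres per day")       # ~4,000,000 litres

Under these assumptions the daily total is on the order of millions of litres of fresh water, split between on-site cooling (scope 1) and off-site electricity generation (scope 2).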

Energy Use Impacts

ChatGPT consumes over half a million kilowatt-hours of electricity each day to serve about two hundred million requests.

ChatGPT's daily power usage is roughly equal to that of about 17,000 U.S. households, each using about twenty-nine kilowatt-hours per day. (C. Gordon, Forbes)
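The household comparison follows directly from the two figures above; the short calculation below just makes the arithmetic explicit. All inputs are the estimates cited in this section, not metered data.

    # Energy arithmetic from the figures cited above (estimates only).
    KWH_PER_DAY = 500_000             # ChatGPT's estimated daily consumption
    REQUESTS_PER_DAY = 200_000_000    # estimated daily requests
    HOUSEHOLD_KWH_PER_DAY = 29        # average U.S. household, per day

    wh_per_request = KWH_PER_DAY * 1_000 / REQUESTS_PER_DAY
    households = KWH_PER_DAY / HOUSEHOLD_KWH_PER_DAY

    print(f"~{wh_per_request:.1f} Wh per request")   # ~2.5 Wh
    print(f"~{households:,.0f} U.S. households")     # ~17,000 households

Each request is cheap (a few watt-hours), but at two hundred million requests a day the total rivals the daily consumption of a small city.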

Data center power requirements are growing rapidly because of AI. This has led Microsoft to strike a deal to restart a reactor at Three Mile Island, the plant best known for its 1979 reactor meltdown.

Microsoft deal propels Three Mile Island restart, with key permits still needed. (2024, September 21). Reuters.