
AI Literacy

Books & eBooks

Ethics Recommendations

Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector

Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529

Whistleblower Protections Needed

The dangers of speaking out about AI were highlighted in an open letter published June 4, 2024, by a group of current and former employees of OpenAI, Google DeepMind, and Anthropic. Existing whistleblower protections are outdated and patchwork.

“I fear it will take a catastrophe to drive both greater oversight and stronger whistleblower protections for tech-sector whistleblowers,” said Dana Gold, senior counsel and director of advocacy and strategy at the Government Accountability Project, a whistleblower protection and advocacy organization. “We should be very grateful to the AI employees who are speaking out now to prevent one, and we should all decry any reprisal they suffer,” she said.

Given the unlikelihood of a legislative fix, Gold said, "the tech industry can lead in implementing professional ethics standards like engineers, lawyers and doctors have to regulate the tech industry as a profession."

As part of that ethics standard, tech employers can make "contractual commitments to zero-tolerance for retaliation," Gold said (Thibodeau, 2024).

The authors of the open letter asked AI companies to commit to the following principles:

We therefore call upon advanced AI companies to commit to these principles:

  1. That the company will not enter into or enforce any agreement that prohibits “disparagement” or criticism of the company for risk-related concerns, nor retaliate for risk-related criticism by hindering any vested economic benefit;
  2. That the company will facilitate a verifiably anonymous process for current and former employees to raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise;
  3. That the company will support a culture of open criticism and allow its current and former employees to raise risk-related concerns about its technologies to the public, to the company’s board, to regulators, or to an appropriate independent organization with relevant expertise, so long as trade secrets and other intellectual property interests are appropriately protected;
  4. That the company will not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed. We accept that any effort to report risk-related concerns should avoid releasing confidential information unnecessarily. Therefore, once an adequate process for anonymously raising concerns to the company’s board, to regulators, and to an appropriate independent organization with relevant expertise exists, we accept that concerns should be raised through such a process initially. However, as long as such a process does not exist, current and former employees should retain their freedom to report their concerns to the public.


Thibodeau, P. (2024, June 6). Catastrophic AI risks highlight need for whistleblower laws. TechTarget. https://www.techtarget.com/searchhrsoftware/news/366588073/Catastrophic-AI-risks-highlight-need-for-whistleblower-laws