Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute. https://doi.org/10.5281/zenodo.3240529
Three suggestions for mitigating the data privacy risks posed by the development and adoption of AI:
1. Denormalize data collection by default by shifting away from opt-out to opt-in data collection. Data collectors must facilitate true data minimization through “privacy by default” strategies and adopt technical standards and infrastructure for meaningful consent mechanisms.
2. Focus on the AI data supply chain to improve privacy and data protection. Ensuring dataset transparency and accountability across the entire life cycle must be a focus of any regulatory system that addresses data privacy.
3. Flip the script on the creation and management of personal data. Policymakers should support the development of new governance mechanisms and technical infrastructure (e.g., data intermediaries and data permissioning infrastructure) to support and automate the exercise of individual data rights and preferences.
King, J., & Meinhardt, C. (2024). Rethinking privacy in the AI era: Policy provocations for a data-centric world. Stanford University Human-Centered Artificial Intelligence.
The dangers of speaking out about AI were highlighted in an open letter published June 4, 2024, by a handful of current and former employees of OpenAI, Google DeepMind, and Anthropic. Whistleblower protections for tech workers remain outdated and patchwork.
“I fear it will take a catastrophe to drive both greater oversight and stronger whistleblower protections for tech-sector whistleblowers,” said Dana Gold, senior counsel and director of advocacy and strategy at the Government Accountability Project, a whistleblower protection and advocacy organization. “We should be very grateful to the AI employees who are speaking out now to prevent one, and we should all decry any reprisal they suffer,” she said.
Given the unlikelihood of a legislative fix, Gold said, “the tech industry can lead in implementing professional ethics standards like engineers, lawyers and doctors have to regulate the tech industry as a profession.”
As part of that ethics standard, tech employers can make “contractual commitments to zero-tolerance for retaliation,” Gold said (Thibodeau, 2024).
The authors of the open letter asked AI companies to commit to a set of principles: “We therefore call upon advanced AI companies to commit to these principles …”
Thibodeau, P. (2024, June 6). Catastrophic AI risks highlight need for whistleblower laws. TechTarget. https://www.techtarget.com/searchhrsoftware/news/366588073/Catastrophic-AI-risks-highlight-need-for-whistleblower-laws
A single ChatGPT conversation uses about 16 ounces (roughly half a liter) of water (C. Gordon, Forbes).
"Air pollution and carbon emissions are well-known environmental costs of AI. But, a much lesser-known fact is that AI models are also water guzzlers. They consume fresh water in two ways: onsite server cooling (scope 1) and offsite electricity generation (scope 2)."
Ren, S. (2023, November 30). How much water does AI consume? The public deserves to know. OECD.AI Policy Observatory; Organisation for Economic Co-operation and Development.
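A quick unit check on the water figure above (a minimal sketch; the 16-ounce value is the one cited from Forbes, and the conversion factor is the standard U.S. fluid ounce):

# Convert the cited 16 U.S. fluid ounces to metric (a sanity check, not new data)
ML_PER_US_FL_OZ = 29.5735          # milliliters per U.S. fluid ounce
ounces_per_conversation = 16       # figure cited from C. Gordon, Forbes

ml_per_conversation = ounces_per_conversation * ML_PER_US_FL_OZ
print(f"~{ml_per_conversation:.0f} ml per conversation")  # ~473 ml, i.e., about half a liter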
ChatGPT consumes more than half a million kilowatt-hours of electricity each day while serving roughly two hundred million requests.
That daily usage equals that of about 17,000 U.S. households, each consuming roughly twenty-nine kilowatt-hours per day. (C. Gordon, Forbes)
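A quick arithmetic check of the electricity figures above (a minimal sketch in Python; all constants are the values cited in the text, and the household equivalence follows directly from them):

# Electricity figures as reported in the coverage quoted above
daily_usage_kwh = 500_000          # ChatGPT's reported daily electricity use, in kWh
requests_per_day = 200_000_000     # reported number of daily requests
household_kwh_per_day = 29         # average U.S. household's daily use, in kWh

energy_per_request_wh = daily_usage_kwh / requests_per_day * 1000
household_equivalent = daily_usage_kwh / household_kwh_per_day

print(f"Energy per request: ~{energy_per_request_wh:.1f} Wh")           # ~2.5 Wh
print(f"Household equivalent: ~{household_equivalent:,.0f} homes/day")  # ~17,241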
Data center power requirements are growing sharply because of AI. This demand led Microsoft to sign a 20-year power purchase agreement with Constellation Energy to restart a reactor at Three Mile Island, the plant best known for its 1979 reactor meltdown.
Microsoft deal propels Three Mile Island restart, with key permits still needed. (2024, September 21). Reuters.