A group of prominent AI ethicists has written a counterpoint to a recent letter requesting a six-month “pause” on AI development. They criticized the letter’s focus on hypothetical future threats, arguing that misuse of the technology is already causing real harm today.
The open letter from the Future of Life Institute, signed by more than 1,100 people, including Elon Musk, Andrew Yang, Stuart Russell, and Steve Wozniak, proposed a pause on the training of AI models more powerful than GPT-4. The letter cited concerns about a potential “loss of control of our civilization,” among other risks.
The counterpoint was co-authored by Timnit Gebru, Emily M. Bender, Angelina McMillan-Major, and Margaret Mitchell, all well-known figures in AI and ethics. Gebru and Mitchell were previously pushed out of Google over a paper critical of the harms of large AI models, a paper Bender and McMillan-Major co-authored. The four now collaborate at the DAIR Institute, a new research institution focused on studying and preventing AI-related harms.
None of the four signed the open letter. They have since published a rebuke criticizing its failure to address the existing problems already caused by the technology.
In their response, the ethicists argued that the letter’s focus on hypothetical future risks reflects a dangerous ideology called longtermism, and that it disregards the actual harms AI systems cause today. They cited examples such as worker exploitation, data theft, synthetic media that props up existing power structures, and the further concentration of power in the hands of a few.
The ethicists argued that the concerns about a potential Terminator or Matrix-like robot apocalypse were a distraction from the real issues. They pointed out that reports of AI technology misuse, such as Clearview AI being used by the police to frame an innocent man, were already occurring. They emphasized that there was no need to worry about a T-1000 when technologies like Ring cameras could be accessed through online rubber-stamp warrant factories.
The DAIR Institute agrees with some of the objectives outlined in the open letter, such as identifying synthetic media, but stresses the importance of acting now to address the problems AI is already causing, using remedies that are already available:
What we need is regulation that enforces transparency. Not only should it always be clear when encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures. The onus of creating safe tools should be on the companies that build and deploy generative systems, which means that builders of these systems should be made accountable for the outputs produced by their products.
The current race towards ever larger “AI experiments” is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive. The actions and choices of corporations must be shaped by regulation that protects the rights and interests of people.
It is indeed time to act: but the focus of our concern should not be imaginary “powerful digital minds.” Instead, we should focus on the very real and present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.
This sentiment was also expressed by Jessica Matthews, the founder of Uncharted Power, at the recent AfroTech event in Seattle. She emphasized that people should not be afraid of AI but rather the individuals who are developing it.
It is highly unlikely that any major company will agree to pause its AI research in response to the open letter. Nevertheless, the engagement the letter has received demonstrates that the risks of AI, both real and hypothetical, are a significant concern across many sectors of society. If the companies building these systems refuse to act, someone else may have to do so on their behalf.