Hundreds of notable figures, including Nobel Prize winners, military leaders, artists, and members of the British royal family, have signed a statement calling for a halt to work that could lead to superintelligent computers, a level of artificial intelligence that has not yet been achieved but that signatories fear could one day pose risks to humanity.
The statement urges a ban on developing superintelligence until there is broad agreement among scientists that it can be done safely, and until the effort has public support.

The initiative, organized by AI researchers alarmed by the pace of technological advances, has gathered more than 800 signatures from a wide range of individuals.
Signers include Geoffrey Hinton, a Nobel laureate and AI researcher; former Joint Chiefs of Staff Chairman Mike Mullen; rapper will.i.am; former Trump adviser Steve Bannon; and Britain's Prince Harry along with Meghan Markle.
This appeal contributes to a growing movement advocating for a slowdown in AI development at a time when AI is poised to significantly alter various sectors of the economy and culture.
Major tech companies like OpenAI, Google, and Meta are investing billions into new AI technologies and the infrastructure needed to support them, while many businesses are eager to incorporate AI into their products and services.
Some AI experts believe these systems are advancing quickly enough to soon achieve what is known as artificial general intelligence, the ability to perform a wide range of tasks that require human-level intellect.
Superintelligence, it is thought, could follow: AI systems that outperform even the most skilled humans.
The statement was produced by the Future of Life Institute, a nonprofit organization focused on addressing large-scale risks from nuclear weapons, biotechnology, and artificial intelligence.
One of its early supporters was tech entrepreneur Elon Musk, who is currently competing in the AI field with his startup xAI. The institute recently stated that its largest donor is Vitalik Buterin, a co-founder of Ethereum, and it does not accept money from major tech firms or those aiming to create artificial general intelligence.
Anthony Aguirre, the executive director of the institute and a physicist at the University of California, Santa Cruz, expressed concern that AI advancements are happening faster than the public can comprehend.
He remarked that the direction of AI development seems to have been decided by the companies and the economic system without sufficient public input.
Aguirre noted it is surprising that there hasn’t been more discussion about whether society actually wants AI systems that could replace humans.
He said the prevailing narrative has been to simply accept the trajectory of AI development and deal with the outcomes, rather than to weigh the choices available in how these technologies are built.
The statement does not target any specific organization or government. Aguirre hopes to spark a broader conversation that includes not just major AI companies but also political leaders in the U.S., China, and beyond. He pointed out that the pro-industry stance of the Trump administration regarding AI needs to be balanced with public opinion.
According to Aguirre, many people do not want to compete in a race for advanced AI technology. He suggested that there may eventually need to be an international agreement on advanced AI, similar to treaties for other dangerous technologies.
The White House did not immediately respond to a request for comment on the statement before its official release.
A recent poll indicates that Americans have mixed feelings about AI's potential impact: 44% of adults surveyed believe AI will improve their lives, while 42% think it will make their futures worse.
Notably, top tech executives who have made predictions about superintelligence and indicated they are working towards it did not sign the statement.
For instance, Meta CEO Mark Zuckerberg said in July that superintelligence is “now in sight.” Elon Musk posted in February that the emergence of digital superintelligence “is happening in real-time,” even though he has previously warned about the dangers of robots; meanwhile, Tesla, under Musk’s leadership, is developing humanoid robots.
OpenAI CEO Sam Altman expressed that he would be surprised if superintelligence did not emerge by 2030 and mentioned in a blog post that his company is focusing on this goal.
Several tech companies have not yet responded to requests for comments regarding the statement.
Recently, the Future of Life Institute informed reporters that OpenAI had issued subpoenas to it and its president, which they view as retaliation for the institute’s calls for AI oversight.
OpenAI’s Chief Strategy Officer stated that the subpoena was due to suspicions about the funding sources of nonprofit organizations that have criticized OpenAI’s changes.
Other notable signers of the statement include Apple co-founder Steve Wozniak, Virgin Group founder Richard Branson, conservative commentator Glenn Beck, former U.S. national security adviser Susan Rice, Nobel Prize-winning physicist John Mather, Turing Award recipient and AI researcher Yoshua Bengio, and Vatican AI adviser Rev. Paolo Benanti. Several AI researchers from China also added their names to the statement.
Aguirre mentioned that the aim is to gather a diverse range of supporters from across society.
He stressed the importance of this issue, stating it should not be viewed as a niche concern limited to tech enthusiasts in Silicon Valley. Instead, it is a significant topic that affects all of humanity.