Sam Altman, Geoffrey Hinton, Demis Hassabis And Others Warn Against Risk Of Extinction Due To AI In A New Open Letter

Leading AI researchers, engineers, and CEOs, including Sam Altman, Geoffrey Hinton, and Demis Hassabis, have raised fresh concerns about the existential danger AI poses to humanity.

22-Word AI Warning Statement by Top AI Researchers and CEOs

Their statement, pared down to just 22 words, reads: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

The statement was published by the Center for AI Safety, a San Francisco-based non-profit, and co-signed by Google DeepMind CEO Demis Hassabis, OpenAI CEO Sam Altman, and the Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio.

The statement adds to the ongoing and contentious discussion surrounding AI safety. Some of the signatories had previously endorsed a different open letter that advocated for a six-month “pause” in AI development.

That earlier letter drew criticism from multiple angles: some experts felt it overstated the risks posed by AI, while others agreed the risks were real but rejected the remedy the letter proposed.

Dan Hendrycks, who leads the Center for AI Safety, told The New York Times that the statement was kept short on purpose. It deliberately offers no specific measures for reducing AI risk; Hendrycks explained that proposing too many options would have diluted the core message the group wanted to convey.

Hendrycks described the message as a “coming out” for industry figures worried about AI’s risks. There is a misconception, he said, that only a handful of people in the AI community are concerned, when in reality many privately share these worries.

The details of this debate are long-running and largely speculative. It centers on hypothetical scenarios in which AI systems grow far more capable and can no longer be kept safe. Proponents point to rapid advances in systems such as large language models as evidence that AI will keep getting more intelligent, and argue that once systems reach a certain level of sophistication, controlling their behavior may become impossible.

Skeptics dispute these forecasts, pointing out that AI systems still struggle with comparatively bounded tasks such as driving a car: despite years of research and enormous investment, fully self-driving cars remain far from a widespread reality. If AI cannot master this one challenge, they argue, it is unlikely to match the full range of human abilities anytime soon.

At the same time, both proponents and skeptics of existential AI risk agree that AI systems already pose real threats, even without any further advances in capability: enabling mass surveillance, powering flawed “predictive policing” algorithms, and making misinformation and disinformation easier to spread.
