Elon Musk, Yoshua Bengio, Steve Wozniak, Andrew Yang, And 1,100+ Others Signed An Open Letter Asking All AI Labs To Immediately Pause Giant AI Training For At Least 6 Months

More than 1,100 people, including Elon Musk, Steve Wozniak, Stuart Russell, Yoshua Bengio, and Andrew Yang, have signed an open letter urging “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The letter argues that the required “level of planning and management” is currently “not happening,” and that AI labs have instead spent recent months locked in an “out-of-control race” to develop and deploy ever more powerful digital minds that not even their creators can reliably understand, predict, or control.

The letter continues: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.”

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter reads. “This confidence must be well justified and increase with the magnitude of a system’s potential effects.” It points to OpenAI’s recent statement on artificial general intelligence, which says that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models,” and adds: “We agree. That point is now.”

The letter’s central demand follows: “Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

Many of the signatories are themselves AI researchers, and the letter is explicit that a voluntary pause is only the first preference: if it “cannot be enacted quickly, governments should step in and institute a moratorium.”

The letter is noteworthy not only for who has signed it, including engineers from Meta and Google, Stability AI founder and CEO Emad Mostaque, and non-tech professionals such as an electrician and an esthetician, but also for who has not. No one from OpenAI, the lab behind the large language model GPT-4, has signed it, nor has anyone from Anthropic, which was founded by former OpenAI employees to build a “safer” AI chatbot.

Some Of The Signatories Are:

  • Yoshua Bengio, University of Montréal, Turing Laureate for developing deep learning, head of the Montreal Institute for Learning Algorithms
  • Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook “Artificial Intelligence: a Modern Approach”
  • Elon Musk, CEO of SpaceX, Tesla & Twitter
  • Steve Wozniak, Co-founder, Apple
  • Yuval Noah Harari, Author and Professor, Hebrew University of Jerusalem
  • Andrew Yang, Forward Party, Co-Chair, Presidential Candidate 2020, NYT Bestselling Author, Presidential Ambassador of Global Entrepreneurship
  • Connor Leahy, CEO, Conjecture
  • Jaan Tallinn, Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute
  • Evan Sharp, Co-Founder, Pinterest
  • Chris Larsen, Co-Founder, Ripple
  • Emad Mostaque, CEO, Stability AI
  • Valerie Pisano, President & CEO, MILA
  • John J Hopfield, Princeton University, Professor Emeritus, inventor of associative neural networks
  • Rachel Bronson, President, Bulletin of the Atomic Scientists
  • Max Tegmark, MIT Center for Artificial Intelligence & Fundamental Interactions, Professor of Physics, president of Future of Life Institute
  • Anthony Aguirre, University of California, Santa Cruz, Executive Director of Future of Life Institute, Professor of Physics
  • Victoria Krakovna, DeepMind, Research Scientist, co-founder of Future of Life Institute
  • Emilia Javorsky, Physician-Scientist & Director, Future of Life Institute
  • Sean O’Heigeartaigh, Executive Director, Cambridge Centre for the Study of Existential Risk
  • Tristan Harris, Executive Director, Center for Humane Technology
  • Marc Rotenberg, Center for AI and Digital Policy, President
  • Nico Miailhe, The Future Society (TFS), Founder and President
  • Zachary Kenton, DeepMind, Senior Research Scientist
  • Ramana Kumar, DeepMind, Research Scientist

Check out the full list here.

