Fearing “loss of control,” AI critics call for 6-month pause in AI development


An AI-generated image of a globe that has stopped spinning. (Credit: Stable Diffusion)

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to “immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.” Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, the advances that GPT-4 and Bing Chat represent over previous AI models have spooked some experts, who believe we are heading toward superintelligent AI systems faster than previously expected.

Along these lines, the Future of Life Institute argues that recent advancements in AI have led to an “out-of-control race” to develop and deploy AI models that are difficult to predict or control. They believe that the lack of planning and management of these AI systems is concerning and that powerful AI systems should only be developed once their effects are well-understood and manageable. As they write in the letter:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources.

In particular, the letter poses four loaded questions, some of which presume hypothetical scenarios that are highly controversial in some quarters of the AI community, including the loss of “all the jobs” to AI and “loss of control” of civilization:

Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?

However, the letter does not define what "more powerful than GPT-4" means in practice, nor does it specify a way to measure the relative power of a multimodal or large language model. In addition, OpenAI has specifically avoided publishing technical details about how GPT-4 works.

The Future of Life Institute is a nonprofit founded in 2014 by a group of scientists concerned about existential risks facing humanity, including biotechnology, nuclear weapons, and climate change. In addition, the hypothetical existential risk from AI has been a key focus for the group. According to Reuters, the organization is primarily funded by the Musk Foundation, London-based effective altruism group Founders Pledge, and Silicon Valley Community Foundation.

Notable signatories to the letter confirmed by a Reuters reporter include the aforementioned Tesla CEO Elon Musk, AI pioneers Yoshua Bengio and Stuart Russell, Apple co-founder Steve Wozniak, Stability AI CEO Emad Mostaque, and author Yuval Noah Harari. The open letter is available for anyone on the Internet to sign without verification, which initially led to the inclusion of some falsely added names, such as former Microsoft CEO Bill Gates, OpenAI CEO Sam Altman, and fictional character John Wick. Those names were later removed.
