> We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.
The Future of Life Institute is calling on AI labs to pause in an open letter, “Pause Giant AI Experiments: An Open Letter,” signed by over 1,000 people (including myself). The letter asks for a pause so that safety protocols can be developed:
> AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.
This letter to AI labs follows a number of essays and opinion pieces suggesting that maybe we are moving too fast and should show restraint, all in the face of the explosive interest in large language models since ChatGPT:
- Gary Marcus wrote an essay on his Substack, “AI risk ≠ AGI risk,” arguing that just because we don’t have AGI doesn’t mean there isn’t risk associated with the mediocre AI systems we do have.
- Yuval Noah Harari (with Tristan Harris and Aza Raskin) wrote an opinion piece in The New York Times, “You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills,” about the dangers of AIs manipulating culture:
> We have summoned an alien intelligence. We don’t know much about it, except that it is extremely powerful and offers us bedazzling gifts but could also hack the foundations of our civilization. We call upon world leaders to respond to this moment at the level of challenge it presents. The first step is to buy time to upgrade our 19th-century institutions for a post-A.I. world and to learn to master A.I. before it masters us.
- Erik Hoel has a thorough Substack essay, “I am Bing, and I am evil,” asking why we aren’t panicking.
- Geoffrey Hinton, when asked whether AI could wipe us out, replied, “I think it’s not inconceivable. That’s all I’ll say.”
- Even Elon Musk and Emad Mostaque (of Stability AI) are calling for a pause.
It is worth wondering whether the letter will have an effect, and if it doesn’t, why we can’t collectively slow down and safely explore AI.