From the Open Philanthropy site I came across this older (2020) Vox article, The case for taking AI seriously as a threat to humanity by Kelsey Piper. The article nicely summarizes some of the history of concerns around AGI (Artificial General Intelligence), as people tend to call an AI so advanced that it is comparable to human intelligence. This history goes back to Turing’s colleague I.J. Good, who speculated in 1965 that,
An ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
Such an explosion was named the Singularity by Vernor Vinge and popularized by Ray Kurzweil.
I came across this while following threads on the whole issue of whether AI will soon become an existential threat. The question of the dangers of AI (whether AGI or just narrow AI) has received a lot of attention, especially since Geoffrey Hinton ended his relationship with Google so that he could speak freely about it. He and others signed a short statement published on the site of the Center for AI Safety,
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
The existential question only becomes relevant if one believes, as many do, that AI research and development is moving so fast that it may soon achieve some level of generality, at which point such an AGI could begin to act in unpredictable and dangerous ways. Alternatively, people could misuse such powerful AGIs to harm us. Open Philanthropy is one group that is focused on Potential Risks from Advanced AI. They could be classed as an organization with a longtermist view, the view that ethics (and philanthropy) should give weight to long-term issues.
Advances in AI could lead to extremely positive developments, but could also potentially pose risks from intentional misuse or catastrophic accidents.
Others have called for a Manhattan Project for AI Safety. There are, of course, those (including me) who feel that this is distracting from the immediate unintended effects of AI, and/or that there is little existential danger for the moment because AGI is decades off. The cynic in me also wonders how much the distraction is intentional, as it both hypes the technology (it’s dangerous, therefore it must be important) and justifies ignoring stubborn immediate problems like racist bias in the training data.
Kelsey Piper has in the meantime published A Field Guide to AI Safety.
The question still remains whether AI is dangerous enough to merit the sort of ethical attention that nuclear power, for example, has received.