Today’s most advanced AI models have many flaws, but decades from now, they will be recognized as the first true examples of artificial general intelligence.
Blaise Agüera y Arcas and Peter Norvig have an essay arguing that Artificial General Intelligence Is Already Here. Their point is that the latest systems like ChatGPT are far more general than previous narrow AIs. They may not be as general as a human, at least without embodiment, but they can do all sorts of textual tasks, including tasks not deliberately programmed into them. Their generality shows up in several ways: they handle all sorts of topics, perform many different types of tasks, work across modalities (images, text …), have broad language ability, and can be instructed in natural language.
The essay also lists reasons why people are still reluctant to admit that we have a form of AGI:
“A healthy skepticism about metrics for AGI
An ideological commitment to alternative AI theories or techniques
A devotion to human (or biological) exceptionalism
A concern about the economic implications of AGI”
To some extent the goal posts move as AIs solve different challenges. We used to think playing chess well was a sign of intelligence; now that we know how a computer can do it, it no longer seems like a test of intelligence.