What is artificial general intelligence, and is it a useful concept?


If you take even a passing interest in artificial intelligence, you will inevitably have come across the notion of artificial general intelligence. AGI, as it is often known, has ascended to buzzword status over the past few years as AI has exploded into the public consciousness on the back of the success of large language models (LLMs), a form of AI that powers chatbots such as ChatGPT.

That is largely because AGI has become a lodestar for the companies at the vanguard of this type of technology. ChatGPT creator OpenAI, for example, states that its mission is “to ensure that artificial general intelligence benefits all of humanity”. Governments, too, have become obsessed with the opportunities AGI might present, as well as possible existential threats, while the media (including this magazine, naturally) report on claims that we have already seen “sparks of AGI” in LLM systems.

Despite all this, it isn’t always clear what AGI really means. Indeed, that is the subject of heated debate in the AI community, with some insisting it is a useful goal and others that it is a meaningless figment that betrays a misunderstanding of the nature of intelligence – and our prospects for replicating it in machines. “It’s not really a scientific concept,” says Melanie Mitchell at the Santa Fe Institute in New Mexico.

Artificial human-like intelligence and superintelligent AI have been staples of science fiction for centuries. But the term AGI took off around 20 years ago, when it was used by computer scientists Ben Goertzel and Shane Legg, cofounder of…


