“AI can save humanity, but only if the people control it” — Ben Goertzel


With the recent release of the iPhone 16, which Apple has promised is optimized for artificial intelligence, it’s clear that AI is once again front of mind for the average consumer. Yet the technology remains rather limited compared with the vast capabilities that the most forward-thinking AI technologists anticipate will be achievable in the near future.

As much excitement as there is around the technology, many still fear the potentially negative consequences of integrating it so deeply into society. One common concern is that a sufficiently advanced AI could determine humanity to be a threat and turn against us all, a scenario imagined in many science fiction stories. However, according to one leading AI researcher, most of these concerns can be alleviated by decentralizing and democratizing AI’s development.

On Episode 46 of The Agenda podcast, hosts Jonathan DeYoung and Ray Salmond separate fact from fiction by speaking with Ben Goertzel, the computer scientist and researcher who first popularized the term “artificial general intelligence,” or AGI. Goertzel currently serves as the CEO of SingularityNET and the ASI Alliance, where he leads the projects’ efforts to develop the world’s first AGI.

The true power of artificial general intelligence

Goertzel defined an AGI as “an AI that can do the whole scope of everything that people can do, including the human ability to leap beyond what we’ve been taught.” This differs from the narrower AIs that have been around for years, which “do some highly specific things but don’t try to have the whole broad scope of a human mind.”

While large language models like ChatGPT are capable of performing many general tasks, they don’t qualify as artificial general intelligence because they don’t venture far beyond their training, Goertzel said.