How AGI (AI with an IQ) may impact humanity
In 2015, Stephen Hawking, Elon Musk, and a host of AI researchers signed the “Open Letter on Artificial Intelligence,” calling for research on the societal impacts of AI. They and others believed that superhuman artificial intelligence could provide incalculable benefits but could also end the human race if deployed carelessly. The letter called for concrete research on potential pitfalls and ways to “mitigate existential risks facing humanity” that could come from AI.
Back in 2015, though, most people were barely even aware of AI. It powered our Alexa units and provided autocomplete on our phones. Other AI systems ran some customer support call systems and checked our grammar in word processors. At the time, few people outside the tech and engineering sectors gave AI much thought.
Then, DALL-E and Midjourney appeared, followed by ChatGPT. Now, AI-generated content is everywhere.
Alongside this omnipresent AI comes a flood of articles and news reports warning us that AI is dangerous. Governments are debating whether to regulate AI systems, and companies are putting guardrails around AI output. The problem is that most of these reports don’t distinguish between generative AI and a more capable form of AI known as Artificial General Intelligence, or AGI.
Just how dangerous is AI? Are all the reports correct, or is it sensationalist fearmongering? Let’s take a look.
The different types of AI
Before we get too deep into this discussion, it might be helpful to define our terminology.
- Generative AI, such as the large language models (LLMs) behind ChatGPT, creates output that sounds like a person wrote it, but the AI has no inherent understanding of the content and no ability to act upon it. (This is where AI technology is now.)
- Artificial general intelligence (AGI), also called autonomous AI, comes closer to reasoning like a person than LLMs do and can connect different pieces of information to learn and evolve.
- Sentient AI is a computer that believes it’s a person (or something akin to a human being), may express original thoughts or emotions, and can make autonomous decisions without any human input or interaction.
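To make the first definition concrete: today’s generative models are, at their core, next-token predictors that learn statistical patterns from existing text. The following toy bigram model is a deliberately simplified sketch of that idea (real LLMs use neural networks over billions of parameters, not a lookup table), but it shows how plausible-sounding output can emerge with no understanding at all:

```python
import random

def train_bigrams(text):
    # Record which word follows which in the training text
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=8, seed=0):
    # Repeatedly pick a word that followed the previous word in training
    random.seed(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # dead end: this word never had a successor
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

The generator never “knows” what a cat or a mat is; it only replays observed word-to-word transitions, which is a miniature version of the remixing described below.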
It’s worth noting that almost no one in the computing or AI fields believes ChatGPT (or other AI that we’re seeing in the wild) has reached AGI status, much less sentience.
Blake Lemoine, an engineer working with Google’s Language Model for Dialogue Applications (LaMDA), claimed that Google’s model was sentient. He drew his conclusions based on the AI’s responses within extended conversations, in which the AI responded with comments that seemed to evoke emotion and personality. After Lemoine made those claims, Google fired him, and most engineers now recognize that Lemoine had misinterpreted the results of those interactions.
Lemoine aside, most researchers agree that generative AI simply remixes existing visual or written content, albeit in clever and surprising ways that no one fully understands.
In a humorous piece about AI, author Chuck Wendig explained our current AI tech both eloquently and accurately:
“AI is not intelligent. The intelligence is not merely artificial; it is artifice. Fake. A puppet, a simulacrum, a wax statue. It’s a mimic, worst of all. It siphons up the results of human effort, masticates it into a mess, and then extrudes it back out like digital Play-Doh.”
From this standpoint, there isn’t a solid reason to fear the technology behind generative AI, as ChatGPT isn’t going to set off a new nuclear arms race (probably).
However, people using generative AI have already shown its potential to assist in writing malicious code, and the technology itself can contribute to job losses, misinformation, and other harms. How people use the technology, then, is certainly a concern.
How intelligent is AI today?
When discussing AI’s dangers, the degree of intelligence a system possesses is the pivotal differentiator.
As noted by neuroscientist and writer Erik Hoel, “intelligence is the most dangerous quality in existence.” Hoel argues that intelligence is an even greater danger than atomic bombs, as human intelligence led to the creation of such weapons.
And as AI continues to evolve, its capacity to aid or harm magnifies. In fact, OpenAI, the team behind ChatGPT, recently stated that “now is a good time to start thinking about the governance of superintelligence.”
If AI’s potential for harm is proportional to its intelligence, it’s worth asking: just how intelligent is AI these days?
According to most experts, AI technology today isn’t anywhere near AGI levels, but it’s getting better all the time.
- AI platforms: OpenAI has projected a timeline of a decade before AI surpasses human intelligence. Meanwhile, Kanjun Qiu, CEO of Generally Intelligent, highlighted the challenges AI currently faces in executing tasks simple for humans, such as coordinating a meeting.
- Scientists and AI engineers: Researchers from Stanford, as described by IEEE Spectrum, agree, stating that generative AI is far from AGI and dismissing contrary beliefs as “wishful thinking.”
- The Coffee Test: Jurgen Gravestein, a conversation designer who teaches AI to talk, explained how the ‘Coffee Test’ devised by Apple co-founder Steve Wozniak can be instructive about the state of AI. The test asks whether a machine can navigate an unfamiliar household and brew a cup of coffee, something simple for an average human but difficult for a machine.
Based on these and many other expert assessments, AI today isn’t intelligent enough to become autonomous or pose a threat on its own. But what about the future?
The inevitability of AGI
We might not have to contend with AGI just yet, but it will be here eventually. With generative AI everywhere, the race is on, and there’s no forcing the genie back into its bottle now.
For instance, Hoel pointed out that ChatGPT can already provide more detailed responses than Wikipedia, despite its offline nature. And as tech giants like Google and Microsoft further their AI endeavors, tinkering with the tech for search engines, ads, and more, large language models will get even better and less prone to errors.
Industry insiders, including Qiu, expect systems capable of autonomously executing coding or marketing tasks to arrive soon, with some suggesting they could be released within a year.
Should we fear AGI’s arrival?
With all this talk about truly intelligent AI, it may sound like we’re moving into a War Games or Terminator scenario.
Let’s not panic. As we’ve seen, ChatGPT can be used destructively, but it can also be helpful. It makes search engines more useful, it helps content creators work more efficiently, and the potential for other beneficial uses, from education to language translation to assisting people with visual impairments, is vast.
In a blog post from February, representatives from OpenAI wrote:
“If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.
AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity.”
If AGI has the potential for such good, isn’t it worth pursuing?
Addressing AGI’s imminent arrival
It’s important not to sugarcoat things where AI is concerned. AGI does pose a number of dangers, leading many AI experts to argue that we need new laws and procedures governing how we use existing tech and how we develop future AI safely.
As Aravind Srinivas, CEO of Perplexity AI, told Reuters, “There’s so many ways it can go wrong. You have to treat AI like a baby and constantly supervise it like a mom.”
OpenAI has described the possibility of AGI leading to “a serious risk of misuse, drastic accidents, and societal disruption.” Meanwhile, Hoel argued:
“It’s a totally imaginable scenario that large language models stall at a level that is dumber than human experts on any particular subject, and therefore they make great search engines but only mediocre AGIs, and everyone makes a bunch of money. But AI research won’t end there. It will never end, now it’s begun.”
It’s no surprise, then, that most leading experts argue that we need to pressure AI companies to take things slower and consider the implications of each new release.
Geoffrey Hinton, the man some call the ‘Godfather of AI,’ quit his role at Google in May and began to spread the word about the potential dangers AI poses to humanity. Along these lines, Wired described how Joep Meindertsma, head of a database company, created the grassroots group Pause AI to help spread the same message.
Even OpenAI has embraced the idea of regulation, stating, “society and the developers of AGI have to figure out how to get it right.”
And regulation shouldn’t be impossible. Because major AI systems are extremely expensive to build and run, few organizations outside big tech companies such as Google, Microsoft, and Amazon have the resources to create an AGI system. Given that limitation, it should theoretically be easy enough to regulate those companies and their AI output.
After all, it should be much simpler for governments to deal with AI than with foreign countries. As Hoel reminded his readers, “Microsoft does not have an army.”
What does the future hold?
It might feel like we’re living in a science-fiction future with AI everywhere. The age of robots taking over the planet might feel imminent. But not so fast.
Even if AI is a long way off from a dystopian future in which our technology abolishes humanity, the race is on to build faster, smarter AI. Will it happen in our lifetimes? And, more importantly, should we fear it?
It really depends on who you ask.
If you’d like to drill down into the scientific community’s differing opinions on AI, IEEE Spectrum offers a ‘scorecard’ showing where 22 of the world’s experts on computing and AI stand on the question of where the technology is and where it’s headed.
What do you think? Will AI take over and destroy humanity, or will humans get this right and keep AI on a leash? We’d love to hear from you in the comments.