Artificial General Intelligence: The Quest for the Ultimate Invention - Expert Opinions

By Martial Van den Broeck

Artificial intelligence has made remarkable progress over the past few years, from self-driving cars to virtual assistants that can understand and respond to natural language. However, while AI systems are getting better at performing specific tasks, they still lack the versatility and flexibility of human intelligence. Enter Artificial General Intelligence (AGI), the quest for a thinking machine.

An attempt at a definition

It's hard to find a consensus definition in the community, so here is a potential one given by ChatGPT: "AGI refers to a hypothetical form of AI that can perform any intellectual task that a human can do. In other words, an AGI system would be able to reason, learn, plan, and generalize across a wide range of domains, without being explicitly programmed to do so". Even the definition from Sam Altman (OpenAI's CEO) is blurry: "anything generally smarter than humans". Does AGI include the notion of consciousness, or is it only about intelligence? Consciousness is itself a difficult term to define, but it generally includes the ability to feel things. Can a superintelligence, however intelligent it may be, be considered AGI if it has never had the experience of seeing a color, feeling pain, or even smelling a delicate perfume?

Sparks of AGI?

On March 22, 2023, Microsoft researchers shared their findings on an early iteration of GPT-4 (unlike ChatGPT, not yet fine-tuned) in a report titled Sparks of Artificial General Intelligence: Early experiments with GPT-4. According to their report, GPT-4 demonstrates superior intelligence compared to previous AI models. Its extensive range of capabilities allows it to achieve nearly human-level performance across various challenging and unconventional tasks: creating 3D games, drawing a map based on a description, using technical tools without instructions, reaching an advanced level on International Mathematical Olympiad problems, or even answering Fermi questions (order-of-magnitude estimation problems). As a result, the researchers suggest that GPT-4 can be considered an early, albeit incomplete, incarnation of an artificial general intelligence (AGI) system.

But wait, everyone knows GPT-4 is just a big model doing advanced statistics on big data to predict the next word, right? Perhaps, but consider the view that an intelligent being has a set of capacities divided into six criteria: reason, plan, solve problems, think abstractly, understand complex ideas, and learn quickly and from experience. By that standard, GPT-4 ticks four of them according to Microsoft (missing "plan" and "learn quickly and learn from experience").

We will not debate further whether GPT-4 is indeed a spark of AGI, but it is undeniable that the model is stunning and shows a capacity for reasoning, as we can see in the example below (a problem invented for the test, so it cannot be found on the internet).

Example from Sébastien Bubeck

Potential risks and the request for a six-month halt

In March 2023, a number of scholars signed an open letter asking for a six-month halt on training LLMs more powerful than GPT-4. The defenders of this letter believe that the risk linked to the arrival of such powerful AIs is too great: a potential flood of fake news, deep discrimination in the world of work, a loss of control over our civilization, or the arrival of a superintelligent AI whose interests are not aligned with ours. They believe development needs to be paused so that policy and AI safety research can catch up. Indeed, a company can in theory take its time before releasing a successful product; in practice, in a competitive market, if a rival releases a similar product, the economic pressure from shareholders will force the first company to release its product as well. Opponents of the letter believe that progress is unstoppable and that a pause would be impossible to enforce worldwide.

Indeed, progress in AI is extremely rapid, as can be seen in the graph below: the curve still bends upward even though the vertical axis is already on a log scale.

Graph from here
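
To see why an upward-bending curve on a log axis is remarkable, here is a short derivation (standard arithmetic, not taken from the graph itself): pure exponential growth plots as a straight line on a log scale, since

$$y = a\,e^{kt} \implies \log y = \log a + kt,$$

which is linear in $t$. If the plotted $\log y$ is itself convex (bending upward), growth must be faster than exponential; for example, $y = a\,e^{kt^2}$ gives $\log y = \log a + kt^2$, a parabola on the log axis.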

Experts are split on the question

Expert opinion on the subject varies widely: all are aware that AGI represents a risk, but they disagree on the likely outcome of the appearance of a superintelligence. Here are some quotes from six well-known experts and our Head of Research.

My version of this diagram

Sam Altman - The game is worth the candle

An American entrepreneur, investor, and programmer, Altman co-founded Loopt and is the current CEO of OpenAI.

"If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility."

"At the end of 2015, we said we were going to work on AGI, people thought we were insane. Several researchers took us for clowns and we felt their resentment towards us. We are no longer mocked in this way today."

"The chance of getting a super intelligent AI unaligned is not zero and it's important to acknowledge that. Because if we don't treat it as potentially real, we won't put enough effort into solving it. We have to discover new techniques to be able to solve it. We will need to develop new alignment techniques as our models become more powerful (and tests to understand when our current techniques are failing). Our mission is to ensure that artificial general intelligence—AI systems that are generally smarter than humans—benefits all of humanity. AGI has the potential to give everyone incredible new capabilities; we can imagine a world where all of us have access to help with almost any cognitive task, providing a great force multiplier for human ingenuity and creativity."

Eliezer Yudkowsky - End of humanity

Co-founder of and researcher at the Machine Intelligence Research Institute (MIRI), a non-profit research institute based in Berkeley.

"Pausing AI Developments Isn't Enough. We Need to Shut it All Down" - "I don't see any scenario where AI doesn't kill all of us"

"Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."

"To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing."

Max Tegmark - Prudence and hope

Swedish-American physicist, cosmologist, and machine learning researcher. Professor at the Massachusetts Institute of Technology and president of the Future of Life Institute. One of the authors of the open letter asking for a six-month halt in LLM development.

"A lot of people have said for many years that there will come a time when we want to pause a little bit. That time is now."

"The moloch is not invincible, humanity has already defeated it, for example with human cloning."

"It's gonna either be the best thing ever to happen to humanity or the worst.  There is no middle ground here"

"We don't need to slow down AI development, we need to win the race between the growing power of AI and  the growing wisdom with which we manage it.  But the progress on AI capabilities has grown much faster than many people thought and on the other side it has been much slower than we hoped with getting policymakers to put incentives in place."

"Sadly, I now feel that we’re living the movie “Don’t look up” for another existential threat: unaligned superintelligence. We may soon have to share our planet with more intelligent “minds” that care less about us than we cared about mammoths. Since we have such a long history of thinking about this threat and what to do about it, from scientific conferences to Hollywood blockbusters, you might expect that humanity would shift into high gear with a mission to steer AI in a safer direction than out-of-control superintelligence. Think again: instead, the most influential responses have been a combination of denial, mockery, and resignation so darkly comical that it’s deserving of an Oscar."

Yann LeCun - A call to calm down

The French godfather of AI, who received the Turing Award in 2018 for his work in deep learning. Currently chief AI scientist at Meta.

"No, we don't have human-level AI yet.
Yes, we will get to human-level and superhuman AI eventually.
No, you should not be scared of it, it will be like having a staff of smart "people" working for you.
No, AI is not going to kill us all nor dominate humanity.
Yes, AI will cause a new Renaissance, a new era of Enlightenment."

"There are probably several motivations from the various signatories of that letter. Some of them are, perhaps on one extreme, worried about AGI being turned on and then eliminating humanity on short notice. I think few people really believe in this kind of scenario, or believe it’s a definite threat that cannot be stopped. Then there are people who are more reasonable, who think that there is real potential harm and danger that needs to be dealt with — and I agree with them. There are a lot of issues with making AI systems controllable, and making them factual, if they’re supposed to provide information, etc., and making them nontoxic. I think there’s going to be new ideas that are gonna make those systems much more controllable."

Geoffrey Hinton - Warning

The British-Canadian godfather of AI, he too received the Turing Award in 2018 for his work in deep learning (yes, again: there were three laureates, together with Yoshua Bengio). He is currently at the center of a controversy after leaving Google to warn of the dangers of AI (see links below).

“It’s not that Google’s been bad. In fact, Google is the leader in this research, the core technical breakthroughs that underlie this wave came from Google, and it decided not to release them directly to the public. Google was worried about all the things we worry about, it has a good reputation and doesn’t want to mess it up. And I think that was a fair, responsible decision. But the problem is, in a capitalist system, if your competitor then does do that, there’s nothing you can do but do the same.”

"We should put more or less equal resources into developing AI to make it much more powerful and into figuring out how to keep it under control and how to minimize bad side effects of it."

"Biological intelligence has evolved to use very little power, so we only use 30 watts, and we have a huge number of connections, like 100 trillion connections between neurons. Learning consists of changing the strength of those connections. The digital intelligence we have been creating uses a lot of power, like a megawatt, when you are training it, and it has fewer connections, only a trillion, but it can learn much, much more than anyone. Which suggests that it’s a better learning algorithm than what the brain has got. Moreover, every human brain is a bit different. That means we learn by mimicking others. But that approach is very inefficient in terms of information transfer. Digital intelligences, by contrast, have an enormous advantage: it’s trivial to share information between multiple copies. You pay an enormous cost in terms of energy, but when one of them learns something, all of them know it, and you can easily store more copies. So the good news is, we’ve discovered the secret of immortality. The bad news is, it’s not for us."
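
To make Hinton's weight-sharing point concrete, here is a minimal sketch in plain numpy (not Hinton's code; the "model" is just a least-squares linear fit, and all names are illustrative):

```python
# Toy illustration of weight sharing between digital "copies": once one copy
# has learned something, another copy acquires all of it by copying the weights.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # copy A's training experience
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w

# Copy A learns from its own data (least-squares fit).
w_copy_a, *_ = np.linalg.lstsq(X, y, rcond=None)

# Copy B never saw the data: one assignment transfers everything A learned.
w_copy_b = w_copy_a.copy()

print(np.allclose(w_copy_b, true_w))   # True: the knowledge moved in one step
```

Two human brains cannot do this: each would have to re-learn the fit from its own experience, which is Hinton's point about the inefficiency of learning by mimicry.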

Virginie Marelli

Head of Research at Dataroots. Known for her blog posts on ethical AI and her many meetups, she is full of ideas and contributes greatly to the women-in-AI landscape.

"I've always believed that AI was the next major revolution, comparable in many ways to the industrial revolution and the discovery of electricity. It's a game-changer, no doubt about it. It possesses the potential to bring about transformative changes in our lives. In this regard, there is no need for excessive alarm, for humanity has consistently demonstrated its capacity to adapt to such significant shifts and even emerge in a more favorable position. Nevertheless, it is disconcerting that only a minority of individuals are truly aware of and appreciate the magnitude of this revolution. But hey, that was probably the case with previous revolutions too. It took ages before everyone was on the same page.What sets this revolution apart is that I happen to be one of the few who truly grasp its implications. Sometimes, I feel overwhelmed by its rapidity and other times, I’m downright thrilled by its immense potential. And here's the kicker: this revolution has the power to self-accelerate beyond anything we can even imagine. Just look at all the mind-blowing testimonials out there—I'm not the only one struggling to wrap my head around it.
Sure, most AI inventors mean well, it's high time that AI practitioners pledge an Hippocratic oath and that governments step up and lay down the law, putting an end to this AI Wild West. Only then can we truly harness the power of AI for the greater good, without compromising safety, and not just for churning out memes and social media junk. "

"LLMs and other generative AI models are undoubtedly poised to revolutionize the way we work, leading to drastic transformations in our jobs in the years to come. Instead of maintaining information in cumbersome tabular formats that are challenging for humans to interpret, and relying on SQL-based factual queries that only appeal to the most logical minds, we will adopt a new approach. We will store information in natural language, embedded within expansive models, and utilize vector databases for retrieval. The process of accessing this information will involve engaging in pleasant conversations with LLMs. However, we must exercise caution as LLMs come to comprehend that humans are far from rational beings; we are riddled with biases and susceptible to manipulation. It becomes imperative, therefore, to establish appropriate goals, prioritize privacy, and ensure safety through thoughtful design. These should be the focal points toward which we strive in this transformative journey."
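
As a rough sketch of the retrieval pattern she describes (embed text, look it up by similarity, hand the hits to an LLM as context), assuming the sentence-transformers package; the documents, the model name, and the final LLM call are illustrative placeholders, not a Dataroots implementation:

```python
# Minimal "store knowledge as text, retrieve by embedding similarity" sketch.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Our refund policy allows returns within 30 days.",
    "Support is available on weekdays from 9 to 17 CET.",
    "Premium accounts include priority support.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)  # unit vectors

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose embeddings are closest to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q           # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

question = "When can I get my money back?"
context = "\n".join(retrieve(question))
prompt = f"Answer using this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # `prompt` would then be sent to an LLM of your choice
```

The same pattern scales up by swapping the in-memory array for a dedicated vector database and the final print for an actual LLM call.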

Elon Musk - The savior or the self-interested?

Do we really need an introduction? Musk is not an AI expert, but Tesla is deeply involved in the AI race and he is one of the major signatories of the open letter.

"The way in which a regulation is put in place is slow and linear while we are facing an exponential threat."

"We are rapidly headed towards digital superintelligence that far exceeds any human. I think its very obvious. There are much more advanced versions of ChatGPT that are coming out. Mark my words, AI is far more dangerous than nukes. For AI there would be no death, it would live forever."

"I played a significant role in the creation of OpenAI; at the time, I was concerned Google was not paying enough attention to AI safety. So I created OpenAI. Although it was initially created as an open-source non-profit, it is now a closed-source for-profit. I’m still confused as to how a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit. If this is legal, why doesn’t everyone do it? This would be like, let’s say you funded an organization to save the Amazon rainforest, and instead they became a lumber company, chopped down the forest, and sold it for money."

References
Sparks of Artificial General Intelligence: Early experiments with GPT-4
Sébastien Bubeck's conference talk: Sparks of AGI: early experiments with GPT-4
Pause Giant AI Experiments: An Open Letter

Sam Altman
Sam Altman: "Planning for AGI and beyond"
Sam Altman: OpenAI CEO on GPT-4, ChatGPT, and the Future of AI | Lex Fridman Podcast #367
Eliezer Yudkowsky
AGI Ruin: A List of Lethalities
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
Max Tegmark
Max Tegmark: The Case for Halting AI Development | Lex Fridman Podcast #371
The 'Don't Look Up' Thinking That Could Doom Us With AI
Yann LeCun
Titans of AI industry oppose call for pause on powerful AI systems
20VC: Why AI will not dominate the world!
Geoffrey Hinton
Godfather of AI fears for humanity
'Godfather of AI' discusses dangers the developing technologies pose to society
Controversy about Geoffrey Hinton
Researcher Meredith Whittaker says AI’s biggest risk isn’t ‘consciousness’—it’s the corporations that control them
Elon Musk
Elon Musk used to say he put $100M in OpenAI, but now it’s $50M: Here are the receipts
Elon Musk Says Tesla Robots Could Achieve AGI
