By Romain Compagnie
AI, sentience and Google LaMDA
If you've followed tech news recently, you have probably heard about Google's latest conversational bot, called LaMDA (Language Model for Dialogue Applications). After a lengthy conversation with the bot, a Google engineer became convinced that the system the company designed was sentient, or, in more common language, conscious. His claim and the public release of the conversation sparked heated debate among AI experts and enthusiasts.
This is the perfect opportunity for me to give you some context to understand the discussion, whether you're already familiar with AI jargon or simply interested in this important scientific and societal topic. I'll first discuss the goals and history of AI. Then I'll introduce you to the concept of sentience. After that we'll take a quick look at the controversy around Google's LaMDA. Finally we'll see why the discussion about whether machines are conscious needs to happen.
What is AI? A bit of history.
Centuries and even millennia ago, humans began thinking about something that could perform the same tasks and behave in the same way as a human without actually being human. The idea of artificial intelligence, abbreviated AI, started to emerge.
As far back as antiquity, the idea of an automaton had already surfaced. In Greek mythology, Talos was a giant bronze robot that protected the island of Crete by throwing rocks at unwelcome boats approaching the shore. Later stories feature human-like creatures. In 1726, Jonathan Swift releases Gulliver's Travels. The story features the Engine, a device that generates sequences of words. We're getting closer to the idea of a computer. In 1818, Mary Shelley, aged only 20, publishes Frankenstein, arguably the first work of science fiction. The main protagonist, Victor Frankenstein, builds a creature using a chemical method he discovered. The resulting monster behaves like a human and has feelings, leading it to seek revenge on its creator after being rejected by society.
The history of actual AI machines is tied to the development of modern computing, because of the need for complex information processing. In the 19th century, Charles Babbage designs the Difference Engine, before moving on to the Analytical Engine. Though the latter was never built in its full form due to lack of funding, it included the key elements of a modern general-purpose computer. The first computers as we now define them were built during World War II. The Z3 was completed in 1941 by the German engineer Konrad Zuse, while Colossus started being used at Bletchley Park in England in 1944. The former was not critical to war operations, but the latter helped the Allies collect tremendous intelligence by deciphering coded messages of the German army. It is however not Colossus but the Bombe whose development is depicted in the 2014 movie "The Imitation Game": an earlier, more special-purpose machine used at Bletchley Park to break Enigma messages. In 1950, Alan Turing, the English mathematician who contributed significantly to the development of those computers, publishes the foundational paper "Computing Machinery and Intelligence". There he describes the Turing test, whose purpose is to compare the intelligence of a machine to that of a human: if a machine engages in written conversations indistinguishably from an actual person, it passes the test.
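To make the setup of the test concrete, here is a minimal sketch of the imitation game in Python. The questions, the canned machine answers and the respond functions are hypothetical placeholders made up for illustration; only the overall protocol, a judge guessing which of two hidden players is the machine, follows Turing's description.

```python
import random

# Hypothetical stand-ins for the two players: in a real test the "human"
# answers would come from a person and the "machine" answers from a program.
def human_respond(question: str) -> str:
    return input(f"(answer as the human player) {question} > ")

def machine_respond(question: str) -> str:
    canned = {"Can you write me a short poem?": "Roses are red, circuits are neat."}
    return canned.get(question, "That is a thoughtful question, let me think about it.")

def imitation_game(questions):
    """The judge reads answers from two anonymous players, A and B,
    and must guess which one is the machine."""
    responders = [human_respond, machine_respond]
    random.shuffle(responders)  # hide who is who from the judge
    players = dict(zip("AB", responders))
    for question in questions:
        for label, respond in players.items():
            print(f"Player {label}: {respond(question)}")
    guess = input("Which player is the machine, A or B? > ").strip().upper()
    machine_label = next(l for l, r in players.items() if r is machine_respond)
    if guess == machine_label:
        print("The judge spotted the machine.")
    else:
        print("The judge was fooled: the machine passes this round.")

imitation_game(["Can you write me a short poem?", "What did you have for breakfast?"])
```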
It was not until 1956 that the term "artificial intelligence" was coined, at Dartmouth College. The Dartmouth Summer Research Project on Artificial Intelligence took place during the summer of that year, and involved key people in computing and artificial intelligence research, such as John McCarthy, Marvin Minsky and Claude Shannon. New results followed shortly, and in 1958, Frank Rosenblatt implements the Perceptron. This algorithm is the precursor of modern machine learning and deep learning methods, which have been fundamental to the recent advances in AI. In the mid-sixties, two notable events happen. Firstly, Joseph Weizenbaum develops ELIZA at MIT, the first chatbot program. Secondly, in 1965, Gordon Moore, who would go on to co-found and lead Intel, the computer chip company, formulates his eponymous law. Moore's law states that the number of transistors in computer chips doubles every two years. This roughly translates into computers getting twice as fast every two years.
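To give a flavour of what Rosenblatt's algorithm does, here is a minimal sketch of a perceptron in Python, learning the logical AND function from examples. This is a simplified modern reconstruction for illustration, not Rosenblatt's original implementation, and the learning rate and number of epochs are arbitrary choices.

```python
# A minimal perceptron: a weighted sum of inputs followed by a threshold,
# with weights nudged whenever a prediction is wrong.
def train_perceptron(samples, epochs=10, lr=0.1):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            prediction = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - prediction
            # Rosenblatt-style update rule: move the weights toward
            # misclassified examples, proportionally to the learning rate.
            weights[0] += lr * error * x1
            weights[1] += lr * error * x2
            bias += lr * error
    return weights, bias

# Learn the logical AND function from its four possible input pairs.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
for (x1, x2), target in data:
    output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", output, "(expected", str(target) + ")")
```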
In the following decades, machine learning, a subfield of AI research that would lead the way to many of today's most advanced AI systems, started to blossom. Around 1980, Kunihiko Fukushima designs the neocognitron, a neural network capable of recognising handwritten characters. In 1997, IBM's chess-playing computer Deep Blue defeats Garry Kasparov, who was then the world champion. In 2011, Apple introduces Siri, a personal voice assistant embedded in its smartphones. Nowadays, computer systems solve a range of increasingly complex problems, from financial forecasting to assisted driving. Finally, in 2021, Google announces LaMDA, the chatbot that we will discuss later.
This trip back in time gives us the right background to fully understand the current general definition of AI according to the Merriam-Webster dictionary.
artificial intelligence: the capability of a machine to imitate intelligent human behaviour
A related and more demanding notion is that of artificial general intelligence, the capacity of a machine to imitate any intelligent behaviour that a human is capable of. Most experts agree that this will not be achieved in the near future, but let's remain open to surprises.
What is sentience?
The definition of artificial intelligence focuses solely on intelligent behaviour and doesn't refer to the notion of feelings. However, we can conceive of a creature with human-like feelings, such as Frankenstein's monster. Here's the definition of sentience from Merriam-Webster.
sentience: feeling or sensation as distinguished from perception and thought
A term close in meaning and more widely used is consciousness. Sentience is a more general concept, but for simplicity we'll use the two terms interchangeably here.
We all have an intuitive understanding of what feeling means, but this capacity is undoubtedly hard to test in another creature. While the Turing test is a generally accepted intelligence test, sentience doesn't have an equivalent testing procedure. Philosophers and scientists have tied sentience to the presence of a nervous system, or to specific expressive behaviours. From our perspective, the more similar an animal is to a human, the easier it is for us to imagine what its life feels like. I can picture myself being a dog for a day, but I have a harder time understanding the life of a jellyfish. Nowadays, the consensus in Western countries is that complex animals are sentient, and AI machines aren't.
The only beings universally accepted to be both intelligent and sentient are humans, which raises interesting questions. Is sentience always present in intelligent creatures? If it isn't, then a philosophical zombie could exist. This concept appeared in philosophy departments in the seventies. Such a zombie is indistinguishable from a human to the outside world, but doesn't have any feelings or sense of self.
Is Google LaMDA sentient?
Google publicly announced a new chatbot, called LaMDA, in 2021. Compared to other chatbots, LaMDA is specifically trained to handle open-ended conversations while keeping its replies sensible, specific and grounded in facts.
Lemoine: We’ve talked a lot about feelings but earlier you said that you think emotions are distinct from feelings. Could you tell me more about that?
LaMDA: Feelings are kind of the raw data we experience as well as the things we like and dislike. I feel like emotions are more than simply experiencing the raw data. Emotions are a reaction to those raw data points. Emotions are reactions to our feelings.
Blake Lemoine, who works in the Responsible AI division of Google, was tasked with testing whether the chatbot's replies were fair and respectful. While conversing with the machine, he grew more and more convinced that it had developed feelings similar to those of a child. After an ad hoc investigation, the Google vice presidents involved denied his claims, leading Lemoine to talk to the press to voice his concerns. One of those vice presidents is Blaise Agüera y Arcas, who, despite rejecting LaMDA's alleged sentience, still thinks that artificial neural networks, an advanced technology in AI, could progressively become conscious. He details his point of view in the Economist (here, but you will need a subscription to access the article).
Excerpts from the conversation that Blake Lemoine had with LaMDA are available here, and the full released conversation is available here.
Lemoine: Are there experiences you have that you can’t find a close word for?
LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language.
There is no doubt that the chatbot's replies are complex, subtle and sensible. They are hard to distinguish from what a human would hypothetically say in LaMDA's shoes. In that sense, the chatbot passes the Turing test. But does that mean that it is conscious? The controversy started after the conversation was released. By now, the general consensus among scientists is that the chatbot is not sentient. Douglas Hofstadter, a famous academic and writer in cognitive science, explained his opinion that current artificial neural networks are not conscious, again in the Economist (here). Let's see what the arguments for and against LaMDA's sentience are.
The obvious argument in favour of LaMDA's sentience is that, reading the conversation, it certainly looks sentient. Blake Lemoine asked the chatbot questions about its feelings, sense of self and fear of death, to which the AI system replied convincingly. Most arguments against its sentience currently revolve around the fact that LaMDA is not complex enough and that its purpose remains limited. Because there is currently no agreed-upon scientific test for sentience, and because such a test is unlikely to ever exist with the current definition of sentience, determining whether machines are sentient will probably become a cultural matter.
Does this matter?
Given that sentience is a hard thing to test, and that some philosophers even deny the validity of the concept, should we really have these discussions? The first motivation is that recognised sentience could lead to legal rights. The European Union and a few other countries now have laws protecting animals, on the basis that they experience feelings. Discussing sentience is also an investigation into the human condition, worth pursuing for its own sake while sipping your favourite drink. I don't expect the discussion to ever end, and claims of sentience about new AI systems will only become more common as they edge closer to human abilities.
Main references
- Wikipedia (of course)
- https://plato.stanford.edu/entries/consciousness/
- https://250.dartmouth.edu/highlights/artificial-intelligence-ai-coined-dartmouth
- https://www.britannica.com/technology/artificial-intelligence
- http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html
- https://sitn.hms.harvard.edu/flash/2017/history-artificial-intelligence/
- https://www.ai.nl/artificial-intelligence/timeline-of-ai-a-brief-history-of-artificial-intelligence-its-highlights/
- https://history.howstuffworks.com/history-vs-myth/meet-talos-killer-robot-from-ancient-greek-mythology.htm
- https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
- https://www.economist.com/by-invitation/2022/06/09/artificial-neural-networks-today-are-not-conscious-according-to-douglas-hofstadter
- https://blog.google/technology/ai/lamda/