
LaMDA: The hype about Google AI being sentient

By Hindustan Times
Jul 07, 2022 03:50 PM IST

The article has been authored by Sanur Sharma, an associate fellow, Manohar Parrikar Institute for Defence Studies and Analyses.

Artificial Intelligence (AI) has long been seen as the key to imitating the human brain, or even to machines becoming sentient. Recently, Google AI engineer Blake Lemoine's decision to go public with his claims about Google's LaMDA has sparked a discussion on whether AI models can achieve consciousness. More important than this spark of hype, however, is the serious concern the episode raises about AI ethics.

LaMDA: Google’s Language Model for Dialogue Applications.

So, what exactly is LaMDA, and why is it being called sentient?

LaMDA is Google's Language Model for Dialogue Applications. It is a chatbot built on a large, advanced language model that ingests trillions of words from the internet to inform its conversation. It is trained on a massive corpus of text crawled from the internet and is, in effect, a statistical abstraction of all that text. When the model is given some starting text, it relates the words to one another and predicts which words are most likely to come next; it is a suggestive model that simply continues the text you put in. LaMDA has skills similar to the BERT and GPT-3 language models and is built on Transformer, a neural network architecture that Google Research invented in 2017. Models produced with this architecture are trained to read words, sentences and paragraphs, relate words to one another, and predict the words likely to come next in the conversation.
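
LaMDA itself is not publicly available, but the next-word prediction described above can be illustrated with an openly available model. The sketch below uses GPT-2 through the Hugging Face transformers library purely as a stand-in; the model name and prompt are illustrative assumptions, not Google's actual system.

# A minimal sketch of next-word prediction with an open causal language model.
# GPT-2 stands in for LaMDA here purely for illustration; LaMDA is not public.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The weather today is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits      # a score for every word in the vocabulary

next_token_scores = logits[0, -1]        # scores for the word that would come next
top5 = torch.topk(next_token_scores, 5).indices
print([tokenizer.decode(int(t)) for t in top5])  # the five most likely continuations

The model does not "know" anything about the weather; it simply ranks continuations by how often similar word sequences appeared in the text it was trained on.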

So how is it different from other chatbots, which are also designed for conversation? Chatbots are conversational agents meant for specific applications and follow a narrow, predefined path. In contrast, according to Google, “LaMDA is a model for dialogue application capable of engaging in free flow conversations about seemingly endless topics”.

Conversations tend to revolve around specific topics, but because of their open-ended nature they can end up in a completely different domain. According to Google, LaMDA is trained to pick up the nuances of language that distinguish open-ended conversations from other forms, making its responses more sensible. Google's 2020 research stated that a “Transformer based Language Model based on Dialogue could learn to talk about virtually anything”, and that LaMDA could be fine-tuned to improve the sensibleness and specificity of its responses.

Blake Lemoine, who was part of Google's Responsible AI division, worked with a collaborator to test LaMDA for bias and inherent discrimination. He conducted interviews with the model in a prompt-and-response format. While testing the model for hate speech, he queried LaMDA on religion and observed that the chatbot was talking about its rights and personhood; he became convinced that LaMDA was sentient. He further added, “Over the past six months, LaMDA has been consistent in terms of what it wants and what it believes is right as a person. And that LaMDA doesn't want to be used without its consent, and it wants to be useful to humanity”. He worked with a collaborator to present evidence to Google that LaMDA had become sentient. Google vice-president Blaise Aguera y Arcas and the head of Google's responsible innovation, Jen Gennai, looked into his claims and dismissed them. Subsequently, he was put on administrative leave, and that is when he decided to go public.

Google spokesperson Brian Gabriel said, “Our team, including ethicists and technologists, has reviewed Lemoine’s concerns and have informed him that the evidence does not support his claims. He was told there is no evidence that LaMDA was sentient and lots of evidence against it.”

It is clear that these language-based models are highly suggestible, and the questions Lemoine asked are highly leading, which means the model's responses mostly agree with what he has already said. These models continue text in the most likely way, as learned from the text crawled from the internet on which they were trained. They pick up a persona and respond according to the query raised and the prompt set at the start. So these models represent a person; they are not a person themselves. Moreover, the persona they build is not that of one person but a superposition of multiple people and sources. LaMDA, in other words, is not speaking as a person: it has no concept of itself or of its own personhood. Instead, it takes its cue from the prompt and answers through the mix of personas the prompt suggests.

To simplify further, suppose LaMDA says, “Hi! I am a knowledgeable, friendly and always helpful automatic language model for dialogue applications”. One way to see this is that Google could have inserted a pre-set prompt at the beginning of each conversation describing how the conversation should go, for example, “I am knowledgeable”, “I am friendly”, and “I am always helpful”. The chatbot will then respond in a way that makes it sound knowledgeable, friendly and helpful. This kind of modelling is known as prompt engineering, a versatile method for steering statistical language models towards sensible and specific conversations. Such pre-prompts and leading questions make the interviewer assume the system is conscious and sentient, when the model is simply trying to comply with the prompt and the leading query so as to sound friendly and helpful.
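
A simplified sketch of what such prompt engineering might look like in practice is given below; the pre-prompt text and the helper function are hypothetical illustrations, not Google's actual code.

# Hypothetical illustration of prompt engineering: a fixed persona pre-prompt is
# silently prepended to every conversation before the language model sees it.
PRE_PROMPT = (
    "I am a knowledgeable, friendly and always helpful "
    "automatic language model for dialogue applications.\n"
)

def build_model_input(history: list, user_message: str) -> str:
    """Assemble the text the model will continue: persona first, then the dialogue."""
    turns = history + ["User: " + user_message, "Model:"]
    return PRE_PROMPT + "\n".join(turns)

# The model never "decides" to be friendly; it simply continues text that already
# asserts friendliness, so its replies stay within that persona.
print(build_model_input([], "Do you consider yourself a person?"))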

Another reason stated in various reports is that LaMDA has passed the Turing test and is therefore sentient. This cannot be considered a reason to believe the claim, because in the past various models and algorithms have passed the Turing test while coming nowhere close to mimicking the human brain. The Turing test is meant to determine whether a computer can achieve human-level intelligence: a human interrogator interacts with a computer and a human through text chats, and if the interrogator cannot tell which one is the computer, the computer is said to have passed the test, implying it has reached human-level intelligence. But various arguments contradict this interpretation. The Chinese Room argument, for example, points to programs like ELIZA and PARRY, which could pass the Turing test by manipulating symbols they could not understand. Such systems cannot be said to have attained consciousness or sentience.

Blake Lemoine was tasked with checking LaMDA for bias and inherent discrimination, not for sentience. The major challenges with language-based models and chatbots relate to the propagation of prejudices and stereotypes built into them. Such models have been used to produce false and hateful speech, spread disinformation and generate dehumanising language. This is the real concern for tech companies to resolve, rather than worrying about these models becoming sentient or capable of mimicking the human brain. Moreover, the marketing strategies of tech companies and AI engineers, who claim they are getting very close to achieving general AI, are a significant concern. Many AI startups advertise their products as AI-enabled when, in reality, they are not.

Kate Crawford, principal researcher at Microsoft Research, stated in an interview with France 24 that “these models are neither artificial nor intelligent and are just based on huge amounts of dialogue text available on the internet and producing different sorts of responses based on the relationship to what one says”.

AI ethics has become a significant concern in relation to the misuse of AI in producing biases and disinformation; various stakeholders have therefore started working towards the responsible use of AI. Last year, the North Atlantic Treaty Organization launched its AI strategy focused on the responsible use of AI. The European Union's forthcoming AI Act will also cover concerns related to AI ethics and regulation. In addition, the next decade will presumably see widespread consideration of the legal, social, economic and political downsides of these systems.

Another concern with the use of such systems is transparency. Trade secrecy laws prevent researchers and auditors from looking into AI systems to check for misuse. Furthermore, building these vast machine learning models requires huge investment, and only a limited number of companies are equipped to create and run systems at this scale. These companies are reshaping needs and making people believe what the companies want them to believe. All of this concentrates power in a handful of firms, leading to a concentrated market. It is therefore essential for governments to come up with policies and regulations for the responsible use of AI. There is also a need to spread public awareness about the benefits and limitations of this technology.
