
Demystifying the buzz around Generative AI

By Hindustan Times
Feb 13, 2024 12:36 PM IST

This article is authored by Pankaj Jha, consultant, investment banking and school student mentor.

There has been a great deal of discussion about generative Artificial Intelligence (AI) and its implications. Its flexibility and usefulness in performing a multitude of tasks once done by humans are undeniable. However, there is also much misinformation, and danger, surrounding it.

AI (REUTERS)

What is Generative AI? The term is made up of two parts, AI and generative. AI means that a computer programme does a job that a human would otherwise do. Generative is the fun bit: the programme creates new content it has not necessarily seen before, synthesising it to give us something new.


Generative involves creating new content (audio, code, images, text, video).

AI involves automatically using a computer programme.

Let us look at text, an area called Natural Language Processing, to see how the technology works and hopefully demystify some of the myths and problems. Generative AI is not a new concept. Google Translate launched in 2006, so it has been around for 17 years. Another example is Siri on the phone, which launched in 2011 and was a sensation even back then.

In 2023, OpenAI, a company in San Francisco, announced GPT-4. It was claimed to score top marks in many exams, such as the SAT and law and medical exams. Besides exams, it could do a host of things: you can ask it to write text for you or to carry out a task for you.

This is quite sophisticated and rightfully created a sensation, as it could do a host of things, unlike Siri and Google Translate, which performed only limited tasks.

ChatGPT and its variants are based on this principle: given some context, predict what comes next. The task of a language model (LM) is to take the context and, using a neural network, predict the most likely continuation. These systems are all built on making educated guesses about what comes next. And that is why they sometimes fail: they predict the most likely answer, whereas you may want a less likely one.

So, you have this machinery that will do the learning for you and the task now is to predict the next word.
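The idea of next-word prediction can be illustrated with a toy sketch. The snippet below is not how GPT works internally (GPT uses a neural network over tokens); it is a deliberately simple bigram model that counts which word follows which in a tiny made-up corpus and then predicts the most frequent continuation. The corpus and function names are illustrative assumptions, not from the article.

```python
from collections import Counter, defaultdict

# Toy corpus, purely for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each preceding word (bigram counts).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# "the" is followed by "cat" twice, "mat" and "fish" once each,
# so the most likely continuation of "the" is "cat".
print(predict_next("the"))
```

Real LMs replace the counting table with a neural network that assigns a probability to every possible next word, but the failure mode mentioned above is already visible here: the model always returns the most likely continuation, even when a rarer one is what you actually wanted.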

The question is how good an LM can become, and how it gets there. When GPT first came out, as GPT-1 and GPT-2, the models were not amazing. So, the bigger, the better: size is all that matters, I am afraid. There was a time when people did not believe in scale, and now we see that scale is very important. In fact, since 2018, we have seen a very significant increase in model sizes. But are large LMs always right or fair?

The simple answer is no. It is virtually impossible to regulate the content LLMs are exposed to during training. Because LLMs are trained on the web, they will always encode historical biases and may reproduce harmful content. Generative AI has rightfully created a buzz through its rapid growth in recent years. The point to emphasise is that these models give the most likely answer, which in some cases may not be the correct one. Concerns regarding the generation of deepfakes, the dissemination of misinformation, and potential job displacement are omnipresent. Balancing innovation with ethical considerations, implementing safeguards against misuse, and addressing societal implications are critical for maximising the advantages of generative AI while mitigating potential harm.

