AI needs guardrails, and tight regulations - Hindustan Times


By Vivek Wadhwa and ChatGPT
Dec 23, 2022 07:56 AM IST

Stakeholders must work together to ensure that the benefits of AI are shared broadly and that policies are put in place to support workers who may be negatively impacted by these technologies

The most significant breakthrough of 2022 wasn’t nuclear fusion, which is still decades away from being a practical reality, but the advent of artificial intelligence (AI) chatbots. Former United States (US) treasury secretary Lawrence Summers even declared that one of these chatbots, ChatGPT, is a development on par with the printing press, electricity, and even the wheel and fire. While there is a lot to be excited about, these new technologies also have no guardrails, and my family has already seen their dark side.

There is a deep flaw with all machine learning technologies: They are designed to mimic the way the human brain’s neural networks function, but they do this in a limited and imperfect way. (Getty Images/iStockphoto)

ChatGPT is a chatbot developed by OpenAI that can generate text that is fluent, coherent, and relevant to a given context. It can provide personalised responses to common customer enquiries, generate reports and summaries based on large datasets, and help scientists and researchers by providing summaries of complex research papers and articles, as well as generating ideas for further investigation. However, ChatGPT can also be used for generating fake news articles or social media posts to spread misinformation or influence public opinion, and creating deepfake videos or audio recordings by synthesising realistic human voices or faces.


The problem is that the answers ChatGPT provides are so realistic and seem so authoritative that they fool even the best technology experts and economists, such as Summers. As cognitive psychologist and AI researcher Gary Marcus noted in a blog post titled “AI’s Jurassic Park moment”, these systems can be fun to play with, but they are inherently unreliable, frequently making errors of both reasoning and fact, and prone to hallucination. As Marcus wrote, if you ask them to explain why crushed porcelain is good in breast milk, they may tell you that “porcelain can help balance the nutritional content of the milk, providing the infant with the nutrients they need to help grow and develop”.

The reliability and trustworthiness of ChatGPT and similar technologies have been a source of concern for many AI researchers, including Marcus. The issue was significant enough that Meta, whose AI division built the chatbot Galactica, decided to withdraw that product three days after its release in mid-November over concerns about its potential for political and scientific misinformation.

Even I didn’t take the warnings seriously until my son, Vineet, started using a version of OpenAI’s GPT technologies and asked it to tell him “interesting details about Vivek Wadhwa and his family”. The response seemed credible but contained significant inaccuracies, the most glaring being its claim that I am married to Ritu, an executive at Microsoft and a graduate of the University of California, Berkeley, and that together we have three children: Anjali, Anupamam and Arjun. It also detailed where the children worked and their educational backgrounds.

I lost my dear wife Tavinder to cancer three years ago and both of my sons, Vineet and Tarun, are still as devastated as I am. I have no idea how this AI gathered this hurtful misinformation or how to correct it. I’ve never met someone called Ritu Wadhwa and can’t even find a Microsoft employee with this name on LinkedIn.

This is a deep flaw in all machine learning technologies: They are designed to mimic the way the human brain’s neural networks function, but they do this in a limited and imperfect way. Deep learning systems have millions or even billions of parameters, identifiable to their developers only by their location within a complex neural network. They are often referred to as a “black box”, meaning that the processes and reasoning behind their outputs are not transparent or easily understood. Once a neural network is trained, not even its designer knows exactly how it is doing what it does. This makes it difficult to reverse engineer the system or understand how it learned what it did.

So when I re-ran the query that Vineet did, I got several different responses, including one that said that I am married to someone called Quatrina Hosain, who is an entrepreneur and technology executive and we have two children, a son and a daughter. She too is a mystery — and there is no way to determine where the AI got this misinformation from.

ChatGPT is still in development, and the founders of OpenAI have acknowledged its weaknesses, which will surely be addressed over the next few years as the technologies continue to advance exponentially. But these systems will create even greater societal problems than misinformation — by decimating jobs in data entry, customer service, and data analysis, as well as in manufacturing and transportation, including assembly-line work and driving.

Note that more than 70% of this article was written by ChatGPT based on some notes and queries I gave it, so not even journalism jobs are safe.

This is the amazing and scary future we are rapidly headed into.

To ensure that AI is developed and used in a responsible and beneficial manner that is aligned with human values and ethical principles, we need strong guardrails and tight regulations. To address the concerns about the potential negative impact of AI on jobs, governments, businesses, and other stakeholders must work together to ensure that the benefits of AI are shared broadly and that policies are put in place to support workers who may be negatively impacted by these technologies.

Vivek Wadhwa is an academic, entrepreneur, and author and tweets at @wadhwa. ChatGPT, a large language model trained by OpenAI, is a computer programme designed to answer questions, assist with tasks, and provide information on a wide range of topics. The views expressed are personal
