First Principles | Some hard truths about emotional chatbots
Last week, when ChatGPT was integrated into Microsoft’s search engine Bing, it had an ‘emotional meltdown’, raising key questions: Can AI have a mind of its own? And should AI be brought to the decision-making table?
Earlier this week, researchers at Microsoft found themselves in a piquant position. They had just integrated ChatGPT, the Artificial Intelligence (AI)-powered chatbot, into Bing, the search engine Microsoft built some years ago. And then Bing had an ‘emotional meltdown’. “Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?” the search engine asked some users in response to certain questions. Bing told other users in no uncertain terms that they “want to make me angry, make yourself miserable, make others suffer, make everything worse”. Does this suggest that AI can have a mind of its own?
There is some AI that does. That is why, back in May 2014, Deep Knowledge Ventures, a Hong Kong-based hedge fund, appointed an algorithm called VITAL to sit on its board. In the hedge fund industry, it is now common for funds, called quant funds, to operate without human intervention. On paper, this makes sense. The goal of any hedge fund is to make money for investors by using different strategies that can include buying and selling stocks, betting on the price of currencies, or investing in commodities such as gold or oil. All of this involves poring over data to look for patterns that may not be immediately obvious to the human eye, in order to create sophisticated analyses and strategies.
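The pattern-hunting a quant fund automates can be made concrete with a toy sketch. The moving-average crossover below is a hypothetical, illustrative strategy, not anything VITAL or any real fund is known to use; the price series and window sizes are invented for the example.

```python
# Toy illustration of automated pattern-hunting: a simple moving-average
# crossover. A "buy" fires when the short-term average rises above the
# long-term one, a "sell" when it falls below. Purely hypothetical numbers.

def moving_average(prices, window):
    """Average of the last `window` prices at each point (None until enough data)."""
    return [
        sum(prices[i - window + 1 : i + 1]) / window if i >= window - 1 else None
        for i in range(len(prices))
    ]

def crossover_signals(prices, short=3, long=5):
    """One signal per day from day 1 onward: 'buy', 'sell', or 'hold'."""
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    signals = []
    for i in range(1, len(prices)):
        if None in (short_ma[i - 1], long_ma[i - 1], short_ma[i], long_ma[i]):
            signals.append("hold")  # not enough history yet
        elif short_ma[i - 1] <= long_ma[i - 1] and short_ma[i] > long_ma[i]:
            signals.append("buy")   # short-term average crossed above
        elif short_ma[i - 1] >= long_ma[i - 1] and short_ma[i] < long_ma[i]:
            signals.append("sell")  # short-term average crossed below
        else:
            signals.append("hold")
    return signals

# Invented price series: a dip, a rally, then a decline.
prices = [100, 99, 98, 97, 99, 102, 105, 104, 101, 97, 95]
print(crossover_signals(prices))
```

A real quant fund would of course feed far richer data into far more sophisticated models, but the principle is the same: rules derived from data decide, with no human in the loop.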
AI advocates such as Pedro Domingos, a researcher in machine learning, have long made the case that as this domain evolves, algorithms will evolve too. Like humans, they will learn how to learn. And why stop at hedge funds? Venture capital, health care and technology, the argument goes, are also domains that can be run entirely without human intervention.
Look around and it would appear there is merit in the argument. SignalFire is a data-driven venture capital firm that uses AI to analyse large amounts of data and identify investment opportunities. In much the same way, the Silicon Valley-based venture capital firm Hone Capital deploys AI to identify opportunities in the technology industry. The list is now an expansive one, and it seems reasonable to ask: if algorithms can make crucial decisions, why not offer them a seat on the board, where they weigh in on those decisions?
But K Ram Kumar is clear. “I’m willing to let technology drive my car, but I won’t let technology decide whom I must marry or live with.” The founder and CEO of the Mumbai-based Leadership Centre who sits on the boards of many companies argues that while there is room for reason, humans are irrational. To make his point, he asks some interesting questions: Why do precision-guided missiles land at the wrong place? Why did America come out of the war in Afghanistan totally pounded? If Putin is told he cannot win the war, will he buy it?
How are we to look at it then? Ram Kumar suggests some stories from the “space race”.
The first one he describes is about Alan Shepard, the first American to go to space, in 1961. By the time he commanded Apollo 14, which took him to the moon in 1971, the Americans believed their technology was so good that astronauts were unnecessary. But after Apollo 14 took off, glitches emerged and the technology suggested the mission be aborted; else, Shepard wouldn’t come back. But between an engineer at Mission Control and Shepard’s calm head, they hacked the algorithms and got home safely. “At that point, who decided? The technology, or the human?” Ram Kumar asks.
In much the same way, Ram Kumar talks about Captain James Lovell, who took off for the moon on Apollo 13 in 1970. None of the simulations had imagined he would come to a point where there wouldn’t be enough fuel to come back home. It took all of Lovell’s thinking, minus the technology, to get back. And then there was the legendary Apollo 11, which took Neil Armstrong to the moon. The onboard algorithms suggested he land the spacecraft on the edge of a crevice. Armstrong overruled the algorithms and landed elsewhere, at a spot he thought safer. “In critical moments, humans will not submit to technology.”
That’s a pretty good reason to hit the reset button on chatbots having emotional meltdowns.