Accuracy, ethics must be at centre of AI race
In a world where unregulated tech is causing havoc and eroding democratic ideals, questions of ethics, accuracy and transparency will have to be front and centre in the months and years to come
First, there was ChatGPT. Now there is Bard. Alphabet’s surprise decision to release its own conversational chatbot has set up a new war in the Artificial Intelligence (AI) domain between two of the world’s biggest tech companies, Alphabet and Microsoft. The face-off builds on the popularity of ChatGPT, which can convincingly mimic human writing on a wide range of subjects and reliably pass a slew of tests. Built by San Francisco-based OpenAI, ChatGPT captured the imagination of millions with its sensational ability to draft essays (including co-authoring an opinion piece in this newspaper), poems and fiction, solve complex problems, or write code on demand. Experts posited that this was a new turn in Web3 technologies, and it sparked fears that professions, especially ones that are not highly skilled, may become obsolete. To be sure, while ChatGPT has north of 100 million users and Microsoft has announced plans to integrate its features with its Teams platform, little is known about Bard, though Alphabet chief executive officer Sundar Pichai said the app would first go out to testers, with plans to make it more widely available in the coming weeks.
All this is exciting in one sense. Generative AI has the potential to change the internet, but care must also be taken that such tools don’t become a substitute for knowledge, and that they complement skills rather than supplant them. Most importantly, the factual accuracy of responses must be ensured at a time when misinformation, bias, and hate speech have become serious concerns. In a world where unregulated tech is causing havoc and eroding democratic ideals, questions of ethics, accuracy and transparency will have to be front and centre in the months and years to come.