Sulking won’t help. AIm higher, says Charles Assisi
It’s baffling that educational institutes and workplaces are trying to ban AI language programs. They’re here to stay. That’s good news.
First up, a point of disclosure: This column has been co-authored with GPT-3, a language-generation artificial intelligence (AI) model. Until a few weeks ago, I was morally conflicted about using it like this; I'm not any longer. If anything, I'm happy to pay $20 a month to Sam Altman and his team at OpenAI as licence fees.
I'm reasonably sure that you, the reader, won't be able to tell that AI was deployed to help write this piece.
Meanwhile, I got to finish my column, which would otherwise have taken me pretty much all day, in about 60 minutes. Given the time this frees up for me to do much else, I am starting to see the money spent here as a wise investment.
I’m also trying to prove a point (once and for all): Worried academic institutions and workplaces have started to announce bans on this tool. They are concerned that people are starting to use GPT and other such applications to do part of their work. Such bans won’t help. It’s going to become harder and harder to tell if an AI tool has been involved in a piece of human output.
More vitally, such bans stand to slow people down and impair creativity, when what we should be doing is embracing a whole new world of possibility.
Bans are an antiquated way of viewing this kind of collaboration. The truth is that I can only co-author this with GPT-3 because I’ve spent the last few weeks tinkering with the tools powered by this software, attempting to write articles that might make the cut for a newspaper. It takes time to learn the right kinds of prompts to use. It takes clear and specific prompts to extract the output I need. Without those, the outcomes can be unpredictable, even nonsensical. It takes creativity, effort and persistence to collaborate with an artificially intelligent software program.
Get it right, though, and GPT-3 is amazing. This large language model (LLM) can write stories and poems, generate reports, and craft fictional conversations with historical figures in their own voices. It’s far from perfect, but it’s a lot of fun, and it is an incredible leap for humankind.
To attempt to slam the door on something that is so clearly going to shape the future of work is, to me, a baffling approach. Think of software programmers today. They often use pre-written blocks of code, called libraries, to speed up development. Instead of starting from scratch each time, they reach for blocks that have already been tested and debugged to perform a task or meet a goal. This allows them to focus on the unique parts of the project that really do need their time, effort and creativity. It also takes expertise and craft to decide which blocks of code to use, and how. Would we consider asking them to start from a blank screen each time?
How different are writers? We too reuse pre-existing resources (words) from libraries (vocabularies). GPT-3 can thus be seen as a platform that offers ready blocks of text that a writer can use to generate content.
These blocks must be reworked, reimagined and tweaked to meet the brief. That takes expertise and effort. So I don’t see the ethical conundrum. To me, it’s the equivalent of, say, a calculator. Do accountants, mathematicians and researchers tussle over whether to use those? Of course not. None of us thinks twice. It’s a tool, not a crutch. And there too, to go any further than the basics, one must start to think the problem through oneself.
GPT-3 and the other AI-driven language programs are just that: tools to augment one’s abilities. They represent the power to collaborate, taken to entirely new levels. We are limited only by our imaginations now. Our imaginations and our biases. Let’s at least eliminate the latter.
(The writer is co-founder at Founding Fuel & co-author of The Aadhaar Effect)