
The year ahead in tech: Keeping an eye on AI

By Binayak Dasgupta
Jan 01, 2024 06:46 AM IST

It won’t be a robot war. The true dangers are less cinematic, but no less severe. It will be vital to legislate and monitor, without stifling innovation.

In tech-industry circles, the ebb and flow of the attention paid to artificial intelligence (AI) has been likened to the seasons.

How fast are AI-led image generators learning? Above (clockwise from top left) are results for the same cue – ‘Pope Francis in white Balenciaga puffer jacket’ – as rendered by Midjourney Version 1, V3, V6 (Alpha) and V5.1. (HT Imaging: Malay Karmakar)

Every now and then, a milestone product heralds an AI spring, setting in motion a flurry of reportage, analyses, conference sessions, start-ups, and funding. This first happened in 2018, when DeepMind, a company acquired by Google, built AI models that outperformed humans at board games.

Then came a winter. The hype abated, as did the launches, and funding dried up. Until 2021, when AI image generators (such as Dall-e) were launched, giving people the ability to co-create digital art using text prompts. Then came the winter of 2022, which turned overnight into a glorious summer for AI, with the launch of ChatGPT.

Never before had people been able to converse with artificial intelligence. Now they could have it answer questions, write poetry, generate stories or summarise tomes of text. Image generators bloomed at this time, too, helping create realistic, synthetic images of the Pope as a DJ and of former US President Donald Trump fleeing police.

If 2022 was an AI summer, 2023 was hurricane season. AI’s potential to transform labour, creativity, entrepreneurship, social interactions and perhaps even political realities became clear, and lawmakers sat up. Can we find ways to tell real from make-believe? What happens when the answers generated in response to a question are not factual, but more akin to the “hallucinations” of a large language model?

In November, world leaders and representatives from 28 countries and the European Union, including the US, UK, India and China, met to draw up core principles to follow (and have industry adhere to), in order to make AI safer. In fact, the first alarm on AI risks was sounded by the industry itself, when in late March a group of tech executives and experts called for a six-month pause on the training of AI systems more powerful than GPT-4. “Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us?” they asked. A second open letter, months later, signed by chief executives of some of the world’s largest AI companies (including OpenAI, Alphabet and Anthropic), called on the world to recognise that “the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.

And thus, 2023 was marked by the recognition that evolutions in AI must be matched by evolving safety checks. But much needs to be unpacked to understand what the current generation of AI is and isn’t, and what one must safeguard against.

OpenAI’s GPT (or Generative Pre-trained Transformer), for instance, is a foundational large language model that has learnt from the language we use and the literature (formal and informal) we have produced. It uses this information to make connections mirroring reality. Because of their potential for sweeping change in the world at large, such tools are also called “frontier” models.

Other, specific-use AI models power for-purpose tools of a very different, build-and-execute kind. These include OpenAI’s Codex, which can generate software code; DeepMind’s GraphCast, which aims to predict how the weather will change; and its AlphaFold, which can predict the shapes into which proteins fold. These too, of course, hold great potential for change.

Understanding what they can and cannot do will be key to assessing the risks that such programs pose. Take, for instance, how they learn. Machine learning, simply put, deploys math. An AI model is fed data that has been broken into chunks (called tokens); it “learns” to make statistical connections between these tokens. And when prompted, it can use this ability to make connections to answer a question or fulfil a request.
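A toy sketch in Python (illustrative only; real large language models use neural networks with billions of parameters, not simple counts) shows the shape of the idea: “learning” here is extracting statistical patterns from token sequences, and answering a prompt is replaying those patterns.

```python
from collections import Counter, defaultdict

def train(tokens):
    # "Learning": count which token tends to follow which. The model's
    # entire "knowledge" is this table of co-occurrence counts.
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def predict(model, prompt_token):
    # "Answering": emit the most likely next token after the prompt.
    return model[prompt_token].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat ran".split()
model = train(corpus)
print(predict(model, "the"))  # -> 'cat' (seen most often after 'the')
```

Note that this “model” stores nothing about what a cat is; it has only seen that “cat” tends to follow “the”. Scaled up enormously, that is still the flavour of what a large language model does.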

ChatGPT has no profound understanding of its own. Understanding this is crucial if we are to avoid the pitfall of anthropomorphising AI, which can in turn lead to ignoring real, structural problems (such as bad training data) and confusing or conflating real problems with existential risks (from a supposedly “conscious”, “sentient” machine).

Which brings us to five areas in the world of AI that do need attention in 2024.

First, how foundational models are self-regulated. The industry’s largest AI companies, including OpenAI and Alphabet, have collaborated under the Frontier Model Forum to work out ways to self-regulate the industry. They are yet to draw up guidelines for how such large models can be ring-fenced and their development checked. This year should offer some answers to what is perhaps the most crucial question, however. At its simplest: When does a frontier model become “dangerous”?

Second, how they are taught. Today’s AI-led tools and products will need to be benchmarked and evaluated for real-world harms. Do they perpetuate bias? Can they be taught to prevent misuse? ChatGPT, for instance, can still be enticed to point users to illegal resources on the dark web. A recent Washington Post report showed that Dall-e and Stable Diffusion overwhelmingly generate images of young white people when asked to depict attractive individuals. Such issues are persistent and insidious, and can soon become deep-rooted.

Third, how governments regulate AI. The models that today’s leading AI companies have built involved massive computing resources that cost tens of millions of dollars. But it is now possible to build one’s own mini-versions, under the radar, by repurposing open-source models and using data available in the public domain. Computing resources remain expensive and hard to deploy without drawing attention (one would need a neighbourhood’s worth of power, for instance). But these restrictions are likely to ease. Which is why disclosure laws and scrutiny must be stepped up. The US government has recognised the need for such regulation, and it is likely that other countries will begin working on their versions of such legislation this year too.

Fourth, a whole-of-society approach to adapting to AI. On a yearly scale, such adaptation will likely be slow. New courses may be added to school and college curricula, as employers begin to look for proficiency among new hires. But the ability to tell AI-generated information from human creations will be especially crucial, and society — including lawmakers, the media and industry — will need to think hard about how this challenge to our perceptions of reality can be dealt with. So far, measures have gone no further than talk of watermarks and counter-apps to detect AI-generated content. Reliability and low margins of error will be hard to achieve, and vital for that very reason.
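To make the watermark idea concrete for text: one published approach (a statistical scheme proposed by researchers in 2023) has the generator softly prefer a pseudo-random “green” subset of the vocabulary at each step; a detector that knows the scheme can then test whether a passage contains suspiciously many green tokens. Below is a minimal sketch of the detector side in Python; the hashing and the 50% split are simplifications for illustration, not the published algorithm.

```python
import hashlib
import random

def green_list(prev_token, vocab, fraction=0.5):
    # Derive a pseudo-random "green" subset of the vocabulary from the
    # previous token; generator and detector can both recompute it.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(sorted(vocab), int(len(vocab) * fraction)))

def green_fraction(tokens, vocab):
    # Share of tokens that land in the green list keyed by their
    # predecessor. Unwatermarked text should hover near `fraction`;
    # watermarked text should score noticeably higher.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(1 for prev, tok in pairs if tok in green_list(prev, vocab))
    return hits / max(len(pairs), 1)
```

The hard part, as the article notes, is the margin of error: paraphrasing, translation and editing all erode the statistical signal, which is why reliability has proven elusive.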

Fifth, harnessing AI to boost economies. The past year has yielded research showing that AI can both improve business efficiency and worsen it. The task ahead, for individuals, businesses and policymakers, will be to chart a sustainable way forward. This may require legislation concerning labour and protections for vulnerable groups.

Balance will be key. Because disruption is coming. How hard it hits will depend on how well-prepared we are; which will depend on how we define “well-prepared”.

.

ALSO IN THE YEAR AHEAD

Revolving doors


As debates begin to rage around the potential applications of AI, and the possible human costs of these, we are likely to see more turbulence within the companies building these artificially intelligent models. OpenAI, perhaps in a way it would not have chosen, served as an early example of what such a schism could look like. On November 17, the board of the AI giant voted to sack its co-founder and CEO, Sam Altman. A frenzied four days later, Altman was back at his old job, and the board that had booted him out had been dissolved.

To industry watchers, the turmoil exemplified the largely invisible but potent struggle between mission and money in the AI space; a struggle that has massive implications.

It helps to remember that OpenAI was founded, in 2015, as a non-profit organisation. In 2019, it turned part of its operations into a capped-profit entity. (Profits are currently limited to 100 times any investment it receives.) Some OpenAI board members — those who planned or backed the coup — had begun to have misgivings about the AI arms race that the company had triggered; an arms race that could yield tools that the world may not be adequately prepared for.

(It helps to remember, here, that mature democracies weren’t even adequately prepared for the fake news disseminated via Facebook and Twitter.) Altman, for his part, has been arguing that the world shouldn’t sacrifice audacity and the long-term perspective at the altar of what he calls short-term concerns.

Eventually, it was investors such as Microsoft and Tiger Global, and OpenAI’s employees, who were instrumental in having Altman reinstated. Raising an uncomfortable question: How are we to adequately prepare for the ways in which this technology might change our world, when we can’t even predict what or who will be driving the companies that shape it?

.

New and improved


An obvious answer to the question of how to prepare for AI is legislation. But can AI be regulated without stifling innovation? The EU, torchbearer of strict regulation for digital worlds, reached a broad political agreement on a new law for AI technologies in 2023. The final text will be unveiled this year, and could illustrate, if not a fine balance, at least the specific hurdles that stand in the way of one.

Meanwhile, in a bit of good news, the technology itself is evolving in ways that promise to solve real-world problems that have been difficult to crack.

In a demonstration by Google, its new AI model Gemini showed what multi-modal AI could look like. Prompts to this program can include text, photos, video and audio. Gemini, for instance, can generate captions for images; or craft a poem using cues from both textual and visual prompts.
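As a sketch of what issuing such a multi-modal prompt can look like in practice, here is an example assuming Google’s google-generativeai Python SDK; the model name and call signatures shown here may differ from the current API.

```python
import google.generativeai as genai
import PIL.Image

# Assumes an API key from Google AI Studio; "gemini-pro-vision" was the
# multi-modal model name at launch and may since have changed.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-pro-vision")

image = PIL.Image.open("photo.jpg")
# A single prompt freely mixes text and image parts.
response = model.generate_content(
    ["Write a four-line poem inspired by this photograph.", image]
)
print(response.text)
```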

Meanwhile, GraphCast, a Google DeepMind AI model, is predicting global weather with greater speed and accuracy than conventional forecasting systems, a study published in Science found.

And, in a study published in Nature, researchers from MIT contended that AI helped them discover a new class of antibiotic drugs.

.

Augmented reality at work


Other dreams that look set to come true in 2024 include that of an immersive internet. This would be a world far different from today’s flat web pages and scrolling feeds. Despite its price tag of $3,499 (about ₹2.9 lakh), the Apple Vision Pro is currently helping draw attention to the still-evolving augmented reality (AR) space. In good news for early adopters, the immersive internet won’t be just games and entertainment. Computing and workplace use-cases will increasingly find traction in augmented-reality worlds. Sightful’s screenless laptop, which consists of an AR headset and gesture-enabled computing, is one example. A 100-inch spreadsheet that only you can see? Work on!

.

Protection without passwords

A stolen smartphone is likely to be even less useful to its thief in 2024. Apple’s iOS 17.3 update will add Stolen Device Protection to its bouquet. The feature promises to use location data to enable a second layer of protection: when the phone is away from familiar locations, changes to the iPhone’s passcode, iCloud account and passkey data, as well as a reset to factory settings, cannot go through unless the request is authenticated via Face ID.

Overall, we’re moving steadily towards a world in which passwords will be replaced by passkeys, which combine a cryptographic key stored on the device with some form of biometric authentication (typically fingerprint or facial recognition). Privacy-focused app developers such as Proton are strengthening security measures for apps such as Proton Pass, which is used as a password, passkey and two-factor authentication code manager. The plan is to offer the user more, within the same subscription. Within the year, expect offerings by such developers to move far beyond very-secure email and VPN apps.

.

Next steps in computing


This will be an important year for smart computing’s big three, Apple, Microsoft and Google, albeit for very different reasons. Apple must simplify its iPad line-up, which has become too complicated over time (with multiple variants, screen sizes and price overlaps). Bringing it to the kind of clarity the easy-to-decode MacBook family offers would ease the choice for customers who remain stuck on the question of “Laptop or tablet?”

While it must rely on Intel, AMD and Qualcomm to deliver performance upgrades with their next chips, Microsoft’s challenge is to make Windows more versatile across a wider range of devices. It still isn’t optimised for touchscreen-only tablets, or portable game consoles such as the Asus ROG Ally. Perhaps Windows 12, due for release this year, will address some of these limitations. There is also much intrigue about the direction Microsoft will take with its Surface computing devices, which already come in traditional laptop and convertible forms, but aim to build further on concepts such as PixelSense, which one can think of as an interactive, computing coffee-table.

Google, meanwhile, has promised a range of updates for its education and workplace-focused Chrome OS laptops. Artificial intelligence figures extensively, with an image-generator tool, a virtual assistant to draft and refine short-form content such as posts and reviews, and AI-generated backgrounds for video calls.

- By Binayak Dasgupta and Vishal Mathur
