The curse of Artificial Intelligence
AI is not presently suited to making judgement calls. Until we create systems that excise bias while retaining the useful historical data about us and our lives, AI left unchecked is more of a risk than a benefit to society.
The artificial intelligence company OpenAI recently demonstrated a neural network that generated news articles so convincing, and so capable of emulating human journalists, that fear of its misuse to sway elections and terrorise stock markets led the organisation to restrict distribution of the underlying code. At the same time, stories about robots stealing our jobs and driverless cars killing humans stoke our fears that an artificial intelligence (AI) apocalypse is nigh.
Whilst these fears have a sound basis, the immediate risk to society from AI comes not from robots or amazingly intelligent machines, but from the inexorable creep of artificial intelligence into our lives. By codifying into it the very processes that govern society, business, health, and other key parts of life, we are normalising many of humanity’s worst traits.
Even the most enlightened of humans have deep-seated biases. Difficult to identify, they are even harder to correct. And today’s AI learns by encoding patterns in the data it feeds on. If you build an AI system designed to identify who will become a future felon, for example, the only data you can rely on are past data. With a disproportionate share of convicted felons belonging to ethnic minorities, the model that a naïve AI system builds will readily infer that membership in such a minority predicts future felony. Such a judgement cannot take into account the biases that have driven minorities’ higher incarceration rates. And we have no data to train AI systems on other than data that, though superficially objective, inherently express societal norms and biases.
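To see the mechanics, consider a deliberately tiny sketch in Python (the groups, figures, and field names below are invented purely for illustration): a naïve model that estimates re-conviction rates from historical records will score one group as higher risk simply because that group was policed and convicted more often, not because its members behaved differently.

from collections import defaultdict

# Hypothetical records: (group, was_convicted_again). Group "A" was policed
# more heavily, so its members accumulated more recorded convictions.
history = [
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

# The "training" step of a naive risk model: tally re-conviction rates per group.
counts = defaultdict(lambda: [0, 0])   # group -> [re-convictions, total]
for group, reconvicted in history:
    counts[group][0] += reconvicted
    counts[group][1] += 1

risk = {group: hits / total for group, (hits, total) in counts.items()}
print(risk)   # {'A': 0.75, 'B': 0.25}: group A is "learned" to be high risk

The arithmetic is flawless; the bias lives entirely in the labels the model was handed.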
In this way, the current generation of artificial intelligence is smart like a savant but has nothing close to the intelligence of a human.
AI shines in performing tasks that match patterns in order to obtain objective outcomes. Playing Go, driving a car down a street, and identifying a cancer lesion in a mammogram are excellent examples of tasks suited to this narrow, brittle AI. These systems will be incredibly helpful extensions of how humans work and will surpass humans in discrete parts of jobs, but they will never supplant a human. A red light is a red light, and an oncoming car is an oncoming car. A tumour is a tumour, regardless of whether it is in the body of an Asian or a Caucasian. Because their judgements rest on objectively measurable data, these systems can be readily corrected if and when interpretations of those data are overhauled. But although an AI may best a human radiologist at spotting a cancer, it will not, for many years to come, replicate the wisdom and perspective of the best human radiologists.
Where AI does present risks is in softer tasks that may have objective outcomes but incorporate something we would normally call judgement. Some such tasks exercise enormous influence over people’s lives: granting a mortgage, admitting a child to a university or a school, awarding a felon parole, or deciding whether children should be separated from their birth parents over suspicions of abuse. Each inherently entails assessing how other humans will behave and judging either what they deserve or how they will perform. Such judgements are highly susceptible to human biases, the full extent of which was made apparent in recent lawsuits against Facebook for routing job advertisements to specific genders and races.
Another judgement failure of AI lies in the analysis of content and its effects. YouTube (and, earlier, Facebook) created algorithms to boost user engagement by identifying the stickiest, most engaging content and promoting it. Unfortunately, these algorithms had no circuit breaker: any assessment of whether the toxic content they promoted was good for society or for its consumers came as a late afterthought.
Growing awareness of these risks, however, has not slowed the rapid absorption of algorithmic decision-making into the fabric of society. We are now living in the age of Amazon, Google, and Facebook, in which the tech giants deploy algorithms and AI to influence what we see, hear, and buy — and how we feel. And the devolution of judgement from people to machines is rapidly spreading into many other nooks and crannies — in the guise of magical optimisation.
Amazon assumed that by pointing a neural network and powerful AI systems at résumés, it could identify perfect employees. Naturally, the result was an AI system that discriminated heavily against women when identifying good candidates for technical roles: trained largely on résumés submitted by past applicants, most of them men, it learned to penalise signals associated with women. The AI was relying on the data society had given it, without factoring in the biases and problems underlying those data.
When such bias hits the news, as it did with Amazon, we can press pause and reconsider. But most instances of baked-in algorithmic and AI bias remain hidden: submerged and woven into the technological fabric that we wrap around business and society under the banner of efficiency and optimisation.
The reality is that, at the moment, AI is not suited to making judgement calls. Perhaps in the future we can create systems that do an excellent job of excising the bias and retaining the useful historical data about us and our lives. Until that time, AI left unchecked to rule our lives is more of a risk than a benefit to society.
Vivek Wadhwa is a distinguished fellow at Carnegie Mellon University’s Silicon Valley campus and the author of The Driver in the Driverless Car: How Our Technology Choices Will Create the Future. The article has been co-authored by Alex Salkever, a technology executive.
The views expressed are personal