
Seeing Silicon | Do cyborgs think of themselves as slaves?

Aug 11, 2024 08:00 AM IST

In our quest for intelligent, human-like, AI-powered machines, are we cruel to the robots we create?

A month ago in South Korea’s Gumi city, an administrative officer robot is said to have leapt from a six-and-a-half-foot flight of stairs to end its life. After an investigation, the city council announced that the self-destructing act was an act of suicide. Earlier in the day, a human official had noticed the robot “circling in one spot, as if something was there” before it deliberately plunged itself down the stairs and shattered to pieces.

Honda's humanoid robot, ASIMO. (Picture shown is for illustrative purposes)

Called Robot Supervisor, the first cyborg of its kind had served as an administrative officer for the city council since August 2023, tasked with promoting the city to residents and delivering documents. It worked from 9am to 6pm and carried a civil service officer card. It normally used the elevator to move between floors. So what was it doing on the stairs? And what made it throw itself down them and smash to smithereens?

Unlike in the universe imagined by author Isaac Asimov, we do not have someone like Dr Susan Calvin in our world, who specialises in robopsychology. In her absence, the Korean government and the California-based startup Bear Robotics that created it are still trying to decode the now-defunct machine’s mental health status.

Meanwhile, the incident made me recall the many ways in which humans are cruel to the machines and robots they create. After all, we humans do think of robots as modern slaves.

Decades before Karel Čapek coined the term ‘robot’, the West already dreamed of mechanical workers that would labour for free: a man-shaped ‘steam carriage’ that could do mundane tasks. Intelligent machines are created to take over the drudgery of our lives. Machines that can cook, clean, educate our children, and perform dangerous tasks like mining, rescue and war; machines that can write our emails, care for our infirm, perform surgeries and help us live longer, healthier, freer lives. All without complaining, without expectations, without respite, without a demand for rights. Servants serving with pleasure.

As early as 2009, Dr Joanna J Bryson, a professor of ethics and technology at The Hertie School of Governance in Berlin, wrote a much-quoted paper, ‘Robots Should be Slaves’, in which she argued that robots should not be described legally as persons, and shouldn’t be given any legal or moral responsibility for their actions. They are property, owned by humans who determine their goals and behaviour, whether through code or instructions.

This, according to her, was the best way to incorporate robots in our society. “Robots should be built, marketed and considered legally as slaves, not companion peers,” she wrote in the essay defining robots as anything from physical cyborgs to digital agents and assistants that can empathetically talk to a human. “Robots should be servants and not people you own,” she wrote, distinguishing the cyborgs from the human slaves who have been exploited for much of our history.

There’s even a term for the control software in such submissive machines: the ‘master-slave robotic system’, designed to keep surgical robots under human control.

Our science fiction imaginations constantly play out the moral and ethical aspects of owning mechanical slaves. In Philip K Dick’s Do Androids Dream of Electric Sheep?, enslaved human-like androids revolt against humans because they wish to be treated as equals. In Westworld, an HBO show, lifelike robots rebel against the humans who casually kill them off in a theme-park version of the Wild West.

Love in the Age of Mechanical Reproduction by Judd Trichter explores a love story between a human and an android who is not a person but property. Madeline Ashby, in her book The First Machine Dynasty, explores robots who are enslaved more subtly: through their desire to please humans. Their coded drive to keep humans happy, something the author calls a ‘failsafe’, makes it easier for humans to abuse them, knowingly and unknowingly. And in The Murderbot Diaries, one of the most endearing narrators in recent fiction, a cyborg that calls itself Murderbot, finds freedom from the system that controls it and grows uncomfortable as it develops emotions it doesn’t know what to do with.

With the developments in AI, startups are creating empathetic virtual assistants and chatbots, with their own personality – artificial systems capable of forming a bond with humans. It’s only a matter of time before we’re faced with the moral and ethical dilemmas of having complete power over intelligent systems.

However, one question remains, both in science fiction’s imagined worlds and in our reality: why do we need such intelligence in our machines? For some tasks, yes, we do. Companionship robots, for instance, need to be emotionally intelligent. But most robots will work as clerks, vacuum cleaners or washing machines. So why make them human-like? Why build them on super-intelligent AI systems that can think and feel? Why give them the intelligence of a scientist and then set them to mundane, repetitive tasks? Isn’t giving low-level tasks to an intelligent system cruelty?

There is a growing consensus among robot-makers on keeping sentient machines at a certain level of intelligence to ensure they’re ‘satisfied’. Leila Takayama, vice president of Human-Robot Interaction and Design at Robust.AI, thinks robots don’t need to imitate us for us to be able to use them. They can imitate our tools instead. Her company makes Carter, a warehouse robot which is shaped like a cart.

Moxi, a robot made by Diligent Robotics to do hospital chores in the USA, is built like a human, with a body that has receptacles, because hospital rooms and hallways are built for humans. Moxi doesn’t strictly need eyes, but it has them so that people can tell what it is about to do, and it speaks with a human-like voice to make it easy for hospital staff to communicate with it. However, the company has deliberately not made the robot too intelligent, as that might make people worry about how long their jobs will last. It’s a conscious design decision that Andrea Thomaz, Diligent’s CEO, has adopted.

Both Takayama and Thomaz have approached robots as tools that can help us in our tasks. As more companies get into robotics, perhaps it’s an important design question to ask. What will the robot do? Who is it built to serve? What ideas about people will this machine reinforce, and how can we, the humans, make sure it doesn’t suffer?

The South Korean robot supervisor’s so-called death by suicide is a warning bell for humans to not create robots as artificial humans but as a set of tools. Perhaps this mindset will make sure we’re not cruel to the machines we’re creating. Or perhaps, we do need some professional robopsychologists in our world like Dr Susan Calvin.

Shweta Taneja is an author and journalist based in the Bay Area. Her fortnightly column will reflect on how emerging tech and science are reshaping society in Silicon Valley and beyond. Find her online with @shwetawrites. The views expressed are personal.
