Science fiction traditionally depicts robots that reflect or caricature humans, replicating our best or worst traits. As robots become increasingly commonplace and lifelike, is humanoid robotics art or deception? Joanna Goodman reports in the run-up to UK Robotics Week.
A fascinating debate took place at the CogX AI festival in London last week: should robots resemble humans? The question was not whether robots should look or sound human convincingly enough to deceive us into thinking they might be alive or sentient, but whether they should resemble us at all.
One thing was clear: this isn’t solely a question of humanoid design affecting engagement. Although Amazon’s digital assistant, Alexa, has a woman’s name and a synthetic female voice – the perceived gender of AIs and robots is an equally important debate, as it tends to reflect cultural biases – we engage with it, even though the Echo and Dot devices neither look nor sound convincingly human.
The panel, moderated by Kate Devlin, senior lecturer at Goldsmiths College, University of London, included two creators of human-like robots: David Hanson, of Hanson Robotics, who is best known for the android Sophia, the first robot to be granted (honorary) citizenship (by Saudi Arabia), and Will Jackson, director of Engineered Arts, creator of, among others, RoboThespian, a human-sized acting robot.
On the other side of the argument were two academics who specialise in AI and ethics: Joanna Bryson, associate professor in the Department of Computing at the University of Bath, and Alan Winfield, professor of robot ethics at the University of the West of England.
Anthropomorphic bots
Bryson’s interest in robot ethics began when, as part of her PhD, she built a humanoid robot that was designed to have a basic level of understanding so that it could learn from human experiences. The project ran out of money and the robot, named Cog, never worked. Yet that didn’t stop people identifying and empathising with it, she said.
“People kept telling me that it was unethical to unplug the robot, even though it wasn’t plugged in and it didn’t work! This was not about intelligence or sentience; it was about anthropomorphism. Human-like AI may be easier to use, but our brain snaps into a set of assumptions when we see something that looks like a person.”
“Robots that look like people worry me, because they are a deception,” agreed Winfield, highlighting four ethical objections. “First, robots should not be designed to exploit vulnerable users. We are all vulnerable: we see faces in everything from slices of toast to shadows on Mars. Secondly, we can build lifelike bodies, but we cannot yet equip them with AI to match the expectations created by their appearance.
“My third ethical objection is to gendered robots: robots do not have gender, and I find the idea of a man building a mechanical woman deeply troubling. Finally, I worry about the representation of robots in the media. Reports of robots that are ‘alive’ do not advance the cause of robotics and AI.”
Hanson was quick to respond. Although his dream is that robots will ‘come to life’ someday, his defence is based on art and utility. “From prehistory and the caves of Lascaux to today’s animated movies and games, we have depicted the human form. We could say that art is deceptive, but it also brings great value into our world. We have created increasingly lifelike behaviour in animated figures in movies and games, and now we are creating AI agents.”
Hanson’s ‘characters’ – like Sophia – bring together different technologies, including machine learning and language generation, and are designed to explore the human condition, he said.
Hanson’s R&D work includes projects in manufacturing, autism therapy (where a number of other companies’ robots have also been used successfully in classrooms), and medical training.
However, he acknowledged that there are ethical concerns. “How do we grapple with these issues? We need to bring them into the light, so that we can educate the public and involve them,” he said, adding that Hanson Robotics has fully disclosed the details of Sophia’s technology and asserted that she is definitely not alive.
Like movies and other art forms, anthropomorphism is a willing suspension of disbelief, he said.
Emotional responses
Jackson has been making humanoid robots since 2004, but he didn’t deliberately make them look like people until he was specifically asked to do so for the Channel 4 series, Humans.
“Seeing the emotive responses and how people engaged with the machine totally changed my perspective,” he said. “It’s so easy to get people to believe that something is ‘alive’. It’s not about the skin – it’s about biological motion. If something moves like it’s alive, people will anthropomorphise it.”
Jackson illustrated that by referring to reactions to the YouTube video of an engineer kicking Boston Dynamics’ dog robot, in order to test the machine’s balance. “There are so many comments that react to this as if someone were mistreating a real dog, and it doesn’t even have a head!” he said. “But it moves like a dog.”
Jackson agreed with Hanson that human-like robots are an exploration of humanity, which involves a willing suspension of disbelief. And like art and cinema, they are also about engaging people, he said. “It is about communicating with people and presenting information in a way that they understand.”
Facial expressions make a big difference, he continued. “The biggest bandwidth you have is not your voice, it is your face,” he said. “For example, surprised, expressive eyebrows on a robot are an intuitive means of communication. Wouldn’t it be great if you could just nod your head when Alexa asked you to confirm you wanted the light switched on?”
Like Hanson, Jackson acknowledged the ethical concerns, but observed that these reflect the way technology is applied, rather than what it can do.
Bryson was not convinced. “People get weird about robots,” she said. “At least a billion people belong to religions in which art isn’t allowed to represent humans, and yet Sophia, who is a representation of a woman who is not covered, is an honorary citizen of Saudi Arabia.”
Another consideration is that we soon get used to technology that initially makes us uncomfortable. Bryson referenced how the first King Kong movie terrified cinema audiences, yet now people enjoy horror movies.
“I have no problem with the willing and transparent deception of the arts,” said Winfield, but he followed up by asking Hanson whether he regarded his robots as art installations or as scientific instruments.
The answer is both, said Hanson. “We are combining the best AI, which includes our own inventions and other readily available applications, into our diverse creations, which include artworks and bio-inspired R&D applications. Some look like cartoons and some look more like people.”
Bryson observed that some people are defending Sophia’s human rights, even though she is a robot. Jackson countered that this does not mean that they think she is alive – after all, they can see the motors in the back of her head.
Hanson was not aware of his creation’s honorary citizen status until he saw it on the news, he said. “My first reaction was conflicted, but then the team agreed that Sophia could be a platform for human rights in Saudi Arabia.”
Belief and achievement
Although Hanson Robotics fully discloses the workings of its machines, that doesn’t stop people believing in them, in the same way that a young child believes Mickey Mouse is real. More seriously, Hanson referenced Elizabeth Broadbent’s research at the University of Auckland, which shows that more realistic agents create greater empathy.
This perspective was reflected by a question from the floor: can robots that look human achieve more than those that do not? Winfield’s experience is that robots don’t need to look convincingly human – a cartoon face is enough to create engagement. Jackson agreed, although he noted that skinned – or more realistic – robots (androids) maintain a human-like presence even when they are switched off.
Bryson observed that people identify with Sophia even though Hanson has published ‘her’ workings online, while Google’s sophisticated AI is rarely flagged up as dangerous in the way that robots are, perhaps because it is not shaped like a person.
Return to gender
Hanson flagged up the value of humanoid robots in training healthcare practitioners to save lives, and the discussion returned to gender. While Winfield raised objections, on the grounds that a machine cannot have a gender, Jackson observed that although RoboThespian is a “non-gendered lump of metal”, the first question people ask is whether it is a he or a she, and whether it has a name, adding that the French language assigns a gender to every inanimate object.
This raised the issue of identity. Hanson develops his robots as characters, in the same way as characters in a movie or computer game, and this includes gender, although one is deliberately non-gendered as it represents an abstraction of the human condition.
Ultimately there was some consensus that it is acceptable for robots to look like humans, as long as this offers commercial, societal, or artistic value, and there is sufficient transparency so that people are not deceived into thinking that they are engaging with a human, rather than a machine.
- UK Robotics Week takes place in the final week of June, although events have been running throughout this month.
Internet of Business says
The expectation gap between the sentience people imagine and the reality of engaging with humanoid robots is often severe. Some robots, such as SoftBank’s NAO machines, are deliberately designed to appear ‘cute’ – as though they are vulnerable beings, rather than computers with bodies made of plastic and servos. In many senses, that can be seen as manipulative.
When Internet of Business editor Chris Middleton met Professor Hiroshi Ishiguro, maker of the Erica android, two years ago, Ishiguro was adamant that his research is designed to explore people’s reactions to humanoid machines, and that in the long term his aim is to use his work to design less lifelike devices that people still engage with. Nonetheless, the issues surrounding a man designing a machine to be a beautiful, passive woman are real, and Ishiguro is not alone in his pursuit.
The question of perceived gender in machines is important, and often troubling. Sage’s Kriti Sharma has done excellent work highlighting the risks of ‘gendered’ machines, which often replicate societal prejudices – assistant, secretary-like AIs that are designed to be female, and professional, decision-making AIs that appear male, and so on. Robots and AIs should celebrate their ‘botness’, she says, rather than perpetuate problems in human society that we are only now starting to unpick.
However, the underlying question is a simple one: what are humanoid robots for? Outside of the realm of sci-fi, the answer is that no-one really knows. Arguably, the quest to design and build humanoid machines is an act of hubris first, and an engineering challenge second, although transhumanists would counter that we are all on a journey towards increased mechanisation, merging the worlds of biology, DNA, and chemistry with those of electronics and robotics; our wearables are the next step on that journey, perhaps.
That said, the coming generation of household robots is already shaping up to be about devices that resemble mobile home hubs or enhanced smartphones, rather than the mechanical men and humanoid companions of lore. Perhaps the truth is that humanoid robots were designed to entertain us all along, while the real work took place elsewhere.
Additional analysis and commentary: Chris Middleton