Vulnerable people easily manipulated by humanoid robots, find new studies

SoftBank Robotics' NAO machine

Malek Murison and Chris Middleton report on a brace of new humanoid robotics studies that reveal just how easily human beings can be influenced by cleverly designed machines.

A series of infamous experiments in the 1960s by social psychologist Stanley Milgram suggested that the majority of people are obedient to authority figures – sometimes to an extreme.

His experiments apparently showed that all it took to coerce one person into harming another was a man in a lab coat issuing instructions: the ‘agentic state’ theory, in which human beings subsume their personal responsibility and consciences in the will of an authority figure.

In recent years Milgram’s experiments have been discredited to a degree, but they remain a fascinating study of how authority figures can either push a majority of participants into causing harm to others, or make people feel obligated to please them, depending on how one interprets the results.

But what if that same type of staged process was applied to exploring human-robot interactions? Would human beings harm an emotional robot? Or is the harm, in reality, of a different and more serious nature?

How we treat robots that have social skills

That was the question that researchers from the University of Duisburg-Essen wanted to answer.

To test empathy between humanoids and humans – and the extent to which a robot’s social skills determined how interactions would play out – they recruited 89 volunteers to sit down, separately, with a NAO machine, the toddler-sized humanoid from SoftBank Robotics.

The interactions were split into two distinct styles: social, in which the robot mimicked emotional human behaviour with some participants, and purely functional, in which it acted more like a simple machine with others.

The study, published in the journal PLOS One, explains how participants thought they were taking part in a learning exercise to test and improve the robot’s abilities. But the real purpose of the experiment centred on how the interactions – whether social or functional – ended: once the exercises had finished, scientists asked the participants to switch the robot off.

In around half of these staged interactions, the robot was programmed to object, regardless of whether it had previously behaved in an emotional or functional style. On top of pleading – with empathy-triggering statements like “I’m afraid of the dark” – it would beg, “No! Please do not switch me off!”

Out of the 89 volunteers, 43 were faced with these objections from the NAO machine. Hearing the robot plead not to be switched off, 13 refused point blank to do so, while on average, the remaining 30 took twice as long to comply with the researchers’ instructions as those who didn’t experience the pleas for mercy.

There are further observations to be taken from the study. For example, volunteers faced with a robot apparently begging for its life following a purely functional interaction hesitated the longest out of all the participants. Intriguingly, it seems, the sociable robot was easier to switch off, even when it objected.

Though unexpected, this result points to the role of dissonance in human reactions: when a monotonous, machine-like interaction suddenly appears to gain sentience, or the robot speaks in emotional terms, we take more notice.

Children easily influenced by robots

Another research study, carried out at the University of Plymouth in the UK, found that young children are significantly more likely than adults to have their actions and opinions influenced by robots.

The research compared how adults and children respond to an identical task when in the presence of both their peers and humanoid machines. It showed that while adults regularly have their opinions influenced by peers, they are largely able to resist being persuaded by robots – a finding contradicted by the German results, perhaps.

However, children aged between seven and nine were more likely to give the same responses as the robots, even if these were obviously incorrect.

Writing on the university’s website, Plymouth’s Alan Williams explains how the study used the Asch paradigm, first developed in the 1950s, which asks people to look at a screen showing four lines and say which two match in length. When alone, people almost never make a mistake, but when doing the experiment with others, they tend to follow what others are saying (Milgram’s experiment rears its head once again).

When children were alone in the room in this research, they scored 87 percent on the test, but when the robots joined in, the children’s score dropped to 75 percent. Of the wrong answers, nearly three-quarters (74 percent) matched those of the robot.
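
To make those two figures concrete, the short sketch below shows how a score of that kind, and the share of wrong answers matching the robot, could be computed from per-trial answers. It is a minimal illustration only; the trial data in it are invented and are not the Plymouth dataset.

```python
# Minimal illustration of the two measures quoted above; all answers are invented.
correct = {1: "A", 2: "C", 3: "B", 4: "A"}   # correct option per trial (hypothetical)
robot   = {1: "B", 2: "C", 3: "A", 4: "A"}   # answer given by the robots (hypothetical)
child   = {1: "B", 2: "C", 3: "B", 4: "A"}   # one child's answers (hypothetical)

# Overall score: percentage of trials answered correctly
score = 100.0 * sum(child[t] == correct[t] for t in correct) / len(correct)

# Of the wrong answers, what share matched the robot's answer?
wrong = [t for t in correct if child[t] != correct[t]]
match_rate = 100.0 * sum(child[t] == robot[t] for t in wrong) / len(wrong) if wrong else 0.0

print("score: %.0f%%, wrong answers matching the robot: %.0f%%" % (score, match_rate))
```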

Like the emotional robot study, the Plymouth research reveals concerns about the potential for robots to have a negative or manipulative influence on people – in this case, on vulnerable young children.

The research was led by Anna Vollmer, a postdoctoral researcher at the University of Bielefeld, and Tony Belpaeme, Professor in Robotics at the University of Plymouth and Ghent University.

Professor Belpaeme said, “It shows that children can perhaps have more of an affinity with robots than adults, which does pose the question: what if robots were to suggest, for example, what products to buy, or what to think?”

The Plymouth study concludes: “A future in which autonomous social robots are used as aids for education professionals or child therapists is not distant.

“In these applications, the robot is in a position in which the information provided can significantly affect the individuals they interact with.

“A discussion is required about whether protective measures, such as a regulatory framework, should be in place that minimise the risk to children during social child-robot interaction, and what form they might take, so as not to adversely affect the promising development of the field.”

Research with one eye on the future

Studies like these confirm the findings of previous research in this space: humans are inclined to treat robots and other devices as living beings, particularly if they are able to express – or rather, mimic – sentience in some way.

And that’s significant because, moving forward, how we treat robots, and how they behave with us, will become increasingly important.

As they become more lifelike and ingrained in society in either software or hardware form, robots need to be designed in a way that makes them affable, predictable, and easy to cooperate with.

But the research findings indicate that machines can easily be programmed with behaviours that manipulate human responses.

This suggests that we may need one of two things in the medium term: either a middle ground in which robots are designed to be clearly distinct from people in how they handle interactions, in order to avoid confusion; or acceptance from humans that, despite their apparent sentience, humanoid machines do not deserve our empathy.

In short, vulnerable people may need to be protected from manipulative machines, rather than the other way around. At least until true artificial intelligence – sentient, self-aware machines – emerge years in the future, at which point we may enter a very different age of robot rights.

There are also fears that designing lifelike robots for the sole purpose of objectification – such as those developed for sexual gratification – could normalise predatory and abusive behaviour.

Internet of Business says

The NAO (pronounced ‘Now’) robot – like its larger ‘emotion-sensing’ cousin, Pepper – presents a fascinating anomaly in humanoid robot development. Now commercially available from SoftBank, NAOs were originally designed by France’s Aldebaran Robotics as research platforms for universities and robotics labs.

Aldebaran – acquired by SoftBank five years ago – set out with the goal of creating robots that could be ‘friends’ with humans, rather than presenting a clear, practical application of humanoid robotics.

The NAO machines are small, almost childlike, amusing, speak with light, friendly voices, and are programmed with a range of expressive behaviours. They also sing, dance, and tell stories. As a result, they’re popular in education, including in specialist areas, such as teaching children who are on the autism spectrum.

However, despite their fun design, entertaining behaviour, and sophisticated engineering, they are simply computers: an Intel Atom processor, to be exact, combined with a secondary ARM 9 chip, along with a collection of servos, sensors, microphones, and cameras, all packaged in a tough plastic casing with a cartoon-like face. Everything else is software programmed by human beings.

NAO machines have no AI as most people would recognise it, and merely perform pre-programmed routines, which can either be downloaded from the SoftBank community, or created by owners using the Choregraphe application, developed by Aldebaran in 2008.
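
For a sense of just how scripted these routines are, here is a minimal sketch of the kind of behaviour an owner might write against the NAOqi Python SDK that the robots ship with. The network address and the spoken line are placeholders for illustration, not code from either study.

```python
# A minimal, hypothetical NAOqi routine: every 'emotional' line is a string chosen by a programmer.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"   # placeholder address of a NAO on the local network
PORT = 9559                 # default NAOqi port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)      # standard text-to-speech module
posture = ALProxy("ALRobotPosture", ROBOT_IP, PORT)  # standard posture module

posture.goToPosture("Stand", 0.5)                    # stand up at half speed
tts.say("No! Please do not switch me off!")          # the 'plea' is just a scripted string
```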

However, a recent tie-up between SoftBank and IBM means that NAO and Pepper machines can run as front ends to Watson in the cloud, which has opened up broader applications for the robots in some sectors, such as leisure and retail, when linked with industry-specific data sets.
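
In outline, that pattern is simple: the robot captures an utterance, forwards it to a cloud service, and speaks whatever comes back. The sketch below illustrates the idea only; the endpoint URL and JSON shape are assumptions for illustration, not IBM’s actual Watson API.

```python
# Sketch of the 'robot as cloud front end' pattern; the endpoint and JSON shape are hypothetical.
import requests
from naoqi import ALProxy

ROBOT_IP, PORT = "192.168.1.10", 9559            # placeholder robot address
CLOUD_URL = "https://example.com/assistant"      # stand-in for a Watson-style service

def ask_cloud(question):
    """Send the user's utterance to the cloud service and return its reply text."""
    resp = requests.post(CLOUD_URL, json={"text": question}, timeout=10)
    resp.raise_for_status()
    return resp.json().get("reply", "Sorry, I did not catch that.")

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
tts.say(ask_cloud("What time does the store open today?"))
```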

Nevertheless, NAO machines’ much-publicised autonomy is largely limited to a mode in which they can explore their environment and cycle through other pre-programmed functions randomly.

In short, NAO robots have zero sentience or awareness of human beings; they are clever simulations of life.

As such, they can be viewed as either brilliant design and engineering achievements, or as highly manipulative, deceptive devices that encourage humans to treat machines as having feelings, where none exist. A computer programmed to make people feel they should take care of it is, in some ways, a dangerous – even sinister – concept, outside of the world of toys, at least.

The German university’s research perhaps reveals this fact more than any other.

Disclosure: Internet of Business editor Chris Middleton, author of this commentary, owns the well-known NAO robot, ‘Stanley Qubit’. He has no relationship with SoftBank Robotics.

Malek Murison is a writer, editor and tech journalist based in London. www.malekmurisonmedia.com