MIT introduces brain-controlled robots
[Image: MIT brain-controlled robot. Credit: MIT]

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are leading research into controlling robots using brainwaves and simple hand movements.

Their work is designed to ease the complex challenges of robotics control systems, which often require dedicated programming and even language-processing capabilities.

The new system allows a robot’s supervisor to correct mistakes instantly and more intuitively, using a combination of gestures and brain signals, paving the way towards fully brain-controlled robots and countless new applications.

According to MIT, the system can detect in real time if a user has noticed the robot make an error. An interface that measures muscle activity then picks up the operator’s hand gestures to navigate through a list of options and select the necessary corrective actions.
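
Based on that description, the supervision loop might look something like the minimal sketch below. Everything here – the simulated signal sources, the threshold-based error detector, the RMS-based gesture classifier – is a hypothetical stand-in for illustration, not CSAIL’s actual code:

```python
import numpy as np

TARGETS = ["left_target", "middle_target", "right_target"]

# --- Hypothetical signal sources (stand-ins for real EEG/EMG streams) ---

def read_eeg_window() -> np.ndarray:
    """Latest buffered EEG window, shaped (samples, channels)."""
    return np.random.randn(128, 48)

def read_emg_window() -> np.ndarray:
    """Latest buffered forearm-EMG window, shaped (samples, electrodes)."""
    return np.random.randn(256, 3)

# --- Hypothetical classifiers (a real system would train these) ---

def detect_errp(eeg: np.ndarray) -> bool:
    """Return True if the window looks like an error-related potential."""
    score = float(eeg.mean())   # stand-in for a trained model's output score
    return score > 0.01         # illustrative threshold only

def classify_gesture(emg: np.ndarray) -> str:
    """Map muscle activity to a menu gesture via per-electrode RMS energy."""
    rms = np.sqrt(np.mean(emg ** 2, axis=0))   # one energy value per electrode
    return ("SCROLL_LEFT", "SCROLL_RIGHT", "CONFIRM")[int(np.argmax(rms))]

# --- Supervision loop: EEG flags the mistake, EMG gestures pick the fix ---

def supervise(max_steps: int = 1000) -> None:
    choice = 0
    for _ in range(max_steps):
        if detect_errp(read_eeg_window()):
            print("ErrP detected: pausing robot, opening correction menu")
            gesture = None
            while gesture != "CONFIRM":          # gestures navigate the menu
                gesture = classify_gesture(read_emg_window())
                if gesture == "SCROLL_LEFT":
                    choice = (choice - 1) % len(TARGETS)
                elif gesture == "SCROLL_RIGHT":
                    choice = (choice + 1) % len(TARGETS)
            print(f"Corrected target: {TARGETS[choice]}")
            return

supervise()
```

The notable design point is the division of labour: the EEG channel only has to answer a fast, involuntary binary question – did the supervisor just notice a mistake? – while the EMG channel handles the slower, more deliberate job of choosing the correction.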

The team’s previous research created a brainwave feedback system, using new machine-learning algorithms that could handle simple binary-choice tasks. They are now building on that work to include multiple-choice activities, vastly increasing the potential scope of robotic systems that are controlled in this way.

A brainwave moment

MIT’s latest experiment saw a humanoid robot known as Baxter, from Rethink Robotics, use a power drill on one of three possible targets, controlled by an operator using the new interface.

CSAIL director Daniela Rus, who supervised the work, commented on the research’s implications:

This work combining electroencephalography (EEG) and electromyography (EMG) feedback enables natural human-robot interactions for a broader set of applications than we’ve been able to do before, using only EEG feedback. By including muscle feedback, we can use gestures to command the robot spatially, with much more nuance and specificity.

The technology is even able to pick up the brain signals of new users, meaning that applications could be deployed quickly, without the need for lengthy and expensive training.

Most previous work in this area has involved systems that could only recognise brain signals when people trained themselves to think in specific ways – by looking at different light displays that each represented a certain robot task, for example. Constraints of that kind are impractical in most real-world deployments.

The latest MIT research bypasses this problem by using specific brain signals known as error-related potentials (ErrPs), which are generated automatically when people notice mistakes. When the system detects an ErrP, the robot stops and awaits further instruction from its supervisor.
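
For a concrete sense of what ErrP detection involves on the signal-processing side, here is a hedged sketch: the roughly 800 ms epoch length, the 1–10 Hz band, and the sampling rate are typical values from the wider BCI literature, not figures taken from the MIT paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 256  # assumed EEG sampling rate in Hz

def bandpass(eeg: np.ndarray, lo: float = 1.0, hi: float = 10.0) -> np.ndarray:
    """Band-pass filter the slow cortical band where ErrPs typically appear."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=0)

def epoch_after_action(stream: np.ndarray, action_sample: int) -> np.ndarray:
    """Cut an ~800 ms epoch starting at the robot's action onset."""
    return stream[action_sample : action_sample + int(0.8 * FS)]

# Example: extract one post-action epoch from 10 s of fake 48-channel EEG.
stream = np.random.randn(10 * FS, 48)
epoch = epoch_after_action(bandpass(stream), action_sample=2 * FS)
print(epoch.shape)  # (204, 48): ~800 ms of filtered EEG, ready for a classifier
```

A trained classifier would then score each such epoch to decide whether the supervisor had just noticed an error.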

Lead author and PhD candidate Joseph DelPreto sees this as a significant breakthrough:

What’s great about this approach is that there’s no need to train users to think in a prescribed way. The machine adapts to you, and not the other way around.

“By looking at both muscle and brain signals, we can start to pick up on a person’s natural gestures, along with their snap decisions about whether something is going wrong,” he explained. “This helps make communicating with a robot more like communicating with another person.”

Internet of Business says

While EEG and EMG signal detection methods each remain relatively limited at present, combining the two technologies enables more complex and practical use cases.

It also avoids users having to engage in mentally taxing, arbitrary thought processes in order to generate the types of brainwaves that a robot can act on. It does this by combining the strengths of the two methods – EEG, for its speed and intuitiveness, and EMG for its ease of use and broad applications.

The prospect of simpler, more intuitive robot control gives this research huge potential value. The ability of operators in factories, construction sites, surgeries, farms, and other increasingly automated environments to control advanced robots with their thoughts and hand gestures, after just a little training, holds enormous promise for productivity gains. The hands-free nature of the technology may also enable users to carry out other tasks simultaneously.

As the research itself highlights, the elderly and those with language disorders or limited mobility could also benefit from access to this technology, as assistive robotics enters more and more health and social care environments.

Inevitably, there will be popular fears about ‘mind-reading’ robots in a media environment that constantly second-guesses nearly every application of robotics and AI. However, it’s fair to say that a scenario of seamless, unspoken communication between an advanced robot and its operator would be both powerful and potentially open to abuse.

We’re a long way from that, but research such as this suggests it’s a vision that could one day move out of the pages of science fiction and into the real world. That will bring with it legal and moral questions around culpability, privacy, and the concept of consciousness itself – questions that may well mirror the technology in their complexity.