Mind-reader: MIT’s AlterEgo wearable knows what you’re going to say

A wearable being developed at MIT’s Media Lab knows what its wearer is going to say before any sound is made.

The AlterEgo device uses electrodes to pick up neuromuscular signals in the jaw and face that are triggered by internal verbalisations – all before a single word has been spoken, claim MIT’s researchers.

Every one of us has an internal monologue of sorts, a place where our most intimate thoughts come and go as they please. Now, thanks to sophisticated sensors and the power of machine learning, the act of saying words in your head might not be so private after all.
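To make the idea concrete, here is a minimal sketch of what such a sensing-plus-machine-learning pipeline might look like: window the electrode signals, reduce each window to simple features, and classify it against a small vocabulary. The channel count, window size, features, model, and vocabulary below are all assumptions made for illustration, not details of MIT's system.

```python
# Toy sketch of a silent-speech classification pipeline. Everything here
# (channel count, window size, features, model, vocabulary) is an
# assumption for illustration, not a detail of MIT's AlterEgo system.
import numpy as np
from sklearn.neural_network import MLPClassifier

N_CHANNELS = 7     # number of face/jaw electrodes (assumed)
WINDOW = 250       # signal samples per classification window (assumed)
VOCAB = ["yes", "no", "add", "multiply", "time"]  # toy vocabulary

def featurize(window: np.ndarray) -> np.ndarray:
    """Reduce one (N_CHANNELS, WINDOW) signal window to per-channel
    statistics: root-mean-square energy and mean absolute first difference."""
    rms = np.sqrt((window ** 2).mean(axis=1))
    mad = np.abs(np.diff(window, axis=1)).mean(axis=1)
    return np.concatenate([rms, mad])

# Synthetic stand-in data; a real system would train on recorded signals
# labelled with the word the wearer silently articulated.
rng = np.random.default_rng(0)
X = np.stack([featurize(rng.normal(size=(N_CHANNELS, WINDOW))) for _ in range(500)])
y = rng.integers(len(VOCAB), size=500)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)

# At inference time, each incoming window is featurized and classified.
new_window = rng.normal(size=(N_CHANNELS, WINDOW))
print(VOCAB[clf.predict(featurize(new_window).reshape(1, -1))[0]])
```

Trained on random noise, the classifier above learns nothing useful; the point is only the window-featurize-classify shape that any such system would implement in some form.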

MIT believes that the simple act of concentrating on a particular vocalisation – saying the words in one's head without voicing them – is enough to engage the system and trigger a response, and it has developed an experimental prototype that appears to prove it.

To ensure that the conversation remains internal, the device includes a pair of bone-conduction headphones. Instead of sending sound directly into the ear, these transmit vibrations through the bones of the face to the inner ear, conveying information back to the user without interrupting the normal auditory experience.

Read more: Apple hires Google AI chief to head machine learning | Analysis

The benefits of silent speech

Arnav Kapur, the graduate student who is leading development of the new system at MIT’s Media Lab, wants to augment human cognition with more subtlety than today’s devices allow for. “Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways, and that feels like an internal extension of our own cognition?” he said.

Kapur’s thesis advisor, Professor Pattie Maes, points out that our current relationship with technology – particularly smartphones – is disruptive in the negative sense. These devices demand our attention and often distract us from real-world conversations, our own thoughts, and other things that should demand greater attention, such as road safety.

“We basically can’t live without our cellphones, our digital devices,” she said. “But at the moment, the use of those devices is very disruptive. If I want to look something up that’s relevant to a conversation I’m having, I have to find my phone and type in the passcode and open an app and type in some search keyword, and the whole thing requires that I completely shift attention from my environment and the people that I’m with to the phone itself.”

The challenge is to find a way to alter that relationship without sacrificing the many benefits of portable technology.

“So, my students and I have for a very long time been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present,” she said.

Read more: MIT's CSAIL lab studies aquatic life with robot fish

The potential of AlterEgo

Far from seeing the technology as a precursor to some kind of Orwellian dystopia, the MIT team believes that, once perfected, it could improve the relationship between people and the devices they use, as well as serve a variety of practical functions.

So far, the device has been able to surreptitiously tell users the time and solve mathematical problems for them. It has also given wearers the power to win chess games, silently receiving opponents' moves and offering computer-recommended responses, claims MIT.
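For a sense of how the chess demo's logic might fit together – receive the opponent's move, consult an engine, relay its recommendation back – here is a minimal sketch using the open-source python-chess library and a local Stockfish binary. Both are assumed tooling for illustration, not necessarily what MIT's demo used.

```python
# Minimal sketch of a chess-assist loop: opponent's move in, engine
# recommendation out. python-chess and Stockfish are assumptions made
# for illustration, not necessarily what MIT's demo actually used.
import chess
import chess.engine

engine = chess.engine.SimpleEngine.popen_uci("stockfish")  # binary path assumed
board = chess.Board()

# In the demo, the opponent's move would arrive via the silent-speech
# interface rather than be hard-coded.
board.push_san("e4")

# The engine's recommended reply is what the bone-conduction headphones
# would relay back to the wearer.
result = engine.play(board, chess.engine.Limit(time=0.5))
print(board.san(result.move))

engine.quit()
```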

The team is still collecting data and training the system. “We’re in the middle of collecting data, and the results look nice,” Kapur said. “I think we’ll achieve full conversation some day.”

The platform could one day provide a way for people to communicate silently in environments that are too loud, or too sensitive, for speech – from runway operators to special forces soldiers. And it could perhaps even restore verbal communication to people who have lost the power of speech through illness or accident.

Read more: Health IoT: New wearable can diagnose stomach problems

Internet of Business says

The rise of voice search in the US – where 20 percent of mobile searches are now voice-triggered, according to Google – together with the rapid spread of digital assistants, such as Siri, Alexa, Cortana, Google Assistant, and IBM's new Watson Assistant, has shifted computing away from GUIs, screens, and keyboards. And, of course, smartphones and tablets have moved computers off the desktop and out of the office, too.

However, while voice is the most intuitive channel of human communication, it isn't well suited to tasks such as navigating and selecting from large amounts of visual data – which is why technophiles are always drawn back to their screens.

This new interface will excite many, and may have a range of extraordinary and promising applications. But doubtless it will alarm many others as the rise of AI forces us to grapple with concepts such as privacy, liability, and responsibility.


And let’s hope, too, that this technology doesn’t always translate what’s on human beings’ minds into real-world action or spoken words, as the world could become a bizarre place indeed.

In the meantime, transhumanists will see this as yet another example of the gradual integration of technology with biology – and with good reason. But whether these innovations will encourage us to become more human, and less focused on our devices, is a different matter; arguably, such devices may train human beings to think and behave in more machine-like ways to avoid disorderly thinking.

Meanwhile, thoughts that can be hacked? Don’t bet against it.

Read more: AI regulation & ethics: How to build more human-focused AI

Read more: Fetch launches world’s first autonomous AI smart ledger

Malek Murison is a writer, editor and tech journalist based in London. www.malekmurisonmedia.com