Q&A: SEED Project founder Mark Meadows on the risks behind conversational UIs

(Credit: SEED Project)

With increasing regularity, bots are helping us get things done by having conversations with us, understanding our needs, and triggering the appropriate actions. But there are concerns in some quarters that the information these bots garner about us, and the inferences they can make from that information, create new data that can be used in ways we might not have intended.

Interested in the question of who owns this new data, and how it is used, Mark Stephen Meadows set up SEED Project, an independent, decentralised marketplace for developers and deployers of conversational user interfaces (CUIs). Internet of Business caught up with Mark about SEED Project and its aims.

Internet of Business: Can you briefly explain what a ‘conversational interface’ is, what it does and the ways in which it is used?

Mark Stephen Meadows: “A conversational interface is a way for humans to interact with data in the most natural way, according to the way we’ve always interacted with other people – which is primarily speech.

“With a conversational interface, the speech, tone of voice and gestures trigger interactivity with a system, typically an AI-based system. As AI begins to advise us on ever more areas of our lives, it will be conversational interfaces that become our means of interacting with those systems.

“Conversational user interfaces are different from chatbots, which are likely to be relegated to the bottom corner of websites. ‘Assistants’ or ‘CUIs’ are already beginning to surround us, and here we’re really talking about those multimodal voice and video bots rather than simple text chatbots (which don’t collect all this new data).”

Conversational user interfaces are increasingly sophisticated. Can they learn things about us outside of the ‘facts’ of a conversation?

“Absolutely, and this is one of the most important points the world must understand as these systems proliferate. The facts of any conversation are embedded in the method of presentation and subtleties of the interaction. For example, where we are, where we come from, what we are likely to be deciding and, most importantly, why, can all be understood by analysing the ‘affect’ and emotive data that CUIs collect so effectively.

“These new data types are truly revolutionary for understanding user behaviour and decision making, which is why the world has a responsibility to ensure CUIs are designed ethically.”

Are there ethical issues about how this information could be used?

“From a theoretical perspective, whenever there is information asymmetry, an imbalance of power emerges, and when that happens we’re very quickly into an ethics discussion. CUIs provide the owners of those systems with a huge information advantage, and how that data is used is of great concern, as it influences how we make decisions.

“An early example (and this is a company doing it for the right reasons) is Ellipsis Health in San Francisco, which uses machine learning to analyse audio recordings of conversations between doctors and patients during appointments. The software works as a screening tool to flag patients whose speech matches the voice patterns of depressed individuals, alerting clinicians to follow up with a full diagnostic interview.

“The program was trained by taking millions of conversations between non-depressed individuals and mining them for key features in speech patterns, such as pitch, cadence and enunciation.

Conversational user interfaces could be used to identify new data about users’ personality and background (credit: SEED Project)

“Similarly, the Priori app from the National Institute of Mental Health in the US runs in the background on an ordinary smartphone and automatically monitors a patient’s voice patterns during calls to alert bipolar patients to an impending change in mood. Clearly, this is another example of ethical use.

“But we must realise this type of powerful and intimate understanding of individuals through voice interaction will become the norm, and that’s why it is so important that CUIs are ethical by design.

“Imagine a 55-year-old black woman from Oakland applying for health insurance via a CUI, such as a videobot. The system could take genetic sampling from the appearance of her face and ask, for example, do people with her shape of ear tend to suffer more heart attacks? Or do people with her specific eye colour tend to contract cancer? These are the types of models being built now, and with CUIs there’s the means to factor them in, so the privacy, fairness and use of the data must be kept symmetric.

“We’re working hard at SEED to build a platform on which CUIs can be designed and launched in a way that protects user privacy, and where the bots are authenticated and trustworthy.”
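The kind of vocal analysis Meadows describes (pitch, cadence, enunciation) can be sketched in outline. The snippet below is a minimal, illustrative example of extracting such features from a recording, assuming the open-source librosa library; it is not the actual pipeline of Ellipsis Health, Priori or SEED.

```python
# Illustrative sketch only: extracting the kinds of vocal features
# (pitch, cadence, loudness) that speech-screening models analyse.
# librosa is an assumed open-source library; this is not the actual
# pipeline of Ellipsis Health, Priori, or SEED.
import numpy as np
import librosa

def voice_features(path: str) -> dict:
    """Summarise pitch, speaking cadence and loudness for one audio clip."""
    y, sr = librosa.load(path, sr=16_000, mono=True)

    # Fundamental frequency (pitch) track via probabilistic YIN.
    f0, voiced_flag, _ = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )
    voiced_f0 = f0[voiced_flag]  # keep voiced frames only

    # Cadence proxy: rate of onset (syllable-like) events per second.
    onset_times = librosa.onset.onset_detect(y=y, sr=sr, units="time")
    duration_s = len(y) / sr

    # Loudness proxy: root-mean-square energy per frame.
    rms = librosa.feature.rms(y=y)[0]

    return {
        "mean_pitch_hz": float(np.mean(voiced_f0)) if voiced_f0.size else 0.0,
        "pitch_variability_hz": float(np.std(voiced_f0)) if voiced_f0.size else 0.0,
        "onsets_per_second": len(onset_times) / duration_s,
        "mean_rms_energy": float(rms.mean()),
    }
```

A screening model of the kind Meadows mentions would then look for statistical patterns in features like these across large numbers of labelled recordings.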

Currently, people don’t necessarily know they are talking to a bot. Should there be more openness about when bots are being used, and should people be able to see data that is collected, and inferences that are made about them from analysing that data?

“We always recommend bots are purposely designed so they don’t resemble humans. CUIs we’ve built have had cartoon-style avatars, and we usually adjust the voice so it sounds slightly off-human. Why? Because these AI systems have a growing influence over us, and there’s a fine line between a system that makes our lives easier and one that manipulates.

“Google Duplex has shown us that we can no longer trust the human voice on the phone. When you also consider Adobe Voco, which can take sections of a person’s voice and splice them together to create a statement with a completely different meaning, it’s clear we are facing a future in which the ability to trust the system we’re speaking with is of primary importance.

“The SEED platform incorporates blockchain and is designed specifically so that CUIs built on the platform are identified, authenticated and certified. To enable this, each CUI is a unique entity with its own identifying criteria, including who designed and built it, which is logged on the blockchain.

“To be authenticated, each SEED CUI is then verified on the network to ensure it is indeed the bot it claims to be. Part of the problem with a bot built on the Alexa Skills Store is that people think they’re only talking with Amazon; they don’t realise data is shared with third parties too.

“We’re not there yet, but in our view, being certified would require the creator of the CUI to be proven trustworthy enough to handle the data the bot collects. Clearly, this is a big issue and one where we welcome discussions with policy makers and regulators. We believe it will come in time, though.”
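In broad strokes, the identify-authenticate-certify flow Meadows outlines reduces to: hash a bot’s identity record, anchor the hash to an immutable ledger, and have verifiers recompute the hash before trusting the bot. The Python below is a minimal sketch of that idea under stated assumptions; the field names and the in-memory ‘ledger’ are hypothetical, and this is not SEED’s actual protocol.

```python
# Minimal sketch of the identity-and-verification idea described in
# the interview: each bot has an identity record whose hash is
# anchored to a ledger, and a verifier recomputes the hash to
# authenticate. Field names and the in-memory "ledger" are
# assumptions for illustration; this is not SEED's actual protocol.
import hashlib
import json

def fingerprint(record: dict) -> str:
    """Deterministic SHA-256 fingerprint of a bot's identity record."""
    canonical = json.dumps(record, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Identity record logged when the bot is published.
bot_identity = {
    "bot_id": "health-advisor-001",        # hypothetical identifier
    "developer": "Example Labs",           # who designed and built it
    "data_policy": "no-third-party-share", # declared data handling
    "version": "1.2.0",
}

# "Logging on the blockchain" stands in here for appending the
# fingerprint to an append-only ledger.
ledger = {bot_identity["bot_id"]: fingerprint(bot_identity)}

def authenticate(claimed_identity: dict) -> bool:
    """A bot is authentic only if its record matches the ledger entry."""
    expected = ledger.get(claimed_identity.get("bot_id", ""))
    return expected is not None and expected == fingerprint(claimed_identity)

assert authenticate(bot_identity)                        # genuine bot passes
tampered = {**bot_identity, "developer": "Impostor Inc"}
assert not authenticate(tampered)                        # altered record fails
```

One reason to anchor only the fingerprint, rather than the record itself, is that it keeps the bot’s details off the public ledger while still letting anyone detect tampering.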

Are bots better at conversations than humans?

“I believe humans will always be better at the subtleties of conversation. But bots can soak up much more information than we can; they can get at the hidden meaning in that data, and they never forget any of it.

“There’s a great line from the film AI, ‘It’s not whether you love her or not, it’s whether you make her feel you love her.’

“Today’s bot designers tend to be authors, poets and word people. In the next five years that will change, and we already see psychologists entering the picture.

“Bots will be designed so people get more and more satisfaction from interacting with them, to encourage more usage, and more data collection.”

What does the law need to do to catch up?

“This is a tough one, because what seems appropriate in Europe really doesn’t to people in China. However, I believe we do need international standards that give people visibility into how their data is sorted, used and monetised.

“In the interim we’ve designed SEED so users can select the level of privacy they want when interacting with a bot built on the SEED platform, and if they do decide to share data they are rewarded for doing so.”
