Propaganda chatbots and manipulative AI: Worse to come, says MIT

Internet of Business says

The political landscape on either side of the Atlantic has rarely felt as polarised as it does today. Whatever your politics, the divisive tone of public discourse suggests that all solutions are binary. The centre ground of politics, which seemed to flourish in the 90s and the early years of this century – socially liberal but economically conservative – now seems like a relic of a bygone age.

Some argue that this is a simple acknowledgement that the status quo could not exist for long in the wake of the 2008-09 recession, but few predicted the paroxysms of Brexit or the 2016 US Election. Many felt that something needed to change, but were surprised – or alarmed – by what eventually did.

But others suggest that political discourse has become increasingly polarised in line with the emergence of social media as our primary platforms for discussion. This is because the newsfeeds of Twitter, Facebook, Instagram, and the rest, are largely populated with images, articles, and memes based on a user’s engagement history.

These platforms fight for our attention by giving us more of what we’ve Liked in the past. As a result, we are all getting lost in a hall of mirrors that reflect our own images back at us, distorted. Or an echo chamber in which we only hear voices we agree with, condemning things we vehemently oppose.

The net result is that we are becoming increasingly entrenched in our positions. Put another way, many of us now receive all of our information about current affairs through channels that are made just for us, in our own image.

Inevitably (at least, with the benefit of hindsight), the way in which social platforms enable this form of manipulation – in order to target advertising – has been harnessed by those seeking to distort political discourse for their own ends.

For example, Macedonian teenagers are thought to have made a fortune in advertising revenue by publishing fake stories around the 2016 US election, and Cambridge Analytica of course profited from targeted political campaigns based on Facebook user data.

Propaganda bots

Some academics are now warning that those attacks on the democratic process are only going to get smarter and more subversive, particularly as the anonymity of many social media users makes it tough to separate fact from fiction online.

The University of Oxford’s Computational Propaganda Project has studied countless examples of social media manipulation, recently publishing a report arguing that “the manipulation of public opinion over social media platforms has emerged as a critical threat to public life”.

In an article for MIT Technology Review, Lisa-Maria Neudert, doctoral candidate at the Oxford Internet Institute and a researcher with the Computational Propaganda Project, suggests that the increasing sophistication of bot accounts – automated, AI-powered feeds masquerading as real people – means that worse is still to come.

It’s a straightforward process for nation states and political campaigns to build an army of bot accounts that will amplify certain viewpoints online.

And it’s not just about repetitively posting fake news or extremist opinions. It can be more subtle than that: sharing and Liking content from genuine accounts, adding to the pool of interactions, thereby gaming the algorithms and fanning the flames of controversy.
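The mechanism is simple to illustrate. The toy simulation below – a deliberately naive engagement score, not any platform's real ranking algorithm – shows how a small botnet of Likes and shares can lift fringe content above organic posts in a feed:

```python
# Illustrative sketch (hypothetical scoring, not any platform's real
# algorithm): a toy feed ranked by engagement, showing how a handful of
# automated "likes" and "shares" can push fringe content to the top.

def rank_feed(posts):
    """Sort posts by a naive engagement score: likes + 2 * shares."""
    return sorted(posts, key=lambda p: p["likes"] + 2 * p["shares"], reverse=True)

posts = [
    {"id": "mainstream", "likes": 40, "shares": 5},   # score 50
    {"id": "fringe",     "likes": 12, "shares": 3},   # score 18
]

before = [p["id"] for p in rank_feed(posts)]

# A small botnet engages with the fringe post.
posts[1]["likes"] += 30    # fringe likes: 42
posts[1]["shares"] += 15   # fringe shares: 18 -> score 78

after = [p["id"] for p in rank_feed(posts)]

print(before)  # ['mainstream', 'fringe']
print(after)   # ['fringe', 'mainstream']
```

The point is not the arithmetic but the leverage: because ranking rewards raw interaction counts, a few dozen automated accounts can change what thousands of genuine users see.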

At the moment, it’s relatively easy to spot a fake social media account, just as it’s easy to detect YouTube videos compiled by AIs and narrated by speech synthesis systems.

Fake accounts tend to be triggered by keywords and engage with boilerplate responses. The telltale signs include clunky language, repetitive posts, a default profile picture – and perhaps staunch support for Vladimir Putin – all obvious clues. Twitter has taken down millions of suspicious accounts in the past year.
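Those telltale signs can be turned into crude heuristics. The sketch below – hypothetical thresholds and field names, not Twitter's actual detection logic – flags accounts that combine a default avatar with heavily repeated posts:

```python
# A minimal sketch of the heuristics described above (hypothetical
# thresholds, not any platform's real detection logic): flag accounts
# whose posts are highly repetitive and that still use a default avatar.

def repetition_ratio(posts):
    """Fraction of posts that duplicate an earlier post."""
    if not posts:
        return 0.0
    return 1 - len(set(posts)) / len(posts)

def looks_like_bot(account, repetition_threshold=0.5):
    """Crude heuristic: default avatar plus heavily repeated posts."""
    return (account["default_avatar"]
            and repetition_ratio(account["posts"]) >= repetition_threshold)

human = {"default_avatar": False,
         "posts": ["morning!", "great game last night", "new blog post up"]}
bot = {"default_avatar": True,
       "posts": ["VOTE NOW", "VOTE NOW", "VOTE NOW", "fake news!"]}

print(looks_like_bot(human))  # False
print(looks_like_bot(bot))    # True
```

Heuristics this simple are exactly what smarter bots will evade: vary the wording, upload a stolen profile photo, and both checks fail – which is the article's point about the next generation of propaganda accounts.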

But these profiles will become smarter and more evasive over time, particularly given advances in natural-language processing. One fear is that the type of AI technology driving Amazon’s Alexa, Google’s Duplex, and Microsoft’s Cortana could help bots pass themselves off as human with increasing ease.

Many tech giants have made open-source algorithms for natural-language processing available to developers, opening the door to a new wave of convincing propaganda bots.

Video is also increasingly easy to fake, with AI and image manipulation systems able to create fairly convincing footage of someone saying words they never said. As such technologies become more sophisticated, it will become harder to spot the fakes, meaning that we may begin to lose the ability to distinguish evidenced fact from fiction.

The video below demonstrates just such a technology in action. Linked to a propaganda chatbot, the manufacture of this type of fake footage could become completely automated – a production line of fakes masquerading as real-life clips.

The future of social media manipulation

Neudert argues that conversational bots will soon become more targeted and “seek out susceptible users and approach them over private chat channels. They’ll eloquently navigate conversations and analyse a user’s data to deliver customised propaganda. Bots will point people toward extremist viewpoints and counter-arguments in a conversational manner.”

Rather than broadcasting propaganda to everyone, these bots will direct their activity at influential people or political dissidents, she says. They’ll attack individuals with scripted hate speech, overwhelm them with spam, or get their accounts shut down by reporting their content as abusive.

It’s a troubling prospect. Since 2010, political parties and governments have themselves ploughed over half a billion dollars into social media manipulation. And it seems as though the industry is only just getting started.

Social media platforms have enabled free speech and debate on a scale never seen before. But we were naive to assume it would only be humans taking part.