Cambridge Analytica vs Facebook: Why AI laws are inadequate

Cambridge Analytica CEO Alexander Nix speaks at Online Marketing Rockstars (OMR) in Hamburg, Germany, 3 March 2017. Photo by: Christian Charisius

UPDATED 20 March 2018

News that Facebook has suspended the artificial intelligence, analytics and ‘strategic communications’ firm Cambridge Analytica from its platform has sent shockwaves around the world.

But it should also have rung alarm bells about the potential for AI and analytics to be deployed without data subjects’ consent – an issue that will remain a grey area for as long as our legal systems are left playing catch-up with the technologies’ advances.

According to reports published in the Observer and Guardian newspapers in the UK, Cambridge Analytica, directly or indirectly, harvested the data from 50 million Facebook profiles. The aim was to build algorithms that could both predict and influence voting behaviour, based on apparently unrelated information, according to a company insider.

The company has denied using Facebook data in its US election campaigns.

The British company is backed by hedge fund billionaire Robert Mercer, and worked on both Ted Cruz’s 2015 election campaign in the US, and on Donald Trump’s successful run for the White House.

Whistleblower Christopher Wylie told the Observer and Guardian newspapers, “We built models to exploit what we knew about [US voters] and target their inner demons. That was the basis the entire company was built on.”

In a video for the Guardian, he described the programme as a “grossly unethical experiment” that was “playing with the psychology of an entire country, without their consent or awareness”.

Observer reports claim that the social network knew about the problem as far back as 2015, but took limited action until recently. Facebook has denied this.

Cambridge Analytica responded to Facebook’s ban, saying: “Cambridge Analytica fully complies with Facebook’s terms of service and is currently in touch with Facebook following its recent statement that it had suspended the company from its platform, in order to resolve this matter as quickly as possible.”

It added: “Cambridge Analytica only receives and uses data that has been obtained legally and fairly. Our robust data protection policies comply with US, international, European Union, and national regulations.”

The company sought to blame its partner, Global Science Research (GSR), for the debacle, claiming, “In 2014, we contracted a company led by a seemingly reputable academic at an internationally-renowned institution to undertake a large scale research project in the United States.

“This company, Global Science Research, was contractually committed by us to only obtain data in accordance with the UK Data Protection Act and to seek the informed consent of each respondent.”

GSR is a company set up by Cambridge University scientist Dr Aleksandr Kogan, who the Guardian and Observer claim received funding from the Russian government to study people’s behaviour on social networks, in his capacity as an associate professor at St Petersburg University. Cambridge student newspaper Varsity has published its own independent report on Kogan.

Wylie alleges that Kogan developed the AI and analytical tools that Cambridge Analytica used to influence US voters. These reportedly deployed microtargeting and psychographic techniques to create campaigns designed to appeal to individuals’ or groups’ beliefs and preferences.
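
To make those terms concrete, here is a minimal, entirely hypothetical sketch of psychographic segmentation: users are scored against assumed personality traits and grouped by their dominant trait, so that each segment can be shown different messaging. The trait names, scores, and grouping rule below are invented for illustration; they do not describe Cambridge Analytica’s actual models.

```python
# Hypothetical illustration of psychographic segmentation.
# Trait names, scores, and the grouping rule are invented for this
# example; they do not describe Cambridge Analytica's models.
from collections import defaultdict

# Each user is represented by inferred trait scores (0.0 - 1.0),
# e.g. derived from likes, shares, and other behavioural signals.
users = {
    "user_a": {"openness": 0.9, "neuroticism": 0.2},
    "user_b": {"openness": 0.3, "neuroticism": 0.8},
    "user_c": {"openness": 0.4, "neuroticism": 0.7},
}

def dominant_trait(traits: dict) -> str:
    """Return the trait with the highest inferred score."""
    return max(traits, key=traits.get)

# Group users by their dominant trait, so each segment can receive
# messaging tailored to that trait -- the essence of microtargeting.
segments = defaultdict(list)
for user_id, traits in users.items():
    segments[dominant_trait(traits)].append(user_id)

print(dict(segments))
# {'openness': ['user_a'], 'neuroticism': ['user_b', 'user_c']}
```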

The firm is also known to have worked on behalf of the Leave campaign in the UK’s Brexit referendum.

Cambridge Analytica CEO Alexander Nix, educated at Eton and Manchester University, has worked on more than 40 political campaigns in the US, the Caribbean, South America, Europe, Africa, and Asia over the past nine years, according to marketing and media publication Campaign.

• The UK’s Channel 4 News ran an exposé of working practices within the company’s political campaigns on 19 March 2018. The undercover report undermines Cambridge Analytica’s positioning as a purely data-focused AI and analytics service by exposing a culture of deliberate political manipulation.

In the Channel 4 report, Mark Turnbull, MD of Cambridge Analytica Political Global, is filmed saying, “We just put information into the bloodstream of the internet, and then watch it grow, give it a little push every now and again… like a remote control. It has to happen without anyone thinking ‘that’s propaganda’, because the moment you think ‘that’s propaganda’, the next question is, ‘who’s put that out?’.”

Once more unto the breach

Cambridge Analytica’s alleged harvesting and misuse of Facebook data should be seen in the context of multiple reports last year of Russian troll farms using social platforms such as Facebook, Twitter, and Instagram to foment political dissent in the US, the UK, and elsewhere.

In September 2017, Facebook admitted to Congress that hundreds of fake accounts had spent an estimated $100,000 on 2016 campaigns to stir up protests in the US and elsewhere on a variety of issues, including race relations and gun control.

The Cambridge Analytica story has been portrayed as an unprecedented ‘data breach’, but really it should stand as a warning that existing laws are little more than shouting into a hurricane when it comes to these types of AI and analytics deployments.

Nix reveals the depth of CA’s US voter data.

Problems like this will only get more commonplace until better legal protections are put in place to prevent any organisation from using AI to abuse public trust in technology platforms, such as Facebook, or NHS systems.

This is particularly true when many types of AI, such as neural networks, remain inscrutable ‘black box’ solutions.

At the centre of this legal minefield are two issues: consent, and AI’s predictive abilities – in other words, the potential for organisations to use AI to predict things about a person that the data subject may not be aware of, or may not have agreed to share.

Existing data protection rules have proved too weak or ambiguous to prevent organisations and technology companies from harvesting data from systems in which people have placed their trust.

Arguably, AI’s predictive abilities call into question the very concepts of privacy and secret ballots, if systems are able to infer private beliefs or behaviours from the data shared on technology platforms.

In such a world, informed consent and transparent terms and conditions will be essential, but as yet there is no standard platform – such as a personal API – that allows citizens to manage either their consent or their personal data.
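
No such platform exists today, but a personal consent API might look something like the minimal sketch below, in which a data subject records consent decisions per organisation and per purpose, and any system wanting to process their data is expected to check first. All names, fields, and methods here are hypothetical.

```python
# Hypothetical sketch of a 'personal API' for consent management.
# No such standard exists today; the class, fields, and method names
# below are invented for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    organisation: str   # who wants the data
    purpose: str        # e.g. "ad targeting", "academic research"
    granted: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class PersonalConsentAPI:
    subject_id: str
    records: list = field(default_factory=list)

    def grant(self, organisation: str, purpose: str) -> None:
        self.records.append(ConsentRecord(organisation, purpose, True))

    def revoke(self, organisation: str, purpose: str) -> None:
        self.records.append(ConsentRecord(organisation, purpose, False))

    def is_permitted(self, organisation: str, purpose: str) -> bool:
        """The most recent decision for this organisation and purpose wins."""
        for record in reversed(self.records):
            if record.organisation == organisation and record.purpose == purpose:
                return record.granted
        return False  # default: no consent

# Usage: a platform checks consent before sharing profile data.
me = PersonalConsentAPI(subject_id="citizen-123")
me.grant("ExampleResearchCo", "academic research")
print(me.is_permitted("ExampleResearchCo", "ad targeting"))       # False
print(me.is_permitted("ExampleResearchCo", "academic research"))  # True
```

The design point of the sketch is the default: absent an explicit, recorded grant for a specific purpose, processing is refused, which is the opposite of today’s blanket terms and conditions.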

As a result, most people agree to Ts and Cs that they have never read, across multiple technology platforms, in order to access tools to help them collaborate.

One of those tools, Facebook, has become the ultimate data sandbox for AI and analytics systems to play in – including those that are backed by political agendas and big money.

Whether it accessed Facebook data or not, Cambridge Analytica is backed by political agendas and big money – both in investment terms and through multimillion-dollar commissions from political candidates.

That said, Facebook can hardly complain about organisations playing in its sandbox, because that is precisely what it does itself. Data about its two billion registered users, and one billion active ones, is what drives Facebook’s $40 billion (2017) advertising revenues.

At IPO, Facebook was making less than one dollar per active user in revenue; it is now making over $40: clearly the data sandbox works.

It has been suggested that either a personal API platform or blockchain-based systems could allow people to take back control over their own data and its management.

Read more: Opinion: Use blockchain to build a global data commons

The China syndrome

Facebook’s only rival in the AI/data sandbox game is a country: China, whose one billion-plus population is roughly the size of the social network’s active user base. Those citizens will soon be the subject of a compulsory social ratings system, using data harvested by AI, facial recognition systems, and more, thanks to technology partners such as Alibaba and Megvii / Face++.

In 2017, Face++ received Chinese government-led funding of some $460 million: more than the UK’s entire central investment in AI and robotics over the next three years.

In 2020, China will introduce its mandatory system to rate the trustworthiness of its own people. The scheme will be all-encompassing, covering citizens’ creditworthiness, medical history, shopping habits, friend networks, and more.

Arguably, this is the kind of mass deployment of technology that GDPR is designed to prevent happening in Europe, and which could only be deployed by governments in the West on national security grounds – as the UK’s state surveillance scheme has been.

By awarding low scores for bad behaviour – and using AI to infer intent – China hopes that the scheme won’t just monitor people’s behaviour, but also influence it via reward schemes.

Under the programme, citizens with high scores will benefit from state loans, faster check-in at airports, prominence on dating sites, and more, while penalties for poor social rankings will include slower internet speeds, travel bans, and even removal of the right to buy goods.
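
As a purely illustrative sketch of that mechanism – not of China’s actual implementation, whose scoring rules have not been published in full – a trust score might gate access to services along these lines. All thresholds below are invented.

```python
# Illustrative sketch of score-gated services, based only on the
# rewards and penalties reported above. The thresholds and rules are
# invented; the real scheme's scoring criteria are not public.
def services_for(score: int) -> dict:
    """Map a hypothetical trust score (0-1000) to access levels."""
    return {
        "state_loans": score >= 700,
        "fast_airport_checkin": score >= 700,
        "full_speed_internet": score >= 500,
        "travel_permitted": score >= 400,
        "purchases_unrestricted": score >= 300,
    }

print(services_for(750))  # high score: all benefits unlocked
print(services_for(350))  # low score: throttled internet, no loans, no travel
```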

The Chinese government claims the programme will forge “a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity, and the construction of judicial credibility.”

“If trust is broken in one place, restrictions are imposed everywhere,” adds the policy, which will “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step”.

When the law is an ass

With stories like Cambridge Analytica and China’s social ratings programme in the headlines, it’s clear that AI’s predictive abilities are the area where the law is weakest and, arguably, most in need of reform.

For example, experiments have already been carried out to determine if an AI system can predict someone’s sexuality based on a photograph, and separately to infer the likelihood of someone having a variety of different medical conditions.

Questions then arise, such as: does any organisation have a right to this information without the subject’s knowledge or informed consent? To which the answer, currently, is: what unambiguous law prevents an organisation from simply Googling an individual and then applying AI to whatever data they find? Or using AI to analyse the social profiles of 50 million US voters, or trawl through 1.6 million NHS records?

The follow-up questions must then be: what if the AI algorithm has been trained with biased training data, makes the wrong predictions, or is statistically right only a percentage of the time? What rights or legal redress does the data subject have in any of these cases, if they have been adversely affected by AI or by automated decision-making?
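
The ‘statistically right only a percentage of the time’ problem is worth making concrete. The numbers below are assumptions chosen purely for illustration, but they show how a classifier that sounds accurate can still wrongly flag far more people than it correctly identifies when the trait being predicted is rare.

```python
# Illustrative base-rate arithmetic with assumed numbers: a classifier
# that is 90% accurate, applied to 1,000,000 profiles, predicting a
# condition that only 1% of people actually have.
population = 1_000_000
base_rate = 0.01         # 1% actually have the condition
sensitivity = 0.90       # correctly flags 90% of true cases
specificity = 0.90       # correctly clears 90% of non-cases

true_cases = population * base_rate
non_cases = population - true_cases

true_positives = true_cases * sensitivity        # 9,000
false_positives = non_cases * (1 - specificity)  # 99,000

flagged = true_positives + false_positives
print(f"Flagged: {flagged:,.0f}")
print(f"Wrongly flagged: {false_positives:,.0f} "
      f"({false_positives / flagged:.0%} of everyone flagged)")
# Roughly 92% of the people the system flags do not have the condition.
```

That gap between headline accuracy and real-world error rates is exactly where questions of redress bite.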

There are yet more questions. Such as, does the fact that data might be shared on a social platform make it fair game for the application of predictive AI? And where does the public domain begin and end? For example, should someone’s friend network be regarded as public, or private?

These are just some of the questions that future laws must address.

Until then, the potential use of such algorithms by insurance companies, banks, mortgage lenders, or even governments, could lead to people being denied services without having any idea why.

Lord Clement-Jones.

These questions have already been raised in Parliament. In September 2017, Lord Clement-Jones, chair of the UK’s Parliamentary Select Committee on the ethical and social implications of AI, said: “How do we know in the future, when a mortgage, or a grant of an insurance policy, is refused, that there is no bias in the system?

“There must be adequate assurance, not only about the collection and use of big data, but in particular about the use of AI and algorithms. It must be transparent and explainable, precisely because of the likelihood of autonomous behaviour. There must be standards of accountability, which are readily understood.”

Fog of responsibility

Andrew Joint, managing and commercial technology partner at law firm Kemp Little, raised similar concerns at a Westminster eForum AI conference in February.

Speaking at the event, Joint suggested that the rise of AI calls into question longstanding legal principles such as liability, duty of care, and criminal conviction – principles that are a “codification of our moral and ethical standpoints”.

In other words, if an autonomous car kills someone, or a robot doctor prescribes the wrong medicine, who is responsible and who is liable? Can a machine be said to have a duty of care?

There are further legal questions. For example, where might the fault lie: in the algorithm itself, or with the training data, the sensors, the hardware, the architecture, the system design, the manufacturer, or the customer’s implementation?

The problem with AI and autonomous systems in general is that there is a legal and ethical fog where once there was clarity. The absence of clear human responsibility may make it impossible to establish why decisions have been made, and by whom (or what). And that means no liability, and potentially no redress for consumers.

However, Joint suggested that there is growing awareness of, and demand for, better legal protections to be put in place.

“A wave is building momentum,” he said. “There is going to be rising demand for people to be able to show their workings in terms of how decisions were made, how predictions were reached, and what was done with data at a certain point.

“They’re going to need, from a data privacy point of view, to be able to explain what happened with somebody’s data, and demonstrate why decisions were made in relation to that data, and why predictions were made.”
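
In engineering terms, that points towards decision audit trails. The sketch below assumes nothing about any particular vendor’s tooling; it simply records the inputs, model version, and outcome of every automated decision so that it can later be explained. Field names and the placeholder decision rule are hypothetical.

```python
# Minimal sketch of an audit trail for automated decisions.
# Field names and the decision rule are hypothetical; the point is
# that every decision is logged with enough context to explain it.
import json
from datetime import datetime, timezone

AUDIT_LOG = "decisions.jsonl"

def decide_and_log(applicant: dict, model_version: str) -> bool:
    # Placeholder rule standing in for a real model's prediction.
    approved = applicant.get("credit_score", 0) >= 650

    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": applicant,
        "decision": "approved" if approved else "refused",
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
    return approved

decide_and_log({"applicant_id": "a-001", "credit_score": 702}, "v1.3")
decide_and_log({"applicant_id": "a-002", "credit_score": 640}, "v1.3")
```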

But we’re not there yet.

Read more: Top priest shares ‘The Ten Commandments of A.I.’ for ethical computing

Internet of Business says

AI – as Internet of Business has reported many times – can be a transformative technology, one that helps heal the sick, detect cancer, predict extreme weather, or help cities to be run more sustainably. But it is also open to abuse, misuse, or cynical manipulation. The risk is that some applications of AI may be designed – deliberately or accidentally – so that no one is responsible, culpable, or liable when something goes wrong, or when trusted platforms are abused.

While the law struggles to catch up, we live in a world of increasingly automated politics, backed by hedge fund billionaires, wealthy candidates, offshore ‘actors’, and even national governments.

Welcome to the world of AI ethics. And good luck.

For some potential solutions to this problem, please read Joanna Goodman’s excellent report for Internet of Business.

Read more: American public fears AI’s impact on employment, says Syzygy

Read more: Vendors, users ignoring IoT security in rush to market – report

Chris Middleton is former editor of Internet of Business, and now a key contributor to the title. He specialises in robotics, AI, the IoT, blockchain, and technology strategy. He is also former editor of Computing, Computer Business Review, and Professional Outsourcing, among others, and is a contributing editor to Diginomica, Computing, and Hack & Craft News. Over the years, he has also written for Computer Weekly, The Guardian, The Times, PC World, I-CIO, V3, The Inquirer, and Blockchain News, among many others. He is an acknowledged robotics expert who has appeared on BBC TV and radio, ITN, and Talk Radio, and is probably the only tech journalist in the UK to own a number of humanoid robots, which he hires out to events, exhibitions, universities, and schools. Chris has also chaired conferences on robotics, AI, IoT investment, digital marketing, blockchain, and space technologies, and has spoken at numerous other events.