Academics from Oxford, Cambridge, and Yale universities have united with OpenAI, the Electronic Frontier Foundation, and security experts to warn of the dangers of AI being used to attack individuals, organisations, and countries. Chris Middleton reports.
If world events have taught us anything in recent years, it’s that it is getting harder to tell fact from fiction. As technology forges ahead, flat-earthers and their like abound, dismissing verifiable science as fakery or conspiracy.
But something else should be equally clear: malicious actors, such as hostile nations, agencies, or groups, are no longer merely interested in hacking our IT systems, but are also determined to use IT to hack our value systems, forcing us to question everything we think we know as a society.
In theory, artificial intelligence (AI) is designed to help us uncover hidden truths or patterns of behaviour. It’s 2018’s must-have technology, with industry giants such as IBM, Microsoft, Google, Oracle, Apple, Salesforce.com, and JDA competing not only to add AI to their portfolios, but to refocus their businesses around the technology.
Hostile forces
Applications are legion and most are beneficial. Yet while AI is booming alongside robotics and other connected technologies, far less attention is being paid to how these innovations might be deployed maliciously, warns a new report.
Drones and autonomous vehicles attacking people, AIs programming other AIs, and the creation of fake images, videos, and audio recordings are all among the predictions made in the report, which warns that technology’s ability to work much faster than human beings could make attacks hard to predict and to fend off.
The Malicious Use of Artificial Intelligence is no lightweight survey designed to shift product; this is a high-level, 101-page international study by, among others, the universities of Oxford and Cambridge, the Center for a New American Security, the Electronic Frontier Foundation, OpenAI, and Yale University.
The report proposes better ways to forecast, prevent, and mitigate the potential threats, and focuses on what types of attack we’re likely to see if adequate defences are not developed soon.
From a security angle, a number of technology innovations are of particular interest, it says. For instance, machine learning’s ability to recognise a target’s face and navigate through space could be applied to autonomous weapons.
Similarly, the ability to generate synthetic images, text, and audio could be used to impersonate people online, or to sway public opinion by distributing AI-generated content through social media channels. For example, the video below was created last year by a different group of researchers to demonstrate just how easily content can already be faked.
In some circumstances, these developments could even threaten the concept of prima facie legal evidence: the mere possibility that camera footage has been faked could be enough to create reasonable doubt in criminal cases.
Trumped-up protests?
Some of the report’s predictions may already have come true.
Whatever the outcome of investigations into Russia’s interference in the 2016 US Presidential Election, it has already been shown that troll farms and automated bot accounts have been used to sow political dissent on Twitter, Facebook, Instagram, and other social platforms.
In many cases these attacks have been designed to ramp up hostility towards minority or political groups, or to push for causes as varied as Brexit, Texan secession, white supremacy, and anti-NFL protests. Taken together, these incidents can be interpreted as being part of a concerted campaign to destabilise tolerant, diverse, and socially liberal viewpoints.
Of course, the technologies could equally be deployed in reverse to destabilise hostile regimes, but that surely proves the point: they may be used in anger by any individual, organisation, or state, and so the risk is real, political, and in need of urgent consideration by policymakers.
“These technical developments can also be viewed as early indicators of the potential of AI,” says the report of the general and anticipated trend in malicious AI deployment. “It will not be surprising if AI systems soon become competent at an even wider variety of security-relevant tasks.”
Miles Brundage, research fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organisations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance, or privacy-eliminating surveillance, profiling and repression, the full range of impacts on security is vast.”
In 2016, the Institute’s Dr Anders Sandberg hit the headlines when he said, “If you can describe your job, then it can – and will – be automated”, adding that up to 47 percent of all jobs will be carried out by machines in the years ahead.
Brundage continues, “It is often the case that AI systems don’t merely reach human levels of performance, but significantly surpass it. It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labour.”
A call to action
His Cambridge counterpart, Dr Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk, added: “Our report is a call to action for governments, institutions, and individuals across the globe.
“For many decades, hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore, and suggests broad approaches that might help. For example, how to design software and hardware to make it less hackable, and what type of laws and international regulations might work in tandem with this.”
As AI capabilities become more powerful and widespread, the authors expect the growing use of AI systems to lead to the following changes in the threat landscape:
• Expansion of existing threats. The costs of attacks may be lowered by the scalable use of AI systems to complete tasks that would ordinarily require human labour, intelligence, and expertise. “A natural effect would be to expand the set of actors who can carry out particular attacks, the rate at which they can carry out these attacks, and the set of potential targets,” says the report.
• Introduction of new threats. New attacks may arise through the use of AI systems to complete tasks that would otherwise be impractical for humans. “In addition, malicious actors may exploit the vulnerabilities of AI systems deployed by defenders,” it adds.
• Change to the character of threats. “We believe there is reason to expect attacks enabled by the growing use of AI to be especially effective, finely targeted, difficult to attribute, and likely to exploit vulnerabilities in AI systems,” say the authors.
Recommendations
So what can be done about what appears to be a big bang of cyber risk? The report makes four high-level recommendations:
1. Policymakers should collaborate with technical researchers to investigate, prevent, and mitigate any potentially malicious uses of AI.
2. AI researchers and engineers should take the dual-use nature of their work seriously, allowing the risk of misuse to influence research priorities. More, they should proactively reach out to “relevant actors” when harmful applications are foreseen.
3. Best practices should be identified in other research areas, such as computer security, and these should be imported into AI research.
4. [Governments and industry should] seek to expand the range of stakeholders and domain experts involved in discussing all of these challenges.
In addition to these high-level recommendations, the report also proposes exploring a number of “open questions and potential interventions” within four key research areas:
• Learning from, and with, the cybersecurity community. At the “intersection of cybersecurity and AI attacks”, the authors highlight the need to explore and implement red teaming, formal verification, responsible disclosure of vulnerabilities, new security tools, and more secure hardware.
• Exploring different openness models. As the dual-use nature of AI and machine learning becomes apparent, the report highlights the need to rethink traditional concepts about openness in research. This should start with pre-publication risk assessments, and follow up with licensing, sharing regimes that favour safety and security, and other lessons from dual-use technologies.
• Promoting a culture of responsibility. AI researchers and the organisations that employ them are in a unique position to shape the security landscape of the AI-enabled world. The report highlights the importance of education, ethical statements and standards, and discussing society’s expectations of research workers.
• Developing technological and policy solutions. In addition to all of the above, the report looks at a range of promising technologies, as well as policy interventions, that could help build a safer future with AI. Full details of these can be found in the document.
High-level areas for further research include: privacy protection; the coordinated use of AI for public-good security; monitoring of AI-relevant resources; and other legislative and regulatory responses.
These proposed interventions require attention and action – not just from AI researchers and companies, but also from legislators, civil servants, regulators, security researchers, and educators, says the report, before adding, “The challenge is daunting and the stakes are high”.
Internet of Business says
We welcome this extraordinary document and, alarming though it may be, endorse its findings.
However, some of the conclusions are depressing, not least the suggestion that the international, collaborative, open nature of academic research may be threatened by these recommendations, which would be ironic, given the report’s deep academic origins. Forcing some aspects of AI research into more secure silos is a controversial idea, but one that comes with OpenAI’s implicit backing.
Last year, the RSA’s Age of Automation report anticipated at least one of the findings of this document, saying that AI developers should pledge themselves to ethical development by signing the equivalent of a Hippocratic Oath.
But it must be said that some of the dangers of AI do not come from without, but from within.
Some organisations’ rush to apply AI as a tactical tool, rather than as a strategic business support technology, is troubling. Black-box solutions are inscrutable, and in that sense unaccountable, while the potential for AI to automate fraud, or to entrench (sometimes unconscious) bias and discrimination, is very real. A separate report (by IoB’s Chris Middleton) explores these issues in detail.
Users should not only consider deployments carefully in support of strategic goals, but also check their assumptions at the door.
Download the full report here.