Over 160 AI companies and organisations from 36 nations have signed a pledge to “neither participate in, nor support, the development, manufacture, trade, or use of lethal autonomous weapons”.
The pledge, which has also been signed by 2,400 interested individuals from 90 countries, begins with the statement: “Artificial intelligence (AI) is poised to play an increasing role in military systems. There is an urgent opportunity and necessity for citizens, policymakers, and leaders to distinguish between acceptable and unacceptable uses of AI.”
Signatories include: Google DeepMind; University College London; the XPRIZE Foundation; Clearpath Robotics/OTTO Motors; the European Association for AI (EurAI); the Swedish AI Society (SAIS); Demis Hassabis; British MP Alex Sobel; Tesla and SpaceX CEO Elon Musk; Stuart Russell; Yoshua Bengio; Anca Dragan; and Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales.
Max Tegmark, president of the Future of Life Institute (FLI) which organised the campaign, announced the pledge today in Stockholm, Sweden, during the annual International Joint Conference on Artificial Intelligence (IJCAI), which has attracted over 5,000 of the world’s leading AI researchers.
Tegmark said, “I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect.”
Lethal systems
Lethal autonomous weapons are those that can identify, target, and kill a person without a human being ‘in the loop’ of the decision to use lethal force. The technology is distinct from many of today’s drones, which are under human remote control, and from autonomous defence systems designed to take down missiles and other unmanned weapons.
“AI has huge potential to help the world – if we stigmatise and prevent its abuse,” continued Tegmark. “AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way.”
Another of the organisers, the University of New South Wales’ Walsh, explained the ethical drivers behind the pledge: “We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organisations to pledge to ensure that war does not become more terrible in this way.”
Ryan Gariepy, founder and CTO of both Clearpath Robotics and OTTO Motors, said: “Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world. No nation will be safe, no matter how powerful.
“Clearpath’s concerns are shared by a wide variety of other key autonomous systems companies and developers, and we hope that governments around the world decide to invest their time and effort into autonomous systems which make their populations healthier, safer, and more productive, instead of systems whose sole use is the deployment of lethal force.”
In addition to the ethical questions associated with the development of lethal autonomous weapons, advocates of an international ban are concerned that such weapons will be difficult to control, easier to hack, more likely to end up on the black market, and therefore easier for ‘bad actors’ to obtain.
This could be destabilising for all countries, as illustrated in the FLI-released video, Slaughterbots.
The UN dimension
In December 2016, the UN’s Review Conference of the Convention on Certain Conventional Weapons (CCW) began formal discussions on the issue. At the most recent meeting in April, 26 countries, including China, announced support for some type of ban.
The next UN meeting on lethal autonomous weapons will be held in August, and signatories hope that the pledge will encourage lawmakers to work towards an international agreement.
As the pledge states: “We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons.
“We ask that technology companies and organisations, as well as leaders, policymakers, and other individuals, join us in this pledge.”
Internet of Business says
The momentum behind this welcome initiative has been building for some time, with a number of organisations calling for the same basic outcome.
Richard Moyes is managing director of Article 36, a not-for-profit organisation working to prevent the unintended, unnecessary, or unacceptable harm caused by all kinds of weapons systems.
The core issue for his organisation is the use of sensors in weapons, and the lethal autonomous systems that may eventually arise from such applications.
Speaking earlier this year at the Westminster eForum event on AI policy, Moyes explained that one of the challenges in this field is that each of the steps towards a morally dubious outcome may seem reasonable in isolation.
However, the big-picture issue is the dilution of human control, and therefore of human moral agency, he said: “The more we see these discussions taking place, the more we see a stretching of the legal framework, as the existing legal framework gets reinterpreted in ways that enable greater use of machine decision-making, where previously human decision-making would have been assumed.”
In other words, the core ethical dilemmas raised by military applications of AI are not so different from those raised by the technology’s use in other walks of life. The more autonomous these systems become, the less human beings are involved, and the more difficult any application or interpretation of existing law becomes.
Moyes’ organisation believes that the solution is to create an obligation for meaningful human control – in other words, machine autonomy should not be permitted in any decision to take human life.
The FLI pledge comes as a number of companies are reassessing their ethical positions on a range of technologies, as controversy grows about the relationships between technology companies and unpopular government programmes.
For example, in recent weeks Google announced that it will not renew its contract for the Pentagon’s Project Maven programme, which uses AI to detect ‘objects of interest’ (potential targets) in drone footage.
Google made the move after pressure from its own employees, as well as from a number of external organisations. In June, the company published its own principles for ethical AI development, including a commitment that its technologies would not be used in weapons systems.
Meanwhile, Microsoft and Salesforce.com have been criticised for their relationships with US immigration authorities in the wake of the scandal at the US-Mexico border, which saw children separated from their parents. Both companies made public statements condemning the policy, and said that their technologies were not deployed in support of it.
Amazon, Microsoft, and others have also found themselves in the frame over the use of real-time facial recognition systems by law enforcement agencies, prompting Microsoft to take the unusual step of asking the US government to regulate such systems.