Google announces new ethical AI strategy

Google has unveiled a set of principles for ethical AI development and deployment, and announced that it will not allow its AI software to be used in weapons or for “unreasonable surveillance”.

In a detailed blog post, CEO Sundar Pichai said that Google would not develop technologies that cause, or are likely to cause, harm. “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” he explained.

Google will not allow its technologies to be used in weapons or in “other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people”, he said.

Also on the no-go list are “technologies that gather or use information for surveillance, violating internationally accepted norms”, and those “whose purpose contravenes widely accepted principles of international law and human rights”.

How we got here

The move follows widespread internal and external criticism of Google’s involvement in Project Maven, the Pentagon’s aerial battlefield intelligence programme, which some saw as a step towards the weaponisation of AI. Several staff resigned from the company over the deal.

Earlier this week, Google confirmed that it will withdraw from the programme when the contract comes up for renewal in 2019.

However, Pichai said that the company is free to pursue other government contracts, including those in cybersecurity. “While we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”

“These collaborations are important, and we’ll actively look for more ways to augment the critical work of these organisations and keep service members and civilians safe,” he said.

Alongside Amazon and Microsoft, Google is thought to be in the running for Pentagon cloud services contracts worth up to $10 billion.

A new recovery programme

Pichai has announced a seven-step programme for future AI development at the company, which could be seen as much a reputational recovery exercise as a restatement of its “Don’t be evil” mantra, not just in the wake of the Project Maven debacle, but also of other recent ventures, such as Duplex, the programme that is developing its AI assistant to emulate the subtleties of human speech.

The CEO said that, in future, Google will only pursue innovations that:

Are socially beneficial
“We will strive to make high-quality and accurate information readily available using AI, while continuing to respect cultural, social, and legal norms in the countries where we operate,” he said. “And we will continue to thoughtfully evaluate when to make our technologies available on a non-commercial basis.”

Avoid creating or reinforcing unfair bias
“We recognise that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies,” he explained. “We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.”

In the past, Google image searches have sometimes reinforced cultural biases and stereotypes, which themselves reflected longstanding biases in media reports – on issues such as the gender of successful business people, for example, or perceived levels of criminality among ethnic and other minority groups. Google has adjusted its algorithms over the years to counterbalance those biases.

However, Internet of Business recently reported on an MIT research programme which revealed the extent to which machine learning systems are reliant on training data, meaning that identical AI systems will produce very different – and often biased – results, depending on the source data with which they have been trained.

In this sense, any technology that relies on large data sets can fall victim to confirmation bias and accidental (or deliberate) misapplication.
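To make the point concrete, here is a minimal sketch of that training-data dependence. It uses synthetic data and a hand-rolled logistic regression; the scenario, feature names, and numbers are purely illustrative and not drawn from the MIT study. Two identical models are trained, one on historically skewed records and one on balanced records, and only the training data differs.

```python
# Minimal sketch: identical models, identical code, different training data.
# All data here is synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n, positive_rate_group_a):
    """Synthetic hiring-style records: a 'skill' feature and a 'group' flag.
    The true outcome depends on skill only, but the recorded labels for
    group A are skewed to simulate historical bias in the source data."""
    group = rng.integers(0, 2, size=n)            # 0 = group B, 1 = group A
    skill = rng.normal(0.0, 1.0, size=n)
    base = 1 / (1 + np.exp(-skill))               # outcome driven by skill
    skew = np.where(group == 1, positive_rate_group_a, 0.5)
    label = (rng.random(n) < base * 2 * skew).astype(float)
    return np.column_stack([skill, group]), label

def train_logreg(X, y, lr=0.1, steps=2000):
    """Plain-numpy logistic regression via gradient descent."""
    Xb = np.column_stack([np.ones(len(X)), X])    # add bias column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

def predict(w, skill, group):
    """Probability the model assigns to a single candidate."""
    x = np.array([1.0, skill, group])
    return 1 / (1 + np.exp(-x @ w))

# Same architecture, same training code -- only the data differs.
X_skewed, y_skewed = make_dataset(5000, positive_rate_group_a=0.25)
X_fair, y_fair = make_dataset(5000, positive_rate_group_a=0.50)

w_skewed = train_logreg(X_skewed, y_skewed)
w_fair = train_logreg(X_fair, y_fair)

# The same average-skill candidate is scored differently by group
# membership when the model was trained on the skewed records.
for name, w in [("skewed data  ", w_skewed), ("balanced data", w_fair)]:
    print(name,
          "group A:", round(predict(w, 0.0, 1), 3),
          "group B:", round(predict(w, 0.0, 0), 3))
```

The model trained on the skewed records learns a penalty for group membership that has nothing to do with the underlying skill signal, which is precisely the kind of inherited bias the MIT work highlighted.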

Are built and tested for safety
“We will design our AI systems to be appropriately cautious, and seek to develop them in accordance with best practices in AI safety research,” continued Pichai. “In appropriate cases, we will test AI technologies in constrained environments and monitor their operation after deployment.”

Are accountable to people
“We will design AI systems that provide appropriate opportunities for feedback, relevant explanations, and appeal. Our AI technologies will be subject to appropriate human direction and control,” he said.

Here Pichai is addressing the question of transparency and liability in AI systems. As more and more organisations rush to employ AI, the question of how and why decisions have been arrived at becomes critically important; those deploying such systems may have to ‘show their workings’ should any AI-informed or automated decision be shown to have had an adverse impact on people’s lives.
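As a rough illustration of what ‘showing the workings’ might look like, the sketch below decomposes a linear scoring model’s decision into per-feature contributions that could be reported alongside the outcome. The feature names, weights, and threshold are hypothetical, not any real lender’s or Google’s model.

```python
# Hypothetical linear credit-style model; weights and features are illustrative.
weights = {"income": 0.8, "missed_payments": -1.5, "years_at_address": 0.3}
bias = -0.2
applicant = {"income": 1.2, "missed_payments": 2.0, "years_at_address": 0.5}

# Each feature's contribution to the score, so the decision can be explained.
contributions = {k: weights[k] * applicant[k] for k in weights}
score = bias + sum(contributions.values())
decision = "approve" if score > 0 else "decline"

print(f"decision: {decision} (score {score:.2f})")
for feature, value in sorted(contributions.items(), key=lambda kv: kv[1]):
    print(f"  {feature}: {value:+.2f}")
```

For more complex models the decomposition is harder, but the principle is the same: a decision is accompanied by the factors that drove it, giving people something concrete to appeal against.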

Incorporate privacy design principles
“We will incorporate our privacy principles in the development and use of our AI technologies. We will give opportunity for notice and consent, encourage architectures with privacy safeguards, and provide appropriate transparency and control over the use of data,” said Pichai.

These comments come in the wake of the introduction of GDPR in Europe, a privacy regulation that has persuaded some US technology providers – Microsoft, Apple, SugarCRM, Box, and Salesforce.com among them – that similar safeguards are needed in the US and elsewhere.

Uphold high standards of scientific excellence
“AI tools have the potential to unlock new realms of scientific research and knowledge in critical domains like biology, chemistry, medicine, and environmental sciences,” continued Pichai. “We will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications.”

Are made available for uses that accord with these principles
“Many technologies have multiple uses. We will work to limit potentially harmful or abusive applications,” he said.

Internet of Business says

The announcement capped a busy week for Google and other divisions of its parent, Alphabet. For example, driverless transport division Waymo announced yesterday that it plans to bring its autonomous taxi service to Europe after its US launch later this year.

Speaking at the Automotive News Europe Congress in Turin, Waymo CEO John Krafcik said, “There is an opportunity for us at Waymo to experiment here in Europe, with different products and maybe even with different go-to-market strategies. It’s possible we will take a very different approach here than we would in the US.”

Meanwhile in the US, Democratic Senator Mark Warner said in a statement that he has written to Alphabet, and to social platform Twitter, requesting more information on data sharing agreements with Chinese vendors.

Warner, vice chair of the US Senate Intelligence Committee, said that since 2012 “the relationship between the Chinese Communist Party and equipment makers like Huawei and ZTE has been an area of national security concern.”

Warner said that he has asked Alphabet CEO Larry Page if the company has “third party partnerships” with ZTE, Lenovo, or TCL, and whether it conducts audits to ensure the proper treatment of consumer data.

Meanwhile, Twitter CEO Jack Dorsey was asked about relationships with Huawei, as well as with the same companies that Alphabet was asked about.

Alphabet has previously disclosed partnerships with mobile device makers including Huawei and Xiaomi, and with Chinese technology and investment giant, Tencent.

Chris Middleton is former editor of Internet of Business, and now a key contributor to the title. He specialises in robotics, AI, the IoT, blockchain, and technology strategy. He is also former editor of Computing, Computer Business Review, and Professional Outsourcing, among others, and is a contributing editor to Diginomica, Computing, and Hack & Craft News. Over the years, he has also written for Computer Weekly, The Guardian, The Times, PC World, I-CIO, V3, The Inquirer, and Blockchain News, among many others. He is an acknowledged robotics expert who has appeared on BBC TV and radio, ITN, and Talk Radio, and is probably the only tech journalist in the UK to own a number of humanoid robots, which he hires out to events, exhibitions, universities, and schools. Chris has also chaired conferences on robotics, AI, IoT investment, digital marketing, blockchain, and space technologies, and has spoken at numerous other events.