Digital Catapult, the UK’s leading advanced digital technology innovation centre, has appointed the country’s first applied artificial intelligence (AI) ethics committee, in order to help guide the responsible development of AI applications in the UK.
The Machine Intelligence Garage Ethics Committee has been established to help define and apply ethical AI standards in practice, and will be working closely with Digital Catapult’s Machine Intelligence Garage incubator to help UK AI start-ups adhere to these principles.
The new committee is chaired by Luciano Floridi, Professor of Philosophy and Ethics of Information at the University of Oxford, and consists of a further 11 notable academics and AI professionals.
From principles to practicalities
The team is divided into the Steering Group, who will oversee the development of principles and tools to facilitate responsible AI in practice, and the Working Group, who will work closely with startups developing their propositions through Digital Catapult’s Machine Intelligence Garage programme.
While the programme itself provides access to expertise and computational power, the Working Group’s collaboration with Machine Intelligence Garage startups will ensure that the Committee’s work is tested and grounded in practice.
Commenting on the announcement, Dr Jeremy Silver, CEO of Digital Catapult said:
“The role of the Machine Intelligence Garage Ethics Committee extends beyond regulation. This group of leading thinkers will be working hands-on with cohorts of AI companies to help ensure that the products and services they deliver have an ethical approach in their design and execution.
“A number of other organisations are also approaching these issues, notably the Ada Lovelace Institute and the Centre for Data Ethics and Innovation, with whom we look forward to collaborating.”
Dr Silver attributes this appetite for collaboration to Digital Catapult’s proximity to the ground: working with real companies to develop real machine learning and AI applications, and addressing ethical issues as they go.
Committee Chairman Luciano Floridi added, “The development of AI is accelerating – and every day we’re witnessing new proof of its huge potential. However, its development and applications have significant ethical implications, and we would be naive not to deal with them.
“I’m honoured to be leading such a noteworthy group to deliver a set of principles and tools to guide the ethical development and use of AI moving forward.”
The Machine Intelligence Garage Ethics Committee will now work on refining its guiding principles for responsible AI development, enabling companies to evaluate their work for risks, benefits, compliance with data and privacy legislation, social impact, and inclusiveness, among other criteria. The first working principles are set to be delivered in September.
Internet of Business says
The appointment comes at a time when AI’s rapid evolution is raising ethical questions around suitable applications, data bias, security, and privacy.
The responsible use of algorithms and data is paramount for the sustainable development of machine intelligence applications, as concluded by the recent House of Lords Artificial Intelligence Committee report.
However, at present there is a gap between theory and practice – between the ‘what’ of responsible AI and the ‘how’. Organisations of all sizes are seeking help with defining and applying ethical standards in practice.
Through its close collaboration with AI developers, Digital Catapult is well placed to tackle this practical hurdle, particularly given the Machine Intelligence Garage Ethics Committee’s intellectual and professional pedigree.
Meanwhile, Google has developed an ethical AI strategy of its own.