AI regulation & ethics: How to build more human-focused AI

As debates rage across the world about the growing impact of AI, data analytics, and autonomous systems, Joanna Goodman was invited to sit in on an all-party Parliamentary panel of experts. So what are the answers? 

“However autonomous our technology becomes, its impact on the world – for better or worse – will always be our responsibility.” Those are the words of Professor Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, and chief scientist for AI research at Google Cloud.

Professor Li’s vision of “human-centred AI” was reflected in the third evidence session of the all-party parliamentary group on AI (APPG) at the House of Lords this month. It considered ethics and accountability in the context of managing and regulating AI, as the technology moves into more and more aspects of our lives. The UK government also established an Office for AI earlier this year.

Since then, we have seen the Cambridge Analytica Facebook ‘breach’ unfold, and a driverless Uber car has killed a pedestrian in Arizona, where autonomous vehicles are being tested on public roads. These and other stories – such as the problem of bias entering some AI systems – have led to more calls for vigilance and tighter regulation.

But what does that actually mean?

Read more: Cambridge Analytica vs Facebook: Why AI laws are inadequate

Read more: Uber halts self-driving car tests after pedestrian is killed

The APPG considered three questions about AI and human responsibility:

• How do we make ethics part of business decision-making processes?
• How do we assign responsibility for algorithms?
• Which auditing bodies can monitor the ecosystem?

Tracey Groves, founder and director of Intelligent Ethics – an organisation dedicated to optimising ethical performance in business – discussed the importance of education, empowerment, and excellence in relation to AI, and suggested the following approaches to achieving all three.

Education, empowerment, excellence

Education is about leadership development, mentoring, and coaching, she said, and about awareness training to promote the importance of ethical decision-making.

Empowerment involves building a trustworthy culture, by aligning an organisation’s values with its strategic goals and objectives, and establishing “intelligent accountability”.

Finally, achieving excellence means identifying the key performance indicators of ethical conduct and culture, she said, and then monitoring progress and actively measuring performance.

Groves highlighted inclusivity as a critical success factor in ethical decision-making, along with giving people the ability to seek legal redress when AI gets things wrong.

Finally, she emphasised that managing risks associated with AI software is not just the responsibility of government and regulation; all businesses need to establish ethical values that can be measured, she said. Regulation will require businesses to be accountable, she added, and – potentially – will penalise them if they are not.

Building responsibility

Aldous Birchall, head of financial services AI at PwC, focused on the topic of machine learning. He advocated building responsibility into AI software, and developing common standards and sensible regulations.

Machine learning moves software to the heart of the business, he explained. AI presents exciting new opportunities, which tech companies pursue with the best intentions, but insufficient thought is given to the societal impact.

“Engineers focus on outcomes and businesses focus on decisions,” he said, adding that machine learning and AI training should include ethics and a clear understanding of how algorithms impact society.

Some companies may appoint an ethics committee, he said, while others may introduce new designations or roles to manage risk and risk awareness. The scalability of software systems also means that problems can escalate quickly, he added.

Birchall believes that assigning human responsibility for algorithms, if AI goes wrong or is applied incorrectly or inappropriately, must be about establishing a chain of causality. Ownership brings responsibility, he said.

Birchall suggested that something like an MOT for autonomous vehicles could be a workable solution. AI use cases are narrow, as algorithms handle a well-defined set of tasks, he added.

Monitoring and regulation need to be industry specific, he concluded. For example, financial services AI and healthcare AI raise completely different issues and therefore require different safeguards.

Regulating AI

Birchall offered four suggestions for how AI might be regulated:

• Adapt engineering standards to AI
• Train AI engineers about risk
• Engage and train organisations to consider the risks, as well as the benefits
• Give existing regulatory bodies a remit over AI too.

Robbie Stamp, chief executive at strategic consultancy Bioss International, reminded the APPG that AI cannot be ethical in itself because it does not have “skin in the game”. Ethical AI governance is all about human accountability, he said.

“As we navigate emergence and uncertainty, governance should be based on understanding key boundaries in relation to the work we ask AI to do, rather than on hard and fast rules,” said Stamp. He flagged up the Bioss AI Protocol, an ethical governance framework that tracks the evolving relationship between human and machine judgement and decision-making.

Automation compromises data quality

Sofia Olhede, director of UCL’s Centre for Data Science, highlighted how automated data collection compromises data quality and validity, leading to biased algorithmic decision-making.

Most algorithms are developed to deliver average outcomes, she said. These may be sufficient in some contexts – such as for making purchasing recommendations – but they may be completely inadequate when the consequences are life-changing or business-critical.

“Algorithmic bias threatens AI credibility and fuels inequalities,” said Olhede, adding that because algorithms learn from the data they have been exposed to, they reflect any human and/or historical bias in that data. And when data is collected ubiquitously, rather than curated, the biases it contains may not reflect societal norms. It is therefore important to establish standards for data curation.

Otherwise, for example, a potential bias in favour of those who adopt technology – and therefore produce more data – may impact negatively on other groups, such as the elderly or anyone who makes minimal use of digital systems.
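Olhede’s point about average outcomes and under-represented groups can be made concrete with a small, purely hypothetical sketch (not from the session). Here a single “average” model is fitted to data dominated by heavy technology users, and it serves the smaller, lightly represented group far less well:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: two groups of users with different "true" preferences.
# Heavy technology adopters generate far more data than light users
# (e.g. elderly or largely offline groups), so they dominate the sample.
n_heavy, n_light = 9000, 1000                  # 90% / 10% of the collected data
true_heavy, true_light = 0.8, 0.2              # the outcome each group actually wants

data = np.concatenate([
    rng.normal(true_heavy, 0.1, n_heavy),
    rng.normal(true_light, 0.1, n_light),
])

# An "average outcome" model: one global estimate fitted to all the data.
global_estimate = data.mean()

# The under-represented group is served much less accurately.
print(f"global estimate       : {global_estimate:.2f}")                      # ~0.74
print(f"error for heavy users : {abs(global_estimate - true_heavy):.2f}")    # ~0.06
print(f"error for light users : {abs(global_estimate - true_light):.2f}")    # ~0.54
```

No malicious design is involved: the skew comes entirely from who produced the data in the first place.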

On the subject of ethics, Olhede expressed her hopes for standards-setting. “Many companies are establishing internal ethics boards, but rather than having these spring up like mushrooms, we need common principles about their purpose,” she said.

Achievements versus risks

Tom Morrison-Bell, government affairs manager at Microsoft, highlighted the achievements and potential of AI technology. For example, Microsoft’s Seeing AI app helps visually impaired people to manage human interactions by describing people and reading expressions.

However, he doesn’t underestimate the ethical risks: “Whatever the benefits and opportunities of AI, if the public don’t trust it, it’s not going to happen,” he said.

The debate moved on to whether algorithmic transparency would provide greater reassurance and encourage public trust. “Most companies are working to become more transparent. They don’t want AI black boxes,” said Birchall.

“If an algorithm leads to a decision being made about someone, they have a right to an explanation. But what do we mean by an explanation?” asked Olhede, adding that not all algorithms are easily explainable or understood.
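One way to see why “explanation” is a slippery term is to compare a simple linear model, whose decision for an individual decomposes exactly into per-feature contributions, with a black-box model that offers no such breakdown. The sketch below is illustrative only; the feature names and figures are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical scoring model with three (standardised) applicant features.
features = ["income", "years_at_address", "missed_payments"]
X = rng.normal(size=(500, 3))
true_w = np.array([1.5, 0.5, -2.0])
y = X @ true_w + rng.normal(scale=0.1, size=500)

# Fit by ordinary least squares.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# For a linear model, the "explanation" of one decision is simply weight * value
# for each feature; the contributions sum to the score.
applicant = np.array([0.2, 1.0, 1.5])
contributions = w * applicant
print(dict(zip(features, contributions.round(2))))
print("score:", round(float(contributions.sum()), 2))

# A deep neural network offers no comparable per-feature breakdown by default,
# which is why "a right to an explanation" is harder to honour for such models.
```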

Internet of Business says

This, then, is the critical problem. The underlying question is: how much transparency and control are required to establish trustworthy AI?

As Groves observed, it is possible to trust technology without knowing exactly how it works. As a result, most people need to understand the implications of AI and algorithms rather than the technology itself – whatever is in the black box. They need to be aware of the potential risks and understand what those mean for them.

This is particularly critical when even scientists and developers in the field don’t understand how some black-box neural networks have arrived at decisions – according to a UK-RAS presentation at UK Robotics Week last year.

Professor Gillian Hadfield, author of Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy, believes we may simply be asking the wrong questions.

“How do we build AI that’s safe and valuable and reflects societal norms, rather than exposing patterns of behaviour?” she asks. “Perhaps instead of discussing what AI should be allowed to do, we should involve social scientists in considering how to build AI that can understand and participate in our rules.”

• The debate took place in a private committee room in Parliament on 12 March 2018.
• On 29 March, 2018, the UK government announced the foundation of the Ada Lovelace Institute, which is intended to focus national debate on ethical computing and AI.

Joanna Goodman is a freelance journalist who writes about business and technology for national publications, including The Guardian newspaper and the Law Society Gazette, where she is IT columnist. Her book Robots in Law: How Artificial Intelligence is Transforming Legal Services was published in 2016.

More from Joanna Goodman on Internet of Business:

Read more: Women in tech: the £150bn advantage of increasing diversity

Read more: Women in AI & IoT: Why it’s vital to Re•Work the gender balance
