Special Report: UK AI policy – Why the government must modernise first

THE BIG READ In the first of two detailed reports on the UK government’s AI strategy, Chris Middleton looks at how Whitehall aims to govern AI, ensure its fair use, and put safeguards in place so that everyone benefits from the technology.

2018 has been the year of artificial intelligence for the British government, with a new Office for AI, a new Sector Deal between government, companies, and academic bodies, and a range of new institutions, such as the Centre for Data Ethics and Innovation.

In April, the House of Lords Artificial Intelligence Select Committee produced its own report on the nation’s ambitions to lead in ethical AI. The government has issued a 41-page response to that in recent days, via the Department for Business, Innovation and Skills (BIS).

That the response came from BIS – rather than, say, the Department for Digital, Culture, Media, and Sport (DCMS) – reveals one of the biggest challenges facing the UK at a crunch time for the nation. Government’s management of AI policy, and of other technologies such as robotics and autonomous systems, is diffuse, spread thinly across a confusing mix of departments and briefs.

The new Office for AI, for example, is jointly run between DCMS and the Department for Business, Energy, and Industrial Strategy (BEIS). And yet the official policy response came from neither of them, but instead from the wing of government whose focus is on nurturing skills. It’s bizarre.

The UK’s stated ambitions to be a world leader in AI are clear, and backed by investment that is modest in global terms. But as UK businesses and the country’s European and global partners cast around for clarity, focus, and guidance about the UK’s future position on the world stage, the serpentine structure of the administration itself acts against the national interest.

So what of the government’s response itself, regardless of which department it came from?

Controlling the narrative

After welcoming the Select Committee’s report and restating Whitehall’s ambitions for the AI sector, it’s interesting that the government’s first set of recommendations is about controlling the narrative.

“The media provides extensive and important coverage of artificial intelligence, which occasionally can be sensationalist,” notes the paper. “It is not for the government or other public organisations to intervene directly in how AI is reported, nor to attempt to promote an entirely positive view among the general public of its possible implications or impact.

“Instead, the government must understand the need to build public trust and confidence in how to use artificial intelligence, as well as explain the risks.

“The government understands that to successfully address the Grand Challenge on AI and Data outlined in the Industrial Strategy white paper, it is critical that trust is engendered through the actions government takes and the institutions it creates.

“Working towards a more constructive narrative around AI will harness and build on work already underway through the government’s Digital Charter. Through the Charter, we aim to ensure new technologies, such as AI, work for the benefit of everyone – all citizens and all businesses – in the UK.”

A policy of indirect intervention, perhaps.

But again, the criticism stands that if the government is serious about constructing a more coherent narrative on AI – to counter the tabloids’ unhealthy fixation on Terminators, mass unemployment, malignant AI, and terrorist drones – then simplifying how it manages the brief internally would be the best starting point.

The government needs a convincing, informed digital champion who understands both the business applications and the social impact of the technology. It doesn’t have one in its current mix of competent administrators who are poor communicators, and ministers who would turn up to the opening of an envelope.

For example, DCMS has long been an embarrassing collision of competing priorities: not for nothing was it satirised in BBC comedy W1A as the ‘Department for Digital, Culture, Media, and for some reason also Sport’.

The government should fold all of its technology responsibilities into a single, laser-focused department, and create an ambassadorial relationship between that and other bodies, such as BEIS and BIS – two acronyms that are themselves confusing.

For example, why is industrial strategy managed by a separate department from skills? And why is responsibility for AI shared across at least three different departments and dozens of similar organisations? None of the current management or governance structure makes much sense, and it needs urgent review and renewal.

Everyday engagement

Next, the paper moves on to what it calls “everyday engagement with AI”, which is where the government’s response becomes more focused and interesting.

“It is important that members of the public are aware of how and when artificial intelligence is being used to make decisions about them, and what implications this will have for them personally,” it says.

“Industry should take the lead in establishing voluntary mechanisms for informing the public when artificial intelligence is being used for significant or sensitive decisions in relation to consumers. […] The soon-to-be established AI Council, the proposed industry body for AI, should consider how best to develop and introduce these mechanisms.”

Another day, another AI body to add to an infinitely expanding list, it seems: a recipe for decisive action getting lost in a complex, slow-moving bureaucracy. No wonder the government struggles to manage new technology implementations.

But while acknowledging that, in the government’s estimation, GDPR and the Data Protection Act allow for automated processing and analysis, the paper notes that “individuals should not be subject to a decision based solely on automated processing, if that decision significantly and adversely impacts them, either legally or otherwise, unless required by law.

“If a decision based solely on automated processing is required by law, the Act specifies safeguards that controllers should apply to ensure the impact on the individual is minimised. This includes informing the data subject that a decision has been taken and provides them with 21 days to ask the controller to reconsider the decision, or retake the decision with human intervention.

“Informing the public of how and when AI is being used to make decisions about them, and what implications this will have for them personally, will be raised with the new Artificial Intelligence Council.”

By effectively introducing a citizens’ right of appeal – something that Internet of Business strongly supports – the government is responding to criticisms that automated systems risk making decisions that are as inscrutable as the workings of Whitehall itself.

However, the extent to which human agents would have any real power to intervene in or change a decision is unknown, given that – as retail banking systems have shown – they may have little room for manoeuvre if algorithms are merely enforcing policy or are designed to support strict spending controls.

Plus, 21 days isn’t long enough to appeal against a decision that could affect someone’s entire life or finances. It seems an arbitrary time period.
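To picture how that reconsideration safeguard might work in practice, here is a minimal sketch in Python of a decision record with the statutory window attached. It is an illustration only: the class, field, and function names are hypothetical, and just the 21-day period and the human-intervention step come from the paper.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# The 21-day window that, per the paper, the Data Protection Act specifies.
RECONSIDERATION_WINDOW = timedelta(days=21)

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str
    notified_on: date                   # safeguard one: the data subject is informed
    under_human_review: bool = False

    def request_reconsideration(self, today: date) -> bool:
        """Safeguard two: within 21 days of notification, the subject may ask
        the controller to reconsider, or to retake the decision with human
        intervention. Returns True if the request lands inside the window."""
        if today - self.notified_on > RECONSIDERATION_WINDOW:
            return False                # outside the statutory window
        self.under_human_review = True  # escalate to a human case handler
        return True

# Hypothetical example: a refusal notified on 1 June can be appealed until 22 June.
decision = AutomatedDecision("subject-42", "loan refused", date(2018, 6, 1))
assert decision.request_reconsideration(date(2018, 6, 20))     # day 19: in time
assert not decision.request_reconsideration(date(2018, 7, 1))  # day 30: too late
```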

Data trust and openness

Next, the government indicates that it plans to adopt the Hall-Pesenti Review recommendation that Data Trusts be established to facilitate the ethical sharing of data between organisations.

“However, under the current proposals, individuals who have their personal data contained within these Trusts would have no means by which they could make their views heard, or shape the decisions of these trusts,” says the paper, continuing the citizen-centric theme.

“We therefore recommend that as Data Trusts are developed under the guidance of the Centre for Data Ethics and Innovation, provision should be made for the representation of people whose data is stored, whether this be via processes of regular consultation, personal data representatives, or other means.”

Access to data is essential to the present surge in AI technology, notes the paper, adding that there are “many arguments to be made” for opening up data sources, especially in the public sector, in a fair and ethical way.

“Many SMEs in particular are struggling to gain access to large, high-quality datasets,” it says, “making it difficult for them to compete with the large, mostly US-owned, technology companies, who can purchase data more easily and are also large enough to generate their own.”

This is where the paper strays into controversial territory while making a widely accepted point: open data is a good thing, in terms of making communities smarter and more efficient. The catch is that one of the most useful data sets would inevitably come from the NHS.

“In many cases, public datasets, such as those held by the NHS, are more likely to contain data on more diverse populations than their private sector equivalents,” says the paper.

“We acknowledge that open data cannot be the last word in making data more widely available and usable, and can often be too blunt an instrument for facilitating the sharing of more sensitive or valuable data.

“Legal and technical mechanisms for strengthening personal control over data, and preserving privacy, will become increasingly important as AI becomes more widespread through society.”

Banking on data

Perhaps unsurprisingly for an administration that is so closely tied to the City and to free-market manoeuvres, the government suggests the Open Banking initiative as a model for other public data sets. Or perhaps that’s merely an acknowledgement that data is the de facto currency of our age.

“Mechanisms for enabling individual data portability, such as the Open Banking initiative, and data sharing concepts such as Data Trusts, will spur the creation of other innovative and context-appropriate tools, eventually forming a broad spectrum of options between total data openness and total data privacy,” says the government.

“We recommend that the Centre for Data Ethics and Innovation investigate the Open Banking model, and other data portability initiatives, as a matter of urgency, with a view to establishing similar standardised frameworks for the secure sharing of personal data beyond finance.”

Tackling bias

But what about the critical questions of transparency and bias in AI systems – issues that are as likely to afflict the banking sector as any other?

The government accepts that achieving full technical transparency is difficult, and perhaps even impossible, in certain kinds of AI systems – presumably referring to neural nets and so-called ‘black box’ solutions.

However, “there will be particular safety-critical scenarios where technical transparency is imperative, and regulators in those domains must have the power to mandate the use of more transparent forms of AI, even at the potential expense of power and accuracy,” says the paper. An interesting recommendation.

“We believe that the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in our society.”
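The trade-off the paper describes – transparency at the potential expense of power and accuracy – is easy to demonstrate. Below is a minimal sketch using scikit-learn; the dataset and model choices are ours, for illustration only, and nothing here is specified by the government.

```python
# A shallow decision tree can be printed and audited rule by rule; a random
# forest usually scores a little higher but offers no comparably readable
# account of its decisions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

transparent = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

print("tree accuracy:  ", transparent.score(X_test, y_test))
print("forest accuracy:", black_box.score(X_test, y_test))

# The transparent model's entire decision logic, printable for a regulator:
print(export_text(transparent, feature_names=list(data.feature_names)))
```

In a safety-critical domain of the kind the paper mentions, a regulator could mandate something like the first model precisely because its every branch can be inspected.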

The government acknowledges, too, that bias is a real danger in a data-fuelled and AI-enhanced world. “We are concerned that many of the data sets currently being used to train AI systems are poorly representative of the wider population, and AI systems which learn from this data may well make unfair decisions which reflect the wider prejudices of societies past and present,” it says.

“While many researchers, organisations, and companies developing AI are aware of these issues, and are starting to take measures to address them, more needs to be done to ensure that data is truly representative of diverse populations, and does not further perpetuate societal inequalities.”

However, one of the challenges facing the UK and other countries is that robotics and AI may themselves create social divisions and inequality, largely because many people will not have the skills to flourish in a world in which some tasks are augmented and others are replaced – as Internet of Business explored in its recent report on future workforces.

“Researchers and developers need a more developed understanding of these issues,” continues the paper. “In particular, they need to ensure that data is preprocessed to ensure it is balanced and representative wherever possible, that their teams are diverse and representative of wider society, and that the production of data engages all parts of society.

“Alongside questions of data bias, researchers and developers need to consider biases embedded in the algorithms themselves – human developers set the parameters for machine learning algorithms, and the choices they make will intrinsically reflect the developers’ beliefs, assumptions, and prejudices.”

Accordingly, the government recommends that “a specific challenge be established within the Industrial Strategy Challenge Fund to stimulate the creation of authoritative tools and systems for auditing and testing training datasets, to ensure they are representative of diverse populations, and to ensure that when used to train AI systems they are unlikely to lead to prejudicial decisions.”

This challenge should be established immediately, it says.
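As an illustration of what such an auditing tool might check, here is a minimal sketch that compares the make-up of a training set against reference population shares and flags under-represented groups. The threshold, group labels, and figures are all invented for the example.

```python
from collections import Counter

def audit_representation(samples, reference_shares, tolerance=0.5):
    """Flag groups whose share of the training data falls below `tolerance`
    times their share of the reference population. A sketch only: a real
    audit would cover many attributes and their intersections, not one
    categorical field."""
    counts = Counter(samples)
    total = len(samples)
    flagged = {}
    for group, expected in reference_shares.items():
        observed = counts.get(group, 0) / total
        if observed < tolerance * expected:
            flagged[group] = (observed, expected)
    return flagged

# Hypothetical example: a training set heavily skewed towards one age band.
training_ages = ["18-34"] * 700 + ["35-64"] * 250 + ["65+"] * 50
population_shares = {"18-34": 0.28, "35-64": 0.48, "65+": 0.24}

for group, (obs, exp) in audit_representation(training_ages, population_shares).items():
    print(f"{group}: {obs:.0%} of training data vs {exp:.0%} of population")
# -> 65+: 5% of training data vs 24% of population
```

Re-weighting or re-sampling the flagged groups before training is one common mitigation, though, as the paper notes, diverse teams and better data collection matter as much as preprocessing.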

Internet of Business says

A welcome and forward-looking paper, which goes on to discuss investment, skills, commercialising the technology, and a range of other related issues – all of which we will explore in a follow-up report in the near future.

However, the first step the UK government should take in clarifying its approach to AI, robotics, the IoT, and digital transformation is to recognise that its own internal complexity on these issues is unhelpful and unfit for purpose.

But that is not to fault its ambitions.

Editor’s note: On 9 July, secretary of state for digital, culture, media, and sport Matt Hancock became health secretary, and was replaced by Jeremy Wright, an MP who has no knowledge of, or track record in, technology and digital affairs. His Twitter account is inactive, and according to Parliamentary records, he has uttered the word ‘digital’ only twice in 15 years.

Chris Middleton is former editor of Internet of Business, and now a key contributor to the title. He specialises in robotics, AI, the IoT, blockchain, and technology strategy. He is also former editor of Computing, Computer Business Review, and Professional Outsourcing, among others, and is a contributing editor to Diginomica, Computing, and Hack & Craft News. Over the years, he has also written for Computer Weekly, The Guardian, The Times, PC World, I-CIO, V3, The Inquirer, and Blockchain News, among many others. He is an acknowledged robotics expert who has appeared on BBC TV and radio, ITN, and Talk Radio, and is probably the only tech journalist in the UK to own a number of humanoid robots, which he hires out to events, exhibitions, universities, and schools. Chris has also chaired conferences on robotics, AI, IoT investment, digital marketing, blockchain, and space technologies, and has spoken at numerous other events.