The UK government has announced a new code of conduct covering the use of artificial intelligence (AI) in the NHS and healthcare sector.
The 10-point Code of Conduct for Data-driven Health and Care Technology is designed, says the government, to ensure that NHS patients benefit from digital technology. However, the accompanying commentary makes clear that it also applies to the wider, diverse, and fragmented care sector.
Earlier this year, the Department for Digital, Culture, Media and Sport (DCMS) published a Data Ethics Framework, to guide the design of appropriate data use in government.
The new AI code of conduct is designed to complement that framework by setting out what government expects from the IT industry when engaging with the NHS – and how Whitehall wants the healthcare system to engage in return.
Lightning speed
Launching the code, health minister Lord O’Shaughnessy said, “Artificial intelligence and machine learning is a field that is moving at lightning speed and has tremendous potential across the healthcare sector.
“That is why I am pleased to announce that we have today launched our initial technology partnerships code of conduct – 10 principles which set out the rules of engagement between industry and the health and care system. These principles provide a basis to deepen the trust between patients, clinicians, researchers and innovators.
“This is an important first step towards creating a safe and trusted environment in which innovation can flourish to the benefit of all our health.”
The 10-point code
Designed to be read by AI and technology providers to the NHS, along with expert procurers, the code provides 10 guiding principles. These are:
1. Define the user
Understand and show who specifically your product is for, what problem you are trying to solve for them, what benefits they can expect and, if relevant, how AI can offer a more efficient or higher-quality solution.
Ensure you have done adequate background research into the nature of their needs, including any co-morbidities and socio-cultural influences that might affect uptake, adoption and ongoing use.
2. Define the value proposition
Show why the innovation or technology has been developed or is in development, with a clear business case highlighting outputs, outcomes, benefits, and performance indicators. Show exactly how the product will result in better provision and/or outcomes for people and the health and care system.
3. Be fair, transparent and accountable about what data you are using
Show you have utilised privacy-by-design principles with data-sharing agreements, data flow maps, and data protection impact assessments.
Ensure all aspects of GDPR have been considered (the legal basis for processing, information asset ownership, system level security policy, duty of transparency notice, unified register of assets completion, and data privacy impact assessments).
4. Use data that is proportionate to the identified user need (the data minimisation principle of GDPR)
Show that you have used the minimum personal data necessary to achieve the desired outcomes of the user need identified in point 1.
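By way of illustration only (the code of conduct itself contains no sample code), a supplier applying this principle might strip a dataset down to the fields a use case actually requires and pseudonymise the remaining identifier. The field names, values, and salting scheme below are hypothetical:

```python
import hashlib

import pandas as pd

# Hypothetical extract of a patient dataset; fields and values are illustrative.
raw = pd.DataFrame({
    "nhs_number": ["943 476 5919", "943 476 5920"],
    "name": ["A. Patient", "B. Patient"],
    "postcode": ["LS1 4AP", "M1 1AE"],
    "age": [67, 54],
    "hba1c_mmol_mol": [58, 49],
})

SALT = b"project-specific-secret"  # held in secure key storage in practice

def pseudonymise(nhs_number: str) -> str:
    # Replace the direct identifier with a salted, truncated hash.
    return hashlib.sha256(SALT + nhs_number.encode()).hexdigest()[:16]

# Keep only the fields the identified user need actually requires.
minimal = pd.DataFrame({
    "patient_ref": raw["nhs_number"].map(pseudonymise),
    "age": raw["age"],
    "hba1c_mmol_mol": raw["hba1c_mmol_mol"],
})
print(minimal)
```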
5. Make use of open standards
Utilise current data and interoperability standards, and build them into your product or innovation, to ensure you can communicate easily with existing national systems. Programmatically build data quality evaluation into AI development so that harm does not occur if poor data quality creeps in.
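The code does not say how such evaluation should be built in, but one hedged sketch of a programmatic quality gate, run before records reach a model, might look like the following; the validity rules and rejection threshold are assumptions for illustration, not clinical guidance:

```python
from dataclasses import dataclass

@dataclass
class QualityReport:
    total: int
    rejected: int

    @property
    def reject_rate(self) -> float:
        return self.rejected / self.total if self.total else 0.0

# Illustrative validity rules; field names and ranges are assumptions.
RULES = {
    "age": lambda v: isinstance(v, (int, float)) and 0 <= v <= 120,
    "hba1c_mmol_mol": lambda v: isinstance(v, (int, float)) and 20 <= v <= 200,
}

def quality_gate(records: list[dict], max_reject_rate: float = 0.05):
    """Pass only records satisfying all rules; fail loudly if too many are bad."""
    clean = [r for r in records
             if all(rule(r.get(field)) for field, rule in RULES.items())]
    report = QualityReport(total=len(records), rejected=len(records) - len(clean))
    if report.reject_rate > max_reject_rate:
        # Stop rather than silently training or predicting on degraded data.
        raise ValueError(f"Data quality gate failed: {report.reject_rate:.1%} rejected")
    return clean, report
```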
6. Be transparent about the limitations of the data used and algorithms deployed
Show you understand the quality of the data and have considered its limitations when assessing whether it is appropriate to use for the defined user need. When building an algorithm, be clear about its strengths and limitations, and make transparent whether it is your training or your deployment algorithms that you have published.
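In practice, this kind of disclosure is often captured in a "model card". A minimal, entirely hypothetical example of the metadata principle 6 asks for might look like this:

```python
# A minimal, illustrative "model card" capturing the disclosures principle 6
# asks for; every field and value here is hypothetical.
model_card = {
    "model_name": "sepsis-risk-screen",
    "version": "0.3.1",
    "published_artifact": "deployment",  # vs. the "training" pipeline
    "training_data": {
        "source": "two acute trusts, 2015-2017 admissions",
        "known_limitations": [
            "under-represents patients aged under 18",
            "missing ethnicity for ~12% of records",
        ],
    },
    "intended_use": "decision support for ward staff; not autonomous triage",
    "strengths": ["validated on a held-out site", "calibrated probabilities"],
    "limitations": ["not evaluated in paediatric settings"],
}
```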
7. Make security integral to the design
Keep systems safe by integrating appropriate levels of security and safeguarding data.
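As one illustrative sketch of security by design at the data layer, the widely used Python `cryptography` library can encrypt records at rest; a real deployment would draw keys from a key-management service rather than generating them inline as here:

```python
from cryptography.fernet import Fernet

# In a real system the key would come from a key-management service,
# never hard-coded or generated ad hoc like this.
key = Fernet.generate_key()
fernet = Fernet(key)

record = b'{"patient_ref": "3f2a...", "hba1c_mmol_mol": 58}'
token = fernet.encrypt(record)          # ciphertext safe to store at rest
assert fernet.decrypt(token) == record  # round-trips for authorised readers
```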
8. Define the commercial strategy
Purchasing strategies should show consideration of commercial and technology aspects and contractual limitations. Suppliers should only enter into commercial terms under which the benefits of partnerships between technology companies and health and care providers are shared fairly.
9. Show evidence of effectiveness for the intended use
You should provide evidence of how effective your product or innovation is for its intended use. If you are unable to show evidence, you should draw up a plan that addresses the minimum required level of evidence, given the functions performed by your technology.
10. Show what type of algorithm you are building
Show the evidence base for choosing that algorithm, how you plan to monitor its performance on an ongoing basis, and how you are validating that performance.
Show the learning methodology of the algorithm you are building if it falls into a higher-risk categorisation, as set out in principle 9, and aim to show in a clear and transparent way how you are validating its outcomes.
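Ongoing monitoring of this kind can be made concrete with a small amount of code. The sketch below, which assumes a binary classifier scored against recent labelled outcomes, tracks a rolling AUC and fails loudly when it drops below a pre-agreed floor; the window size and floor are illustrative assumptions:

```python
from collections import deque

from sklearn.metrics import roc_auc_score

class PerformanceMonitor:
    """Track a classifier's discrimination on recent labelled cases and
    flag degradation against a pre-agreed floor. Thresholds are illustrative."""

    def __init__(self, window: int = 500, auc_floor: float = 0.80):
        self.labels = deque(maxlen=window)
        self.scores = deque(maxlen=window)
        self.auc_floor = auc_floor

    def record(self, y_true: int, y_score: float) -> None:
        self.labels.append(y_true)
        self.scores.append(y_score)

    def check(self) -> float | None:
        # AUC is undefined until both outcome classes have been observed.
        if len(set(self.labels)) < 2:
            return None
        auc = roc_auc_score(list(self.labels), list(self.scores))
        if auc < self.auc_floor:
            raise RuntimeError(f"Model AUC {auc:.3f} below agreed floor")
        return auc
```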
An ongoing process
The code is a work in progress and open to comment from healthcare practitioners and technology suppliers. The government aims to publish an updated version in December, after seeking feedback from the market.
So what’s behind the move, and why is it so important?
Artificial intelligence is sweeping into the healthcare sector and promises to bring a range of benefits for all – among them faster and better diagnoses, more personalised care, preventative medicine, and reduced costs.
However, there are numerous risks and challenges associated with the sudden influx of new technologies, including a tension between AI systems' need for data and the NHS's need to protect patient confidentiality.
This gives rise to two challenges. First, from a technical standpoint, data should be defined and structured in accordance with agreed interoperability standards. And second, people must be able to trust that their data is being used appropriately, safely, and securely.
There are safeguards in place for this, including the National Data Guardian's 10 data security standards and the national data opt-out, which allows people to opt out of their data being used for purposes beyond their individual care.
Other challenges
Other problems with the rapid influx of AI into healthcare include the risk of bias entering the system; the potential for some AI suppliers to attain a monopoly position in the market, and over patient data; and the risk of over-reliance on a small number of large suppliers.
The latter problem has dogged the public sector for years, despite initiatives such as the G-Cloud that were designed to open up the market to innovative SMEs.
These challenges affect commissioners, healthcare providers, and suppliers to varying degrees. As a result, Whitehall is developing the code of conduct to clarify what central government is doing to help navigate and manage these issues, what it expects from technology suppliers, and how policymakers will manage liability and accountability.
Commercial arrangements for partnerships in developing, testing and implementing new technologies also need to be mutually beneficial, says the government. But this is complicated and requires the market to consider issues such as value, scope, regulatory compliance, transparency, liability, accountability, and ownership of intellectual property.
Meanwhile, health and care contracting, procurement and commissioning processes are not well-suited to adopting rapidly developing and evolving technology.
On the one hand, the government wants to open up the market to SMEs and innovative startups, but on the other this creates a fragmented market within another fragmented market (healthcare) that procurement professionals may find tough to navigate.
Support for innovators is itself unbalanced in the UK, with a heavy focus on research, development, and testing, but with little or no support for scaling up successful products and businesses – at least within the UK’s conservative, risk-averse investment culture.
Small-scale angel investment and large private equity funds are relatively easy to find in the UK, but there is a massive gap in the middle in funding terms – a gap that Silicon Valley investors are adept at filling with their more risk-tolerant, portfolio-based approach.
The G-Cloud programme was an exemplar of the UK's thwarted ambition to encourage and support innovative SMEs. Despite sincere attempts to flatten the purchasing market, what former Cabinet Office minister Francis Maude once called the 'oligopoly' of large IT suppliers, systems integrators, and consultancies maintained its hold on government procurement.
This perpetuated the longstanding problems with public sector IT programmes: most are too big, too complex, too bureaucratic, too cosy, too slow, and too expensive, often resulting in vast overspends or failed programmes – all in an environment of constantly changing political hues.
Meanwhile, the involvement of multiple regulators means that the regulatory route through a product's lifecycle is often unclear: remits overlap, and some products may need to be registered with more than one regulator.
“As a result, suppliers may not receive a coherent demand signal. This could make it difficult for UK-based suppliers to secure private investment,” admits the government.
These issues aside, the new 10-point code is an excellent first step in managing the market, especially with its commitments to security by design, transparency, and open standards.
But whether it will be enough to prevent the NHS and the wider healthcare sector from falling into the hands of a handful of large suppliers – perhaps with a core business in advertising sales – remains to be seen.
- Read more: Google announces new ethical AI strategy
- Read more: Special Report: UK AI policy – Why the government must modernise first
- Read more: UK launches “£1 billion” Sector Deal for AI | Analysis
- Read more: Europe announces €20 billion AI strategy – UK sidelined | Analysis
- Read more: UK can lead world in ethical AI say Lords | Analysis