The police should make wider use of AI to avoid being overwhelmed by the volume of data now involved in investigations, according to Sara Thornton, chair of the UK’s National Police Chiefs’ Council.
Quoted in The Guardian, Thornton pointed to several rape prosecutions that have recently collapsed after mistakes were made during earlier stages of the investigations. In those cases, police had failed to hand over evidence that undermined their own cases, despite having a duty to pursue all reasonable lines of enquiry, both for and against a conviction.
Although there have been suggestions of a “cultural problem” within the police with regard to disclosure, Thornton suggested that the key issue was the overwhelming volume of data that forces now have to plough through.
“What we are really challenged by is the volume of data which all of us hold in 2018, and therefore the potential for many, many more reasonable lines of enquiry than was ever the case [before],” she said.
“I’m not just talking about twice as many […] the numbers that we’re talking about are really significant,” she added.
Pounding the data beat
With the boom in smartphones, mobility, connected devices, and social platforms, both suspects and complainants should be asked at the outset of any investigation if there is evidence on their phones or online, according to Thornton.
But the police would need significant extra help to gather and analyse all the additional data involved, she said, and this is where AI could prove to be invaluable.
“I think the challenge for us is how we can use technology more, beyond search terms. So how can you use […] machine learning, artificial intelligence, whatever phrase you want to use, to get clever tech to help us to do this?”
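To make “beyond search terms” concrete: rather than flagging only messages that contain fixed keywords, a machine learning system can be trained on material that investigators have already labelled as relevant or irrelevant, and then rank the remaining items for review. The following is a minimal, hypothetical Python sketch of that idea using scikit-learn; all messages and labels are invented for illustration, and real disclosure tooling would need far more rigour and oversight.

```python
# Minimal illustrative sketch: ranking unreviewed messages by likely
# relevance, instead of matching fixed search terms.
# All data here is invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Messages an investigator has already reviewed and labelled
# (1 = relevant to the case, 0 = not relevant).
reviewed = [
    ("can we meet at the flat on friday", 1),
    ("transfer the money to the usual account", 1),
    ("happy birthday mum, see you sunday", 0),
    ("did you watch the match last night", 0),
]
texts, labels = zip(*reviewed)

# Turn text into TF-IDF features and fit a simple classifier.
vectorizer = TfidfVectorizer()
model = LogisticRegression()
model.fit(vectorizer.fit_transform(texts), labels)

# Score unreviewed messages by estimated probability of relevance,
# so the highest-scoring items are put in front of a human first.
unreviewed = [
    "meet me friday and bring the money",
    "see you at the match",
]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
for text, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {text}")
```

The point of such a system would be to prioritise, not replace, human review: the ranking determines what an officer looks at first, while the disclosure judgement stays with people.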
Thornton said that such technology was already being used in civil cases, and the Crown Prosecution Service has set up a working group to look at how it could be deployed in criminal trials.
Police forces in the UK have already been looking at the potential of AI in various applications.
Last year, a BBC report suggested that police in Durham, England, would be using predictive AI to determine whether suspects should be kept in custody. The Metropolitan Police, meanwhile, plans to use AI to scan suspects’ devices for images of child abuse, sparing officers the psychological trauma involved. And in China, police are using smart glasses to identify potential suspects.
Internet of Business says
As connected technology spreads, the pressure on law enforcement agencies can only grow in line with the volume of data that may now be involved in investigations. That’s as much a financial resource issue as it is a time and technology one. The use of AI in pattern recognition is well established, and its future role in helping all organisations sift through unstructured data seems both welcome and inevitable.
However, the use of AI in law enforcement remains controversial, primarily because of the high risk of automating discrimination and confirmation bias, a problem often rooted in flawed training data. That appears to be the case with the COMPAS algorithm, for example, which is used to advise on sentencing in the US. COMPAS has been reported to replicate longstanding institutional bias against black Americans, partly because that bias exists in the data gathered from years of legal precedent.
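The kind of disparity reported in COMPAS can be made concrete with a simple audit: compare false positive rates between groups, i.e. how often people who did not reoffend were nonetheless flagged as high risk. The sketch below is illustrative only, with entirely invented records, and is not how COMPAS itself is evaluated; it simply shows the sort of check that exposes a model replicating bias from its training data.

```python
# Minimal illustrative audit: false positive rate per group.
# All records are invented for illustration only.
records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  True),  ("A", True,  False),
    ("A", False, False), ("A", True,  False), ("A", False, True),
    ("B", False, False), ("B", True,  True),  ("B", False, False),
    ("B", False, False), ("B", True,  True),  ("B", False, True),
]

for group in ("A", "B"):
    # False positives: flagged as high risk but did not reoffend.
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    false_positives = [r for r in non_reoffenders if r[1]]
    rate = len(false_positives) / len(non_reoffenders)
    print(f"group {group}: false positive rate {rate:.0%}")
```

With these made-up numbers, group A is wrongly flagged three times out of four while group B never is, mirroring the shape (though not the figures) of the disparities reported in the COMPAS case.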
And it must be said that the risk of AI replicating human bias is not limited to legal applications, as this report (by IoB editor Chris Middleton) explains. Moreover, if AI can be used to detect crimes such as fraud, as it already is in the financial services sector, then, logically, it could also be used to commit them. Either way, the unintended consequences of errors in AI’s legal applications may be more serious, and harder to resolve, than elsewhere. In this regard, Thornton’s talk of “clever tech” and “whatever phrase you want to use” should be a cause for concern.