Ban UK police use of facial recognition, House of Lords told
UK police continue to deploy facial-recognition technology disproportionately with no clear legal basis and highly questionable efficacy, according to expert witnesses at a House of Lords inquiry.
In evidence given to the Lords Home Affairs and Justice Committee about the use of advanced algorithmic tools by law enforcement, experts called into question the proportionality and efficacy of how facial-recognition technology has been deployed by the Metropolitan Police Service (MPS) and South Wales Police (SWP).
Silkie Carlo, director of civil liberties campaign group Big Brother Watch, said the MPS had achieved only 11 positive matches in five years of using live facial-recognition (LFR) technology. Trial deployments began at the Notting Hill Carnival in 2016 and ended with two deployments in Romford in February 2019, before fully operational use began in January 2020.
“In that time, they have scanned hundreds of thousands of people on the streets of London, and created a lot of distrust among communities, particularly by deploying repeatedly at Notting Hill Carnival – I think there’s inevitably a racialised element to that – and by deploying repeatedly in the borough of Newham as well, which is the most diverse borough of London,” she said.
“Not only have they only had 11 true matches, they have generated an awful lot of false positive matches. Their current rate over the entirety of the deployments is 93% false positive matches, so I struggle to see a world in which that could be classed as anything near proportionate.”
On the MPS’ LFR trials, Karen Yeung – an Interdisciplinary Professorial Fellow in Law, Ethics and Informatics at Birmingham Law School – described the force’s scientific methodology as “very unrigorous”, noting that because procedures were tweaked every time a trial was conducted, “we do not have a stable and rigorous set of data on the basis of these experiments”.
She added: “In those 11 trials, 500,000 faces were scanned to produce nine to 10 arrests, and many of those were individuals who were wanted for very trivial offences. All of this means the real-time location tracking of many, many hundreds of thousands of British people going about their lawful business, not bothering anyone.
“This is a serious reversal of the presumption that one is entitled to go about their business in a lawful manner, without being disturbed by the state, and I completely support Silkie’s view that this should be subject to very, very stringent regulations, if not an outright ban.”
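Taken at face value, the figures quoted in the session imply both a very low hit rate and a large number of people wrongly flagged. The short sketch below works through that arithmetic; the alert totals are inferred from the quoted 11 true matches and 93% false-positive rate, not taken from any published MPS breakdown.

```python
# Illustrative arithmetic only, based on the figures quoted in the evidence
# session (11 true matches, a 93% false-positive rate, roughly 500,000 faces
# scanned). The alert totals below are inferred from those numbers and are
# not published Metropolitan Police statistics.

true_matches = 11
false_positive_rate = 0.93      # share of alerts that were wrong, as quoted
faces_scanned = 500_000         # approximate figure cited by Yeung

# If only 7% of alerts were genuine, the implied totals are:
total_alerts = true_matches / (1 - false_positive_rate)
false_alerts = total_alerts - true_matches
hit_rate = true_matches / faces_scanned

print(f"Implied total alerts:  {total_alerts:.0f}")      # ~157
print(f"Implied false alerts:  {false_alerts:.0f}")      # ~146
print(f"True matches per face scanned: {hit_rate:.5%}")  # ~0.0022%
```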
Yeung further noted that the MPS, unlike the forces that conducted LFR trials in Germany and France, tested the technology on real-life suspects.
“In other Western European countries, they have been using volunteers to test the accuracy of this data, and they have a full database of the people passing in front of the cameras – this has not been the case in London, they have been doing operational trials,” she said.
She added that while the MPS has purported to comply with data protection legislation, the documents published so far “are seriously deficient, in my view, in terms of the extent to which they declared operational purposes, and the question of impact evaluation and proportionality”.
Yeung said the available evidence does not sustain any conclusion that the MPS’ LFR experiments were successful.
A questionable legal basis
On the legal basis UK police use to justify their facial-recognition deployments, Carlo echoed the UK’s former biometrics commissioner’s call for an explicit legal framework, noting that there is currently no specific legislation governing the technology’s use and that police claim the “backdrop of common law, Human Rights Act and Data Protection Act” enables them to use it.
In response to the Science and Technology Committee’s July 2019 report, which called for a moratorium on police use of LFR until a proper legal framework was in place, the government claimed in March 2021 – after a delay of nearly two years – that there was “already a comprehensive legal framework for the management of biometrics, including facial recognition”.
The government said this framework included police common law powers to prevent and detect crime, the Data Protection Act 2018 (DPA), the Human Rights Act 1998, the Equality Act 2010, the Police and Criminal Evidence Act 1984 (PACE), the Protection of Freedoms Act 2012 (POFA), and police forces’ own published policies.
Carlo said that retrospective facial recognition (RFR), for which the MPS is expected to deploy a new system in the next three months, “is in a complete lacuna of regulation and safeguards yet again… you could use this with body-worn cameras, you could use it with CCTV – the possibilities are significant and really endless… it goes as far as the imagination stretches.”
“I think there should be a moratorium on the retrospective facial-recognition technology that police forces are acquiring now, which allows them not just to compare one isolated image against the custody image database, but effectively allows them to do any sort of facial-recognition matching with footage against potentially any type of database; that’s a much more expansive type of use of the technology.”
A solution without a problem
According to Yeung, a key issue with police deployments of new technologies – including facial-recognition and algorithmic crime “prediction” tools such as the MPS’ Gangs Matrix or Durham Constabulary’s Harm Assessment Risk Tool (Hart) – is that authorities have started using them “just because we can…without clear evidence” about their efficacy or impacts.
Yeung said the development of crime “prediction” tools has been just as unrigorous as that of facial-recognition technology, with historic arrest data used as a proxy for who is likely to commit a crime.
“Just because someone is arrested does not mean that they’re charged, let alone convicted, and there are all those crimes for which we have no arrests at all,” she said. “And yet, these tools are being used in Britain on the basis that they generate predictions about recidivism – we should at least be labelling them as re-arrest predictors.”
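Yeung’s labelling point can be illustrated with a minimal sketch. The field names below are hypothetical and not any force’s actual data schema; the point is simply that the only outcome such tools can typically observe at scale is re-arrest, so that is what any model trained on the data ends up predicting.

```python
from dataclasses import dataclass

@dataclass
class PersonHistory:
    # Hypothetical fields for illustration only.
    rearrested_within_2_years: bool    # observable in police-held data
    reconvicted_within_2_years: bool   # often unavailable or incomplete
    reoffended_within_2_years: bool    # never directly observable

def training_label(record: PersonHistory) -> bool:
    """Return the outcome a typical crime 'prediction' tool is actually trained on."""
    # Only the arrest outcome is reliably available in bulk, so a model fitted
    # on this label is a re-arrest predictor, however it is marketed.
    return record.rearrested_within_2_years
```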
Yeung further noted that police use of such technologies has the potential to massively entrench existing power disparities in society, as “the reality is we’ve tended to use the historic data that we have, and we have data in the masses, mostly about people from lower socio-economic backgrounds”.
“We’re not building criminal risk assessment tools to identify insider trading, or who’s going to commit the next kind of corporate fraud because we’re not looking for those kinds of crimes,” Yeung added.
“This is really pernicious – what is going on is that we are looking at high volume data, which is mostly about poor people, and we are turning them into prediction tools about poor people, and we are leaving whole swathes of society untouched by these tools.
“This is a serious systemic problem and we need to be asking those questions. Why are we not collecting data which is perfectly possible now about individual police behaviour? We might have tracked down rogue individuals who are prone to committing violence against women. We have the technology, we just don’t have the political will to apply them to scrutinise the exercise of public authority.”