
Met Police ramp up facial recognition despite ongoing concerns

The Metropolitan Police Service (MPS) is ramping up its deployments of live facial recognition (LFR), despite ongoing concerns about the proportionality and necessity of the technology, as well as its impact on vulnerable or marginalised communities.   

Over the course of 2022, the MPS has deployed the technology six times: once in Leicester Square, once in Piccadilly Circus and four times in Oxford Circus. These are the first deployments since February 2020, when the use of LFR was paused during the pandemic, with four of the deployments taking place in July 2022 alone.

While the biometric information of some 144,366 people has been scanned over the course of these deployments, only eight people were arrested, for offences including possession of Class A drugs with intent to supply, assaulting an emergency worker, failures to appear in court, and an unspecified traffic offence.
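A quick back-of-the-envelope calculation using only those reported figures puts the rate at roughly one arrest for every 18,000 people scanned:

```python
# Back-of-the-envelope arrest rate, using only the figures reported above.
scanned = 144_366   # approximate number of people scanned across the 2022 deployments
arrested = 8        # arrests attributed to those deployments

rate = arrested / scanned
print(f"Arrests per person scanned: {rate:.6f} (roughly 1 in {round(scanned / arrested):,})")
# prints: Arrests per person scanned: 0.000055 (roughly 1 in 18,046)
```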

All suspects were engaged and detained by officers following alerts from the vehicle-mounted LFR system, which enables police to identify people in real time by scanning their faces and matching them against a database of facial images, or “watchlist”, as they walk by.
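In broad terms, systems of this kind convert each detected face into a numerical “embedding” and raise an alert when that embedding is sufficiently similar to one derived from a watchlist image. The sketch below is purely illustrative of that matching step; the embeddings, names and alert threshold are hypothetical stand-ins, not details of the Met’s system:

```python
import numpy as np

# Illustrative only: a real LFR system derives these vectors ("embeddings") from a
# trained face-recognition model; the names, values and threshold below are all
# hypothetical stand-ins.
watchlist = {
    "subject_A": np.array([0.1, 0.9, 0.3]),
    "subject_B": np.array([0.8, 0.2, 0.5]),
}

def best_match(probe: np.ndarray, watchlist: dict, threshold: float = 0.6):
    """Return the watchlist entry most similar to the probe face embedding,
    or None if nothing clears the (assumed) alert threshold."""
    best_id, best_score = None, -1.0
    for identity, ref in watchlist.items():
        # Cosine similarity between the probe embedding and the reference embedding
        score = float(np.dot(probe, ref) / (np.linalg.norm(probe) * np.linalg.norm(ref)))
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else None

# A passer-by's face embedding (dummy value); an alert is raised only when the
# similarity to some watchlist entry exceeds the threshold.
print(best_match(np.array([0.15, 0.85, 0.35]), watchlist))
```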

However, based on the gulf between the number of people scanned and the number of arrests made, as well as the content of answers provided to Computer Weekly by the MPS about its deployments, civil society groups, lawyers and politicians have condemned the force’s approach to LFR as fundamentally flawed and “irresponsible”.

Competing views

Parliament and civil society have repeatedly called for new legal frameworks to govern law enforcement’s use of biometrics, with calls coming from a House of Lords inquiry into police use of advanced algorithmic technologies; the UK’s former biometrics commissioner, Paul Wiles; an independent legal review by Matthew Ryder QC; the UK’s Equality and Human Rights Commission; and the House of Commons Science and Technology Committee, which called for a moratorium on LFR as far back as July 2019.

But the government claims there is “already a comprehensive framework” in place.

In January 2022, policing minister Kit Malthouse also said there is already a strong framework in place, adding that any new policing tech should be tested in court, rather than legislated for, on the basis that new laws would “stifle innovation”.

In response to Computer Weekly’s questions about its deployments, and whether it would consider halting its use of LFR until a proper framework was in place, the MPS said its use of the technology “has seen numerous individuals arrested now for violent and other serious offences. It is an operational tactic which helps keep Londoners safe, and reflects our obligations to Londoners to prevent and detect crime.”

Speaking with Computer Weekly, London Assembly member Caroline Russell, who is leader of the Greens and sits on the Police Committee, said there needs to be certainty that “all the proper safeguards are in place before the technology is deployed”, adding that “it’s irresponsible to be using it when there are such widely known and worrying flaws in the way that it works”.

Russell acknowledges that there are exceptional circumstances in which LFR could be reasonably deployed – for instance, under the threat of an imminent terrorist attack – but says the technology is ripe for abuse, especially given poor governance combined with the concerns over the MPS’s internal culture raised by the policing inspectorate, which made the “unprecedented” decision to place the force on “special measures” in June 2022 over a litany of systemic failings.

“While there are many police officers who have public service rippled through them, we have also seen over these last months and years of revelations about what’s been going on in the Met, that there are officers who are racist, who have been behaving in ways that are completely inappropriate, with images [and] WhatsApp messages being shared that are racist, misogynist, sexist and homophobic,” she said, adding that the prevalence of such officers continuing to operate unidentified adds to the risks of the technology being abused when it is deployed.

Others, however, are of the view that the technology should be completely banned. Megan Goulding, a lawyer at human rights group Liberty, for example, told Computer Weekly: “We should all be able to walk our streets and public spaces without the threat of being watched, tracked and monitored. Facial recognition technology is a discriminatory and oppressive surveillance tool that completely undermines this ideal.

“Just two years ago in our landmark legal case, the courts agreed this technology violates our rights and threatens our liberties. This expansion of mass surveillance tools has no place on the streets of a rights-respecting democracy.”

She added that instead of actually making people safer, LFR technology will only entrench existing patterns of discrimination and sow division. “History tells us surveillance technology will always be disproportionately used on communities of colour and, at a time when racism in UK policing has rightly been highlighted, it is unjustifiable to use a technology that will make this even worse,” said Goulding.

“It is impossible to regulate for the dangers created by a technology that is oppressive by design. The safest, and only, thing to do with facial recognition is to ban it.”

Analysing the Met’s approach: Proportionality and necessity

Before it can deploy facial-recognition technology, the MPS must ensure that its deployments are “authorised by law”, that the consequent interference with rights (such as the right to privacy) is undertaken for a legally “recognised” or “legitimate” aim, and that this interference is both necessary and proportionate.

For example, the MPS’s legal mandate document – which sets out the complex patchwork of legislation that covers use of the technology – says the “authorising officers need to decide the use of LFR is necessary and not just desirable to enable the MPS to achieve its legitimate aim”.

Karen Yeung, an interdisciplinary professorial fellow in law, ethics and informatics at Birmingham Law School who was called in as an expert witness during the House of Lords police tech inquiry, said there needs to be an individualised justification for each specific LFR deployment.

However, in response to questions about how the force decided each individual deployment was both necessary and proportionate, the MPS has given the same answer to Computer Weekly on multiple occasions.

“The deployment was authorised on the basis of an intelligence case and operational necessity to deploy, in line with the Met’s LFR documents,” it said, adding in each case that “the proportionality of this deployment was assessed giving due regard to the intelligence case and operational necessity to deploy, whilst weighing up the impact on those added to the watchlist and those who could be expected to pass the LFR system”.

On the MPS’s responses, Yeung said: “That’s not good enough, we need to know what the intelligence case is, we can’t take that on faith,” adding that the claims “would not, in my view, meet the test of legality”.

Yeung added that while there are a number of “legally recognised purposes” (such as national security, prevention of disorder or public safety) state authorities can use to intrude on people’s rights, proportionality and necessity tests are already well-established in case law, and exist to ensure these authorities do not unduly interfere.

“In the case of police, they’re going to say ‘it’s prevention of disorder or crime, or public safety’, so they get past first base, but then one of the questions is, ‘is this necessary in a democratic society?’” she said.

“There’s a very rich case law about what that means, but the core test is you can’t use a hammer to crack a nut. So even though a machete might be perfectly good for achieving your task, if a pen knife will do, then you can only use the pen knife, and the use of a machete is unlawful because it’s disproportionate … the basic way of explaining it is that it has to go no further than necessary to achieve the specified goal.”

Relating this back to the MPS’s use of LFR, Yeung said the question then becomes, “is it really necessary to protect the public, to be intruding on 100,000 faces in a few days?”

She further added that, based on the evidence from its test deployments up until 2019, in which most arrests made through the use of LFR were for drug possession offences, MPS deployments are neither proportionate nor necessary, and that suggestions from ministers or senior figures in policing that LFR is being used to stop serious violence or terrorism are “empty claims” without convincing evidence.

It should be noted that of the eight people arrested as a result of the MPS’s 2022 LFR deployments, at least four were arrested in connection to drug possession offences.

“Drug possession offences are hardly violent crime,” said Yeung. “One of the questions is around the urgency and severity of the need to intrude on privacy. So, for example, if there was a terrorist on the loose and we knew that he or she was likely to be in London, even I would say ‘it’s OK’ if the aim is to seek to apprehend a known very dangerous suspect.

“For a limited deployment to catch a very specific person who’s highly dangerous, that is legitimate, but you need to be very clear about specifying the conditions because of the danger that these things become completely blown out of proportion to the seriousness of a specific, pressing social need.”

Russell agreed that the arrests made using LFR simply do not match up with the MPS’s publicly stated purposes for using the technology, which include “targeting violent and other serious crime” and “locating those wanted by the courts and subject to an outstanding warrant for their arrest”.

“There’s nothing about catching people in relation to possession with intent to supply,” she said. “They’re meant to deploy for a particular purpose, but actually the people they’re arresting don’t even necessarily come under that deployment justification.”

Disproportionality built into watchlists

According to both Russell and Yeung, the size and composition of the MPS’s LFR watchlists also bring into question the proportionality and necessity of its deployments.

“One of the important questions is whose face goes on the watchlist?” said Yeung, adding that it should be limited to those wanted for serious crime, such as violent offenders, as per the MPS’s own claims: “Anything less – drug offences, pickpockets, shoplifters – their faces should not be on the watchlist.”

A major part of the issue with watchlists is the use of custody images. While the force’s LFR Data Protection Impact Assessment (DPIA) says that “all images submitted for inclusion on a watchlist must be lawfully held by the MPS”, a 2012 High Court ruling found that its retention of custody images was unlawful because unconvicted people’s information was being kept in the same way as those who were ultimately convicted. It also deemed the minimum six-year retention period to be disproportionate.

Addressing the Parliamentary Science and Technology Committee in March 2019, then-biometrics commissioner Paul Wiles said there was “very poor understanding” of the retention period surrounding custody images across police forces in England and Wales.

He further noted that while both convicted and unconvicted people could apply to have their images removed, with the presumption being that the police would do this if there was no good reason not to, there is “little evidence it was being carried out”.

In response to questions about how it has resolved the issue of unlawful custody image retention, the MPS has, on a number of occasions, cited to Computer Weekly section 64A of the Police and Criminal Evidence Act 1984, which gives police the power to photograph people detained in custody and to retain that image.

According to Russell, people from certain demographics or backgrounds then end up populating its watchlists: “If you think about the disproportionality in stop and search, the numbers of black and brown people, young people, that are being stopped, searched and arrested, then that starts to be really worrying because you start to get disproportionality built into your watchlists.”

In the wake of the MPS’s 28 January deployment in Oxford Circus, which used a watchlist containing 9,756 images (all subsequent watchlists used in 2022 by the MPS were around the 6,700 mark), director of Big Brother Watch, Silkie Carlo, told Computer Weekly: “That’s not a targeted and specified deployment because of a pressing need – it’s a catch net.”

Operational trials

A key point of contention around the MPS’s deployments is the force’s insistence that it is only trialling the technology, which critics say is a false characterisation given that it is deployed in an operational context with the aim of identifying, arresting and prosecuting real-life suspects.

In response to Computer Weekly’s questions about whether the MPS has recreated operational conditions in a controlled environment without the use of real-life custody images, it said: “The MPS has undertaken significant diligence in relation to the performance of its algorithm.” It added that part of this diligence is in continuing to test the technology in operational conditions.

“Alongside the operational deployment, the Met tested its facial-recognition algorithms with the National Physical Laboratory [NPL],” it said. “Volunteers of all ages and backgrounds walk past the facial-recognition system … After this, scientific and technology experts at the NPL will review the data and produce a report on how the system works. We will make these findings public once the report has been completed.”

In the “Understanding accuracy and bias” document on the MPS website, it added that algorithmic testing in controlled settings can only take the technology so far, and that “further controlled testing would not accurately reflect operational conditions, particularly the numbers of people who need to pass the LFR system in a way that is necessary to provide the Met with further assurance”.

Despite using volunteers to test the system in an unknown number of its trials, the MPS confirmed to Computer Weekly that “the probe images of the ‘volunteers’ is not loaded to the live watchlist – testing of those images will be conducted offline”.

Unlike members of the public walking past the system, the MPS’s test plan strategy lays out that these volunteers – whose images are not included in the live watchlists – are able to consent to their faces being scanned, are compensated with payment, provided with a point of contact in the MPS to exercise their data rights, and given full information on their roles and how their data is processed.

Yeung does not dispute the need to test out technologies like LFR, but says there needs to be a strict legal regime in place to make the testing safe, and that any testing should be conducted under specific ethical and legal restraints in a similar fashion to academic research. Otherwise, it should not be able to proceed.

Although Yeung says operational use of LFR should be preceded by trial deployments using voluntary participants only, which she described as a much more “ethical and proportionate way of testing”, she noted that the MPS never considered this in its initial live deployments, which started at Notting Hill Carnival in 2016: “They just went straight into sticking faces of real people in watchlists without their consent, for trivial crimes, and others not for any crimes at all, but included people thought to be ‘of interest’ to the police, which appeared to include individuals who engage in lawful democratic protest.”

In July 2019, a report from the Human Rights, Big Data & Technology Project based at the University of Essex Human Rights Centre – which marked the first independent review into trials of LFR technology by the MPS – highlighted a discernible “presumption to intervene” among police officers using the technology, meaning they tended to trust the outcomes of the system and engage individuals that it said matched the watchlist in use, even when they did not.

On how it has resolved this issue, the MPS said it had implemented additional training for officers involved in facial-recognition operations, adding that “officers are reminded during the training of the importance of making their own decisions on whether to engage with a member of the public or not”.

However, given the issues around custody image retention and officers’ presumption to intervene, Yeung said it is important to recognise that UK police have no power to interfere with a person who is acting lawfully going about their own business in public, and that, outside of specific statutory powers under counter-terror legislation, they cannot ever legally stop someone without reasonable suspicion.

“Even if your face was accurately matched to a database, that doesn’t necessarily mean they have reasonable suspicion that you are about to engage in a crime, or that you have engaged in a crime, unless we have assurance that the only people on the watchlist are those who were wanted for past crimes,” she said, adding that, given the further accuracy problems associated with LFR, the police need to be aware that the person matched by the system may not even be the person they are looking for.

“Under current law, police only have the legal power to intervene with an individual on the basis of ‘reasonable suspicion’ for a past crime, or likely to commit a crime. So, a person who’s been erroneously identified would seem to have no legal obligation to cooperate. What that means is that the ‘human-in-the-loop’ needs to elicit cooperation from that person on the basis of consent.

“That means police officers must be polite, they need to be deferential, and above all they must request cooperation so that this person may disclose their identity voluntarily. What happens is people don’t realise they don’t have an obligation to cooperate in those circumstances; they’re so taken aback by the fact they’ve been stopped that they quickly get out their ID, but in fact because they may not be a correct match, or for other reasons that do not amount to establishing that the police have a reasonable basis for suspicion. If that person is not in fact a reasonable suspect, they have no legal obligation to cooperate. I suspect that such matters are not included in the training.”
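Yeung’s point about erroneous matches can be illustrated with a rough base-rate calculation. The sketch below uses purely hypothetical figures (the deployment size, watchlist hit rate and false-match rate are assumptions, not measured values for the Met’s system) to show why, when almost everyone scanned is not on a watchlist, an alert on its own can be a weak basis for suspicion:

```python
# Hypothetical base-rate illustration: every figure here is an assumption, not a
# measured property of any real LFR deployment.
scanned = 100_000            # passers-by scanned in a deployment (assumed)
genuinely_wanted = 10        # of whom this many are actually on the watchlist (assumed)
true_match_rate = 0.90       # chance a wanted person triggers an alert (assumed)
false_match_rate = 0.001     # chance anyone else triggers an alert (assumed)

true_alerts = genuinely_wanted * true_match_rate
false_alerts = (scanned - genuinely_wanted) * false_match_rate

print(f"Expected correct alerts:  {true_alerts:.0f}")
print(f"Expected mistaken alerts: {false_alerts:.0f}")
print(f"Chance a given alert is correct: {true_alerts / (true_alerts + false_alerts):.0%}")
# With these assumptions: 9 correct vs ~100 mistaken alerts, so under 10% of alerts are correct.
```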

On the characterisation of LFR’s operational use as trials, Russell added that “it doesn’t feel like good governance to be doing this on a wing and a prayer: you’ve got to know what you’re doing and be really sure you’ve worked through all the issues so that people’s well-being and privacy is protected”.

Power dynamics and the red herring of accuracy

According to Yeung, even if LFR technology gets to the point where it is able to identify faces with 100% accuracy 100% of the time, “it would still be a seriously dangerous tool in the hands of the state”, because “it’s almost inevitable” that it would continue to entrench existing power discrepancies and criminal justice outcomes within society.

“Who are the persons of interest to police? They’re not more wealthy, well-heeled, well-to-do middle class people, they’re people from ethnic minorities, people who are considered to be ‘undesirable’, likely to be ‘at-risk’, likely to be ‘disruptive’, including political and environmental protestors, who use more visible methods to express their political objections – all of these people will likely be regarded by the police as falling within the net, without question,” she said, noting the MPS never deploys LFR in areas such as Canary Wharf.

“There are plenty of drug offences going on in those communities, so why aren’t we sticking the LFR there? It will be the most disadvantaged who will be increasingly stigmatised and fearful of the way these technologies are used.”

Yeung added that while accuracy issues with LFR put the stop and search burden on those who are more likely to be erroneously matched (due to police officers’ presumption to intervene leading to situations where they are forced to identify themselves), accuracy is ultimately a “red herring”, because “even if it was 100% accurate, the worries would still be profound”.

“I don’t think it’s rocket science – if you’re the state, and you want to exert control over your population, this is a dream technology,” she said. “It’s completely in the state’s interest to be able to exert more fine grained control. The benefits of these powerful technologies are alleged to lie in their capacity to enable law enforcement officials to ‘find terrorists’ and ‘locate missing children’, but there is no evidence of effectiveness in successfully apprehending individuals of this kind as far as I am aware. I haven’t seen a shard of it.”

Russell reiterated the point that watchlists themselves are constructed based on historic arrest data. “If you’ve got a cohort of people who have been arrested, and we know there is disproportionality in the number of people from black and brown communities who get arrested, then you’ve got an in-built disproportionality that is properly worrying,” she said.

However, the problem ultimately comes down to governance. “You could have deployed accurate facial recognition technology, where the governance around it means it is completely unacceptable,” said Russell. “In terms of the invasion of privacy as we walk around the streets of our city, it’s not okay for our faces to be constantly scanned and for people to know where we’re going, for all sorts of reasons … it comes down to a basic right to privacy.

“The need for the deployment of facial recognition to have really clear safeguards around it is absolutely critical. That technology should only be used in very extreme circumstances … there’s got to be real understanding of the flaws in the technology so that the people using it are aware of the way their unconscious bias and their bias to accept the technology’s [identifications affects the outcomes].”

She further added that it was “inexcusable” that the MPS is continuing to use LFR technology in live deployments “without having resolved the governance issues and ensuring that people’s rights are safeguarded”.

Yeung concluded that the MPS ramping up its use of LFR represents “a crucial point in time” before the roll-outs are completely normalised. “My worry is that if the police continue to push ahead, by stealth, without open, transparent dialogue, by consent with our populations, then we will find ourselves in a situation where the use of LFR has been completely normalised, and it will be too late, the horse will have bolted,” she said.

“Of course it’s in the interest of law enforcement, but it’s not in the interest of democracy, the interests of freedom and liberty, or the interests of vulnerable communities who have been subject to stigmatisation and oppression,” said Yeung.

“We need to have a much more open, public, evidence-based conversation to decide whether and on what terms we’re willing to accept these technologies, and they need to be subjected to far more rigorous, meaningful and effective institutional safeguards.”
