MEPs approve report opposing certain uses of AI by police
Members of the European Parliament (MEPs) have approved a report on the use of artificial intelligence (AI) by police in Europe, which opposes using the technology to “predict” criminal behaviour and calls for a ban on biometric mass surveillance.
MEPs also voted down amendments to the report on AI in criminal matters from the Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE), which critics claimed would have weakened the bloc’s commitment to fundamental human rights by opening the door to mass surveillance if accepted.
While it is not legally binding, the LIBE report outlines European policymakers’ collective thinking on how the use of AI by police should be regulated and, according to European Digital Rights (EDRi), could influence later debates on the European Commission’s (EC) proposed AI Act published in April 2021.
The full text of the report, which was approved in a 377 to 248 vote on 5 October 2021, clearly identifies that many AI-driven identification techniques in use today “disproportionately misidentify and misclassify, and therefore cause harm to racialised people, individuals belonging to certain ethnic communities, LGBTI people, children and the elderly, as well as women”.
It further “highlights the power asymmetry between those who employ AI technologies and those who are subject to them”, and calls for outright bans on the use of AI-related technologies for proposing judicial decisions; for any biometric processing that leads to mass surveillance in public spaces; and for the mass-scale scoring of individuals.
Voting down amendments
On the three amendments specifically – all made by the centre-right European People’s Party (EPP) group – civil society groups said they would open the door to mass biometric surveillance, as well as enable discriminatory predictive policing practices.
The first EPP amendment, for example, removed opposition to the use of predictive policing systems, and instead called for such systems to be deployed by police with “the utmost caution…when all necessary safeguards are in place to eliminate enforced bias”.
The second amendment, meanwhile, removed the call for a moratorium on the use of facial-recognition and other biometric technologies, which stated that such a moratorium should remain in place at least until the technical standards could be considered fully compliant with fundamental rights.
Instead, the EPP amendment called for an improvement in technical standards, and stressed “that democratic oversight and control should be further strengthened with a view to ensuring that such technologies are only used when necessary and proportionate”.
In its third amendment, the EPP removed the report’s call for the EC to stop funding AI-related research or deployments “that are likely to result in indiscriminate mass surveillance in public spaces”; and instead argued that it should only stop funding projects that contribute to mass surveillance “which is not consistent with… [EU] and national law”.
It added that AI-enabled mass surveillance should not be banned when “strictly necessary for very specific objectives… [with] prior judicial authorisation”, and with strict “place and time” limits on data processing.
In an open letter published on 4 October, EDRi and 39 other civil society groups – including Access Now, Fair Trials, Homo Digitalis and the App Drivers and Couriers Union (ADCU) – called on MEPs to vote against the amendments on the basis that they would allow discriminatory predictive policing and mass biometric surveillance.
“The adoption of these amendments would undermine the rights to a fair trial, a private and family life, non-discrimination, freedom of expression and assembly, data protection rights, and – fundamentally – the presumption of innocence,” said the letter.
“We strongly believe the report in the iteration adopted by the LIBE Committee took the most balanced and proportional stance on AI in law enforcement from a fundamental rights perspective.”
In response to the MEPs’ vote, Griff Ferris, legal and policy officer at Fair Trials, which campaigns for criminal justice equality globally, said: “We are very pleased that a significant majority of MEPs rejected the amendments to the LIBE report, taking a stand against AI and automated decision-making systems which reproduce and reinforce racism and discrimination, undermine the right to a fair trial and the presumption of innocence, and the right to privacy.
“This is a landmark result for fundamental rights and non-discrimination in the technological age. MEPs have made clear that police and criminal justice authorities in Europe must no longer be allowed to use AI systems which automate injustice, undermine fundamental rights and result in discriminatory outcomes.
“This is a strong statement of intent that the European Parliament will protect Europeans from these systems, and a first step towards a ban on some of the most harmful uses, including the use of predictive and profiling AI, and biometric mass surveillance.”
Fair Trials previously called for an outright ban on using AI and automated systems to “predict” criminal behaviour in September 2021.
Writing on Twitter, Sarah Chander, a senior policy advisor at EDRi, said of the proposed amendments that “telling the police to ‘use the utmost caution’ when deploying predictive policing and facial-recognition…does not sound like an adequate plan to protect our rights and freedoms.”
Green MEP Alexandra Geese added: “In light of already over 60,000 Europeans having signed the Reclaim Your Face petition, and the responsible committee’s report following suit and calling for a much needed ban of biometric mass surveillance, it is simply outrageous that the Conservatives still try to push their idea of an AI police state through plenary.”
Contents of the debate
The vote in favour of the report and against the amendments was preceded by a debate on 4 October about the benefits and dangers of using AI in the context of law enforcement, which largely revolved around the potential of biometric technologies to enable mass surveillance.
While most MEPs participating in the debate acknowledged the risks to fundamental rights associated with the use of AI by the police and judiciary, their tolerance of that risk, and their views on how it could or should be managed, diverged significantly.
Some, for example, believed the risks presented by biometric AI technologies are so great that law enforcement agencies should simply be banned from using them.
“We believe that in Europe there’s no room for mass biometric surveillance, and that fighting crime cannot be done to the detriment of citizens’ rights,” said Brando Benifei, a member of the Progressive Alliance of Socialists and Democrats, adding that biometric surveillance in public spaces will undermine key democratic principles, including freedom of expression, association and movement.
“At the same time, [using] predictive techniques to fight crime also have a huge risk of discrimination, as well as lack of evidence about how accurate they actually are, were undermining the basis of our democracy [and] the presumption of innocence.”
He added that the European Commission’s proposed AI Act, as it currently stands, does not provide the necessary guarantees for protecting fundamental rights, as even in the most high-risk use cases AI developers themselves are in charge of determining the extent to which their systems align with the regulation’s rules, otherwise known as ‘conformity assessments’.
“We believe that self-evaluation entails too much of a risk of error and violation that will only be discovered later by the security authorities, if they have the means available to them to do that – that will lead to irreparable damage in people’s lives.”
Others, however, said that while it would need to be accompanied by legal safeguards, AI was a vital tool for fighting crime and absolutely necessary to the security of the state, especially in the face of newly digitally enabled criminal activity.
“Today, criminals are shifting their operations. Whether it is organised crime, terrorism, child porn, money laundering, or human trafficking, it happens online,” said EPP member Tom Vandenkendelaere, adding that AI – including facial-recognition in public spaces – will enable police to fight crime in a more targeted and efficient manner.
“This does not mean that we want to give police forces a carte blanche to do whatever they want. It’s our duty as policymakers to set up a strong legal framework within which they can safely use AI while guaranteeing the safety of our citizens.
“It’s too easy to argue for moratoria or bans without taking into account the challenges our police officers deal with on the ground. It is our duty…to find the right balance between the use of new technologies on the one hand, and the protection of our fundamental rights on the other hand – we have to remain vigilant, but we should not throw out the baby with the bathwater.”
Other MEPs variously noted the need for AI to better fight cyber crime, terrorism and money laundering, although most of them did acknowledge the need for strong safeguards.
Jean-Lin Lacapelle, an MEP for the far-right Identity and Democracy (ID) group, took the view that AI has “been spoiled by the European Union” because instead of “ensuring the security of ourselves and our children, they’re limiting the use” by not using it “in fighting delinquency” and to detect asylum seekers “lying” about their age at the border.
Some MEPs also directly challenged the EPP on its amendments to the report, including Renew Europe member Svenja Hahn, who said that human rights were non-negotiable and that the EPP were “trying to push forward their dreams for AI surveillance against our fear of mass surveillance”.
Greens MEP Marcel Kolaja, whose comments opened the debate, railed against the EPP and its amendments in even stronger terms, accusing them of “torpedoing” the proposed ban on facial-recognition in public spaces “and asking for legal means to spy on citizens”.
Noting that two journalists have been murdered in the EU in the past year, Kolaja added that allowing facial-recognition tech in public spaces would give oligarchs “even more tools in their hands to persecute and oppress journalists”.
Another Green MEP, Kim Van Sparrentak, urged her EPP colleagues to be more realistic: “AI is not a quick solution to fight crime or terrorism – an AI camera will not detect radicalisation, and automating police work is not a substitute for police funding and community workers.
“Looking at the US, in New York City and Boston, replacing AI-driven predictive policing with community policing [has] lowered crime rates, and San Francisco and Boston have already banned biometrics in public spaces – not only is a ban perfectly feasible, we in the EU are far behind in our ethical AI choices.”