
How police live facial recognition subtly reconfigures suspicion

Police use of live facial recognition (LFR) technology reconfigures suspicion in subtle yet important ways, undermining so-called human-in-the-loop safeguards.

Despite the long-standing controversies surrounding police use of LFR, the technology is now used in the UK to scan millions of people’s faces every year. While early deployments were infrequent, taking place only every few months, they are now routine, with facial recognition-linked cameras regularly deployed at events and in busy areas in places like London and Cardiff.
Given the potential for erroneous alerts, police forces deploying the technology claim that a human will always make the final decision on whether to engage someone flagged by an LFR system. This measure is intended to ensure accuracy and reduce the potential for unnecessary police interactions.
However, a growing body of research highlighting the socio-technical nature of LFR systems suggests the technology is undermining these human-in-the-loop safeguards by reshaping (and reinforcing) police perceptions of who is deemed suspicious, and, in turn, how officers interact with those people on the street.

A growing body of research
According to one paper from March 2021 – written by sociologists Pete Fussey, Bethan Davies and Martin Innes – the use of LFR “constitutes a socio-technical assemblage that both shapes police practices yet is also profoundly shaped by forms of police suspicion and discretion”.
The authors argue that while, under current police powers, an officer recognising someone may constitute grounds for a stop and search, this changes when LFR is inserted into the process, because the “initial recognition” no longer results from an officer exercising their own discretion.
“Instead, officers act more akin to intermediaries, interpreting and then acting upon a (computer-instigated) suggestion originating outside of, and prior to, their own intuition,” the sociologists wrote. “The technology thus performs a framing and priming role in how suspicion is generated.”
More recently, academics Karen Yeung and Wenlong Li argued in a September 2025 research paper that, given the potential for erroneous matches, the mere generation of an LFR match alert is not in itself enough to constitute “reasonable suspicion”, which UK police are required to demonstrate to legally stop and detain people.
“Although police officers in England and Wales are entitled to stop individuals and ask them questions about who they are and what they are doing, individuals are not obliged to answer these questions in the absence of reasonable suspicion that they have been involved in the commission of a crime,” they wrote.
“Accordingly, any initial attempt by police officers to stop and question an individual whose face is matched to the watchlist must be undertaken on the basis that the individual is not legally obliged to cooperate for that reason alone.”
Although officers are legally required to have reasonable suspicion, a July 2019 paper from the Human Rights, Big Data & Technology Project, based at the University of Essex Human Rights Centre, observed a discernible “presumption to intervene” among police officers using the technology. The paper marked the first independent review of the Metropolitan Police’s trials of LFR.
According to authors Fussey and Daragh Murray, a reader in international law and human rights at Queen Mary’s School of Law, this means the officers involved tended to act on the system’s outputs and engage individuals it flagged as matching the watchlist in use, even when the match was incorrect.
As a form of automation bias, the “presumption to intervene” matters in a socio-technical sense because, in practice, it risks exposing random members of the public to unwarranted or unnecessary police interactions.

Priming suspicion
Although Yeung and Li noted that individuals are not legally obliged to cooperate with police in the absence of reasonable suspicion, there have been instances where failing to comply with officers after an LFR alert has affected people negatively.
In February 2024, for example, anti-knife crime campaigner Shaun Thompson, who was returning home from a volunteer shift in Croydon with the Street Fathers youth outreach group, was stopped by officers after being wrongly identified as a suspect by the Met’s LFR system.
Thompson was then held for almost 30 minutes by officers, who repeatedly demanded scans of his fingerprints and threatened him with arrest, even though he provided multiple identity documents showing he was not the individual on the database.
Thompson has publicly described the system as “stop and search on steroids” and said it felt like he was being treated as “guilty until proven innocent”. Following the incident, he launched a judicial review challenge against the Met’s use of LFR, due to be heard in January 2026, in a bid to stop others ending up in similar situations.
Even when no alert has been generated, there are instances where the use of LFR has prompted negative interactions between citizens and the police.
During the Met’s February 2019 deployment in Romford, for example, Computer Weekly was present when two members of the public were stopped for covering their faces near the LFR van because they did not want their biometric information to be processed.
Writing to the Lords Justice and Home Affairs Committee (JHAC) in September 2021 as part of its investigation into policing algorithms, Fussey, Murray and criminologist Amy Stevens noted that while most surveillance in the UK is designed to target individuals once a certain threshold of suspicion has been reached, LFR inverts this by treating everyone who passes through the camera’s gaze as suspicious in the first instance.
This means that, although people can subsequently be eliminated from police inquiries, the technology itself shapes how officers form suspicion, by essentially “priming” them to engage with people flagged by the system.
“Any potential tendency to defer or over-rely on automated outputs over other available information has the ability to transform what is still considered to be a human-led decision to de facto an automated one,” they wrote.
“Robust monitoring should therefore be in place to provide an understanding of the level of deference to tools intended as advisory, and how often and in which circumstances human users make an alternative decision to the one advised by the tool.”

Watchlist creation and bureaucratic suspicion
A key aspect mediating the relationship between LFR and the concept of “reasonable suspicion” is the creation of watchlists.
Socio-technically, researchers investigating LFR use by police have expressed a number of concerns around watchlist creation, including how it “structures the police gaze” to focus on particular people and social groups.
In their 2021 paper, for example, Fussey, Davies and Innes noted that creating watchlists from police-held custody images naturally means police attention will be targeted toward “the usual suspects”, inducing “a technologically framed bureaucratic suspicion in digital policing”.
This means that, rather than linking specific evidence from a crime to a particular individual (known as ‘incidental suspicion’), LFR instead relies on the use of general, standardised criteria (such as a person’s prior police record or location) to identify potential suspects, which is known in sociology as “bureaucratic suspicion”.
“Individuals listed on watchlists and databases are cast as warranting suspicion, and the AFR (automated facial recognition) surveillant gaze is specifically oriented towards them,” they wrote.
“But, in so doing, the social biases of police activity that disproportionately focuses on young people and members of African Caribbean and other minority ethnic groups (inter alia The Lammy Review 2017) are further inflected by alleged technological biases deriving from how technical accuracy recedes for subjects who are older, female and for some people of colour.”
Others have also raised separate concerns about the vague criteria around watchlist creation and the importance of needing “quality” data to feed into the system.
Yeung and Li, for example, have highlighted “unresolved questions” about the legality of watchlist composition, including the “significance and seriousness” of the underlying offence used to justify a person’s inclusion, and the “legitimacy of the reason why that person is ‘wanted’ by the police” in the first place.
For example, while police repeatedly claim that LFR is used solely to find the most serious or violent offenders, watchlists regularly contain images of people wanted for drug, shoplifting or traffic offences, which do not legally meet this definition.
Writing in their September 2025 paper, Yeung and Li also noted that while the Met’s watchlists were populated with images of individuals wanted on outstanding arrest warrants, they also included “images of a much broader, amorphous category of persons” who did not meet the definition of serious offenders.
This included “individuals not allowed to attend the Notting Hill Carnival”, “individuals whose attendance would pose a risk to the security and safety of the event”, “wanted missing” individuals and children, and even individuals who “present a risk of harm to themselves and to others” and those who “may be at risk or vulnerable”.
In December 2023, senior officers from the Met and South Wales Police confirmed that LFR operates on a “bureaucratic suspicion” model, telling a Lords committee that facial recognition watchlist image selection is based on generic crime categories attached to people’s photos, rather than a context-specific assessment of the threat presented by a given individual.
The Met Police’s then-director of intelligence, Lindsey Chiswick, further told Lords that whether or not something is “serious” depends on the context, and that, for example, retailers suffering from prolific shoplifting would be “serious for them”.
While the vague and amorphous nature of police LFR watchlist creation has been highlighted by other academics – including Fussey et al, who argued that “broad categories offer significant latitude for interpretation, creating a space for officer discretion with regards to who was enrolled and excluded from such databases” – the issue has also been raised by the courts.
In August 2020, for example, the Court of Appeal ruled that the use of LFR by South Wales Police was unlawful, in part because the vagueness of the watchlist criteria – which used “other persons where intelligence is required” as an inclusion category – left excessive discretion in the hands of the police.
“It is not clear who can be placed on the watchlist, nor is it clear that there are any criteria for determining where AFR can be deployed,” said the judgment, adding that, “in effect, it could cover anyone who is of interest to the police.”
During the December 2023 Lords session, watchlist size was also highlighted as an important socio-technical factor by Yeung, who was called to give evidence because of her expertise in the area.
“There is a divergence between the claims that they only put pictures of those wanted for serious crimes on the watchlist, and the fact that in the Oxford Circus deployment alone, there were over 9,700 images,” she said.

Unlawful custody images retention
Further underpinning concerns about the socio-technical impacts of watchlist creation, there are ongoing issues with the unlawful retention of custody images in the Police National Database (PND), which is the primary source of images used to populate police watchlists.
In 2012, a High Court ruling found the retention of custody images in the PND to be unlawful on the basis that information about unconvicted people was being treated in the same way as information about people who were ultimately convicted, and that the six-year retention period was disproportionate.
Despite the 2012 ruling, millions of custody images are still being unlawfully retained.
Writing to other chief constables to outline some of the issues around custody image retention in February 2022, the National Police Chiefs Council (NPCC) lead for records management, Lee Freeman, said the potentially unlawful retention of an estimated 19 million images “poses a significant risk in terms of potential litigation, police legitimacy, and wider support and challenge in our use of these images for technologies such as facial recognition”.
In November 2023, the NPCC confirmed to Computer Weekly that it had launched a programme that would seek to establish a management regime for custody images, alongside a review of all currently held data by police forces in the UK.
The issue was again flagged by the biometric commissioner of England and Wales, Tony Eastaugh, in December 2024, when he noted in his annual report that “forces continue to retain and use images of people who, while having been arrested, have never subsequently been charged or summonsed”.
Eastaugh added that while work was already “underway” to ensure the retention of images is proportionate and lawful, the custody images of unconvicted individuals still being held may be used for facial recognition purposes.


Published: 2025-12-08 10:29:00

Source: www.computerweekly.com