It has been more than five years since the courts last discussed the use of AI in public decision-making. That was in R (Bridges) v Chief Constable of South Wales Police (Information Commissioner and others intervening) [2020] 1 WLR 5037. In the universe of AI, five years is a mind-bendingly long time. Can we even remember a time before hallucinated authorities and lonely teenagers falling in love with chatbots? Many of us have been waiting with bated breath for the next word on how the Administrative Court will constrain public authorities’ use of AI.
After that long wait, we have been given R (Thompson and Carlo) v Commissioner of Police for the Metropolis [2026] EWHC 915 (Admin).
Live Facial Recognition Technology (“LFR”)
Both Bridges and Thompson concerned the use of LFR on an overt basis by the police, specifically in order to locate persons whom the police were looking for in pursuit of their policing objectives. In a nutshell, when deploying this technology, the police use dedicated CCTV cameras in public settings in order to capture the images of all members of the public passing the cameras. The facial features of the individuals as shown in the captured images are then analysed to create unique biometric data, expressed as a set of numerical values. That biometric data is in turn compared with the biometric data of all individuals whom the police have placed on the relevant watchlist (i.e. the list of people whom they are seeking to locate through the relevant LFR deployment). If there is no “positive match”, the biometric data is immediately deleted (this happens within a fraction of a second of the data being captured). If there is a positive match, the images are examined by officers, who will then decide what action should be taken.
The judgment in Bridges
In Bridges, the Court of Appeal found two ways in which LFR was being deployed unlawfully in South Wales.
The first was that the police had breached their public sector equality duty by not taking reasonable steps to assess whether the software being used built in unacceptable biases on grounds of race or sex.
The second was that the policy under which the technology was being deployed left the police with too wide a discretion. It therefore breached the requirement that an interference with a person’s right to a private life (under Article 8 ECHR) must be “in accordance with the law” (“IAWL”). That was primarily because the policy did not explain “where” the technology would be deployed or “who” it would be used to find. The police could therefore use it in arbitrary and unforeseeable ways.
Only the second of these issues returned for our next instalment in this blockbuster series.
The claim in Thompson
In Thompson, the Claimants challenged the use of LFR by the Met. They got permission to advance two grounds, which both raised essentially the same issue. Was the Met’s policy IAWL for the purposes of Article 8? And was it prescribed by law (“PBL”) for the purposes of the Article 10 right to freedom of expression and the Article 11 right to freedom of assembly and association?
It was common ground that IAWL and PBL amounted to the same thing.
Why the outcome was different this time
The Met had a big advantage over the police force in Bridges. They had the judgment in Bridges, so they knew exactly what had troubled the Court of Appeal last time.
In fact, before the claim was issued, the Met had already planned a review of its LFR policy (“the Policy”) (§§22-24). It accepted that some parts of the Policy might give the “impression that it took a more permissive approach” than intended. Therefore, after the claim was issued, the Met applied to the Court for a stay so that it could complete its review. The Court granted the stay. The Met then set about clarifying the Policy.
And it succeeded.
In the judgment, the Court held that it had not seen any evidence that the technology, or the way the police were using it, had changed significantly since Bridges (§205). Therefore, the Court could proceed from the same starting point in terms of what was required for the Policy to be IAWL and PBL (§207). That was that “laws and policies must be sufficiently foreseeable in their terms to give individuals an adequate indication as to the circumstances in which and the conditions on which the authorities are entitled to resort to measures affecting their rights under the ECHR” (§212).
The Court found that the Met’s use of LFR, as governed by the Policy, met that standard.
The Policy took a detailed approach to answering the “why”, “where” and “who” questions. Different parts of the Policy addressed each of these questions. It broke down the analysis of each of them by setting requirements for a series of “use cases” for LFR. For example, different requirements had to be met if the technology was being used to support “policing of a crime hotspot” as opposed to “protection of critical national infrastructure” (§§92-117). Further, the Policy required officers to consider proportionality in detail, and gave them extensive guidance on the factors to be considered.
In particular, the Court was not persuaded by the Claimants’ argument that, viewed in isolation, parts of the “where” elements of the Policy could justify the deployment of LFR technology in more than half of London. The mere fact that a discretionary power is broad is insufficient to make that power not IAWL; rather, the question is whether the exercise of the power is subject to principles which prevent decisions that are arbitrary or dependent on the will of the decision-maker (§60). The Court accepted the Met’s analysis that all the requirements of the Policy were “interlocking” (§216, §222) and acted together to ensure that its application had the quality of law. The Policy’s reference to the Met’s “operational experience” regarding crime rates was sufficiently specific: the police are entitled to rely on their specialist and corporate knowledge when deploying LFR, and indeed this is something the public would expect (§§195-197). Further, the guidance on proportionality acted as an effective safeguard against arbitrary outcomes (§§224-227).
The judgment demonstrates that, as in any area of judicial review, processes are easiest to defend if they are underpinned by detailed, careful policies, drafted with a close eye on the law.
Distractions
The Court gave short shrift to arguments outside the scope of the claim, refusing to admit numerous witness statements supporting them and criticising irrelevant arguments. This was not a claim about the proportionality of LFR technology, or its application in a particular scenario. To the extent that discrimination arguments might have been relevant to the foreseeability of the policy, they had been insufficiently developed and were not supported by evidence (§210).
The next AI blockbuster?
As AI becomes more and more embedded in governmental decision-making, I doubt it will be another five years before the next judgment on this topic. The courts seem to accept the reality that AI is here to stay as part of public decision-making, so cases are unlikely to turn on whether its use is lawful in principle. However, many questions remain on specifically when and how it should be used.
Here, the Court was unmoved by complaints of discrimination where those complaints were unsupported by proper evidence. However, that issue is likely to be fertile ground for challenge in future claims, as it was in Bridges.
For now, the wait for the next AI blockbuster resumes.
Anya Proops KC and Raphael Hogarth of 11KBW acted for the Met, together with Robert Talalay of 5 Essex Chambers, instructed by Rex Nicholls and Chloe Cambridge at the Metropolitan Police Service.
Hannah Slarks