The Home Secretary, Shabana Mahmood, has announced plans to expand police use of facial recognition across England and Wales, alongside a new national drive to roll out artificial intelligence tools to cut paperwork and speed up investigations, in a package the Home Office says is worth more than £140 million.

In a press release issued alongside a policing reform White Paper, the Home Office said live facial recognition would be deployed nationally and that the number of live facial recognition (LFR) vans would “triple”, with “50 vans available to every police force” across England and Wales. It also said a new national AI centre, branded “Police.AI”, would help forces adopt AI tools for tasks including transcription, CCTV review, redaction and control room triage.

The announcement matters because it combines two changes that increase the scale and reach of police technology: a structural push towards more national standard-setting and procurement, and a simultaneous expansion of biometric and AI capabilities. Civil liberties specialists have long argued that the bigger the scale of biometric surveillance, the more important it becomes that safeguards are consistent, enforceable and independently audited, because weak practice in one area can become standard practice everywhere.

Mahmood said: “Criminals are operating in increasingly sophisticated ways. However, some police forces are still fighting crime with analogue methods. We will roll out state of the art tech to get more officers on the streets and put rapists and murderers behind bars.”

The Home Office said the programme would be accompanied by legislation for a new legal framework for police use of facial recognition and “similar technologies”, arguing that existing governance is spread across multiple laws and guidance and is difficult for the public to navigate.

Alongside the facial recognition expansion, the Home Office said “Police.AI” would be set up to roll out AI to all forces in England and Wales, with the aim of freeing officers from paperwork and returning “up to six million hours” to frontline policing each year, a claim it said was equivalent to 3,000 police officers.

The Home Office described the investment as £141 million, saying £115 million would come as part of its police reform White Paper and an additional £26 million would fund a “national facial recognition system” for police. The government also framed the plan as an attempt to end a “postcode lottery” in access to technology, saying only “a small handful” of forces have implemented automation and AI for forms, and that 15 out of 43 forces have access to LFR.
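The officer-equivalent claim rests on an unstated conversion: dividing six million hours by 3,000 officers implies roughly 2,000 working hours per officer per year, a common planning assumption the release does not spell out. A minimal sketch of that arithmetic, using only the release’s own headline numbers:

```python
# Rough arithmetic check on the "six million hours = 3,000 officers" framing.
# The officer-equivalent figure implies an assumed number of productive hours
# per officer per year; the Home Office release does not state which value it used.

hours_returned = 6_000_000      # "up to six million hours" per year (Home Office claim)
officer_equivalent = 3_000      # claimed equivalent in police officers

implied_hours_per_officer = hours_returned / officer_equivalent
print(f"Implied hours per officer per year: {implied_hours_per_officer:,.0f}")
# -> 2,000 hours, i.e. roughly a full-time year before leave, training and abstractions
```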

Some elements of the package reflect an existing policy direction. The Home Office has already been consulting on a clearer statutory framework for facial recognition use, explicitly acknowledging that the current legal basis is a patchwork of common law policing powers, data protection, equalities and human rights obligations, and professional guidance. The Home Office also previously announced funding for LFR vans, including an August 2025 announcement that ten new vans would be provided to seven forces.

However, several headline claims in the new release will be scrutinised for detail and definitional clarity, including what “50 vans available to every police force” means in practice, how “six million hours” was calculated, and what is included in the promised £26 million “national facial recognition system”.

The Home Office said the Metropolitan Police had “caught over 1,700 dangerous criminals” using live facial recognition, including “rapists, domestic abusers and robbers”. That figure echoes recent Met messaging that links large numbers of “dangerous offenders” removed from the streets with LFR deployments, but critics and researchers stress that outcomes can be described in different ways. For a technology that generates alerts rather than final decisions, key questions include whether figures refer to arrests, positive identifications, charges, convictions, or other outcomes such as wanted people being located for reasons unrelated to the alert.

The way the Home Office described safeguards also closely mirrors existing police and government lines on LFR: that it is intelligence-led, uses deployment-specific watchlists, and that potential matches are reviewed by trained officers before action is taken. In the release, the Home Office said: “Live facial recognition will only identify a person if they are on a police watchlist,” and that “facial recognition technology does not make decisions, it only suggests potential matches which are reviewed and confirmed by specially trained officers”.

Privacy and human rights lawyers note, however, that watchlist-limited matching is not the whole privacy picture, because the technology still processes the biometric data of everyone who passes a camera in order to decide they are not a match. UK courts have previously found that even brief biometric processing in public engages privacy rights. In the leading case on police LFR, the Court of Appeal held that South Wales Police deployments were unlawful in part because policies left too much discretion over who could be put on watchlists and where the technology could be used, underlining the importance of clear, consistent limits.

The Home Office said it would legislate for a new framework to give police “clearer, consistent standards” for facial recognition and similar tools, and repeated that police use of facial recognition is governed by data protection, equality and human rights laws, and must be necessary, proportionate and fair. It did not, in the release, set out specific statutory limits on locations or events, nor detail any new redress mechanism for people who believe they have been wrongly flagged or wrongly watchlisted.

The plan also lands amid a growing technical debate over what “facial recognition” means in operational terms, and which type is being expanded. Policing and Home Office documents typically distinguish between at least three uses, which carry different risks and error profiles.

Live Facial Recognition (LFR) uses live video feeds in public places to compare passers-by against a bespoke watchlist for a particular deployment. Retrospective Facial Recognition (RFR) is used after incidents, searching still images from sources such as CCTV, doorbell cameras or mobile phone footage against police databases, including custody image repositories. Operator-Initiated Facial Recognition (OIFR) generally refers to a mobile, officer-driven check in the street against custody images or similar databases.

The Home Office release foregrounded LFR vans and watchlists, but it also said new AI tools would help forces “identify suspects from CCTV, doorbell and mobile phone footage that has been submitted as evidence by the public” — a description that aligns more closely with retrospective facial recognition workflows, and with the rapid scaling of image search across national databases.

That distinction matters for both privacy and bias concerns. Independent testing suggests error rates, and potential demographic disparities, can differ sharply between systems and settings.

For LFR, the Home Office said the algorithm used in the “national LFR capability funded by the Home Office last year” has been independently tested by the National Physical Laboratory (NPL), and claimed NPL “found no statistically significant differences in performance based on gender, age or ethnicity, at settings police use”.

NPL’s published equitability work on operational-style LFR deployments reported that, at a face-match threshold of 0.6, the system correctly identified watchlisted people about 89% of the time in its tests. False positive identification rates were low in those test conditions: around 0.017% (roughly one in 6,000) with a 10,000-image watchlist, and 0.002% (roughly one in 50,000) with a 1,000-image watchlist. At that threshold, the report found no statistically significant differences in true positive rates across gender and ethnicity, but it did identify statistically significant differences by age, with poorer performance for younger people, and it warned that lowering the threshold can increase false positives and may introduce statistically significant imbalances in who experiences false matches.
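Percentage rates of that kind are easier to interpret when converted into expected false matches for a given deployment. The sketch below applies NPL’s published rates to illustrative footfall figures; the footfall numbers are assumptions chosen for illustration, not values from the NPL report.

```python
# Illustrative conversion of NPL's published false positive identification rates
# (FPIR) into expected false matches per deployment. The footfall figures are
# assumptions chosen for illustration, not values from the NPL report.

fpir_by_watchlist = {
    "10,000-image watchlist": 0.017 / 100,  # 0.017% at threshold 0.6
    "1,000-image watchlist": 0.002 / 100,   # 0.002% at threshold 0.6
}

for faces_scanned in (10_000, 100_000):     # assumed footfall past the camera
    for label, fpir in fpir_by_watchlist.items():
        expected_false_matches = faces_scanned * fpir
        print(f"{faces_scanned:>7,} faces, {label}: "
              f"~{expected_false_matches:.1f} expected false matches")
```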

Police operational reporting can look different again, partly because forces count “false alerts” at a later stage in the pipeline than an algorithmic false positive. In its most recent annual reporting period, the Metropolitan Police reported processing more than three million faces, generating just over 2,000 alerts, and recording a small number of false alerts. The Met has presented those figures as evidence of very low false alert rates in practice, while acknowledging that human officers review system alerts before any engagement takes place.

Researchers caution that headline “false alert” figures depend on operational choices including thresholds, camera placement, watchlist size, image quality filters, and how an “alert” is defined — for example, whether it is counted before or after officer review. Higher thresholds can reduce false alerts but increase the risk of false negatives, meaning wanted people are missed.
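The counting-stage point can be made concrete with the Met’s headline figures above. The sketch below shows how the same deployment data produces very different “false” rates depending on the denominator and on where in the pipeline alerts are counted; the false-alert count used here is an illustrative placeholder, not the Met’s published number.

```python
# Illustration of how the same deployment data yields different "false" rates
# depending on the denominator and the counting stage. Faces-processed and
# alert totals follow the Met's headline reporting cited above; the false-alert
# count is an illustrative placeholder, not the Met's published figure.

faces_processed = 3_000_000   # faces scanned over the reporting period
system_alerts = 2_000         # alerts raised by the system before officer review
false_alerts = 10             # ILLUSTRATIVE ONLY: alerts judged incorrect after review

print(f"False alerts per face processed: {false_alerts / faces_processed:.6%}")
print(f"False alerts per system alert:   {false_alerts / system_alerts:.2%}")
# The first figure looks vanishingly small; the second is the error rate a
# person stopped after an alert would actually experience. Raising the match
# threshold shrinks both, at the cost of missing more genuinely wanted people.
```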

The more acute bias controversy in recent UK debate has centred on retrospective facial recognition rather than live deployments. NPL evaluations of a retrospective facial recognition algorithm used in the Police National Database context have reported stark differences in false positive identification rates by ethnicity and gender at certain threshold settings. In one NPL evaluation at a threshold of 0.8, reported false positive identification rates included 0.04% for White subjects compared with 4.0% for Asian subjects, 5.5% for Black subjects, and 9.9% for Black women. NPL also reported that false positive rates rose sharply when thresholds were lowered in its test setup.
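Scaled across a year of searches, disparities of that size compound quickly. The sketch below applies the reported rates to an assumed search volume purely for illustration; the volume figure is not drawn from NPL or Home Office data.

```python
# Illustrative scaling of the NPL-reported retrospective false positive
# identification rates (threshold 0.8) across a volume of searches. The
# searches-per-year figure is an assumption for illustration only.

fpir_by_group = {
    "White subjects": 0.0004,   # 0.04%
    "Asian subjects": 0.040,    # 4.0%
    "Black subjects": 0.055,    # 5.5%
    "Black women": 0.099,       # 9.9%
}

searches = 10_000  # assumed annual search volume, for illustration
for group, fpir in fpir_by_group.items():
    print(f"{group:<15}: ~{searches * fpir:,.0f} expected false identifications "
          f"per {searches:,} searches")
```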

Following publication of concerns about retrospective systems, the Information Commissioner’s Office publicly indicated it wanted urgent clarification after learning about what it described as “historical bias” concerns linked to retrospective facial recognition, and expressed disappointment that it had not been told earlier despite ongoing engagement.

The question for ministers now is whether a national scale-up can credibly emphasise “no statistically significant bias” while also expanding tools that, in some published testing, have shown substantial demographic disparities in false positive rates. Civil liberties groups and some technology experts argue that any national programme needs system-by-system publication of performance and demographic error metrics, rather than broad claims about “facial recognition” as a single capability.

The Home Office’s reform package also includes wider changes to how policing is organised and managed, which campaigners say could amplify the impact of technology choices. Government media releases on police reform have described moves towards greater centralisation of specialist capabilities and national procurement, and new performance levers such as response-time targets and tighter regimes for measuring force outcomes.

Supporters argue that a national approach can reduce inconsistency between forces and speed up adoption of tools that work. Critics counter that centralisation increases the “blast radius” of any flawed policy, weak safeguard, or biased system, because it allows tools and standards to be rolled out across the country at once.

The planned “Police.AI” roll-out also raises a separate set of concerns from facial recognition: the risk of automation errors, and the operational danger of staff over-trusting AI outputs.

The Home Office said the AI tools being rolled out would include “deepfake detection”, “instant transcription and translation”, “rapid CCTV and media analysis”, “smart audiovisual redaction”, “cutting-edge digital forensics”, robotic process automation, and “smarter control rooms” including AI-powered triage to filter non-policing demand.

Some of those tools are closer to conventional automation, while others can involve AI models that generate summaries or suggested interpretations of information. In policing, errors do not need to be frequent to be serious: a transcription error can change the meaning of a witness account; a flawed translation can distort intelligence; a mistaken image match can divert an investigation; and a summarisation tool can omit caveats that later become central in court.

Concerns over “hallucinations” — AI-generated content that is plausible but wrong — have sharpened after a recent incident in which a UK police chief apologised after incorrect AI-produced information contributed to a narrative presented to MPs, illustrating how quickly an AI error can move into official channels if verification is weak.

There is also evidence that speech-to-text systems can have unequal error rates. A widely cited peer-reviewed study of commercial automated speech recognition systems reported substantially higher word error rates for Black speakers than White speakers in its test corpus, raising questions about whether “instant transcription” tools could introduce new disparities if used at scale in call handling, interviews or intelligence logs.
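The study’s headline metric, word error rate, is a simple ratio: the number of word-level substitutions, deletions and insertions divided by the length of the reference transcript. A minimal sketch of that standard calculation (not the cited study’s own evaluation code) is below.

```python
# Minimal word error rate (WER) calculation, the metric used in studies of
# speech-to-text disparity: WER = (substitutions + deletions + insertions)
# divided by the number of words in the reference transcript. This is a
# generic sketch, not the cited study's own evaluation code.

def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard edit-distance dynamic programme over words.
    dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dist[i][0] = i
    for j in range(len(hyp) + 1):
        dist[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + cost) # substitution
    return dist[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("the caller said he left at nine",
                      "the caller said she left at night"))  # -> ~0.29
```

As the example suggests, even two misrecognised words in a short call-handling transcript can change who did what and when.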

The Home Office said “Police.AI will also ensure police forces are using AI responsibly, backed by a robust evidence base and with governance arrangements in place that ensure humans remain accountable”. Specialists in police technology governance argue that “human in the loop” safeguards only work if organisations track how often staff override AI outputs, train staff in automation bias, require source-linked evidence for any AI-generated summaries, and log decisions in a way that is auditable.
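Those requirements translate into fairly ordinary record-keeping. The sketch below illustrates the kind of minimum audit record specialists describe, with fields for override tracking and source-linked evidence; it is an illustration of the governance idea, not a Home Office or “Police.AI” specification.

```python
# Illustrative audit record for an AI-assisted decision, capturing the fields
# needed to measure how often staff override the tool and to trace any output
# back to its sources. A sketch of the governance idea discussed above, not a
# Home Office or "Police.AI" specification.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    tool_name: str                 # e.g. transcription, redaction, CCTV triage
    model_version: str             # exact version, so re-testing after updates is possible
    ai_output_summary: str         # what the tool produced
    source_references: list[str]   # links to the underlying evidence items
    reviewing_officer: str         # who checked the output
    officer_overrode_ai: bool      # needed to compute override rates over time
    rationale: str                 # why the output was accepted or rejected
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```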

The Home Office release included supportive quotes from external organisations. Josie Allen, Head of Policy & Partnerships at Missing People, said: “Missing People believe there could be real potential in the government’s push to use live facial recognition more regularly to help find some of the most vulnerable or high-risk missing children and adults. However, we also believe more scrutiny is necessary to ensure the technology is used safely and proportionately. We welcome the government’s efforts to ensure transparency, and the consultation that will allow people who have been missing themselves, and families missing a loved one, to share their views on how to use the technology safely.”

Ryan Wain, Senior Director of Politics & Policy at the Tony Blair Institute, said: “It’s indefensible that people have been denied proven crime-fighting technology because of fragmented police structures. Rolling out live facial recognition nationally is long overdue. Criminals don’t respect force boundaries. Technology that catches them shouldn’t either. With proper safeguards, this is a straightforward boost to public safety. The danger now is delay. Incrementalism is the enemy of safety.”

For minority communities and policing legitimacy, experts say the impact of facial recognition is not only about algorithmic bias but also about deployment patterns and downstream policing powers. Even small error rates can translate into unequal real-world harm if stops and police-public encounters are already disproportionately experienced by certain groups. Questions also remain about watchlist composition: if watchlists draw on historic police data such as custody images and wanted lists, critics argue that existing disparities in the criminal justice system can be reflected in who is more likely to be flagged, even if an algorithm performs evenly across demographics in a laboratory.

The Home Office has not yet published, within the release, the operational detail that will shape whether the programme is experienced as targeted enforcement or as suspicionless surveillance: who can be placed on watchlists, for what reasons, for how long, with what independent approval, and what people can do to challenge inclusion or seek redress after a wrongful stop.

It has also not, in the release, provided technical detail that researchers say is central to public accountability, including the thresholds forces will be instructed to use, how often those settings can be changed, what independent re-testing will be required after vendor updates, and whether demographic error metrics will be published for each system and each use case, including retrospective searches.

The government’s proposal to legislate for a new framework indicates that those questions are now likely to move from guidance and policy into a more formal debate in Parliament. For police leaders, the promise is that clearer rules could reduce legal risk and increase confidence in using tools at scale. For privacy and human rights campaigners, the test will be whether the framework sets hard limits, transparent reporting duties, and meaningful independent oversight — or simply provides a statutory stamp for wider deployment.

The Home Office was asked for clarification on how “50 vans available to every police force” will be delivered in practice, whether that figure refers to a national total or a per-force availability model, and whether it will publish line-by-line detail for the £141 million investment, including spending on procurement, training, maintenance, independent evaluation and governance.

The Home Office was also asked whether it will publish performance data broken down by force for deployments, including numbers of faces scanned, alerts generated, arrests, charges and “no further action” outcomes, and whether it will publish demographic breakdowns of error rates for each facial recognition system used nationally, including retrospective searches.

The government said the reforms were part of what it described as the largest overhaul of policing since the service was founded, setting out a shift “from local to national” in a new model for policing.