AI facial recognition systems misidentify Black faces at rates 10 to 100 times higher than white faces, according to a 2019 government study of 189 algorithms. Despite this documented bias, police departments across the United States continue to treat these systems as near-certain identifiers—a pattern that has now produced at least eight verified wrongful arrests, with victims spending days, weeks, or in one recent case, six months in jail for crimes they did not commit.
The Pattern That Won’t Stop Repeating
In July 2025, Angela Lipps—a 50-year-old grandmother and mother of three from central Tennessee—was at home babysitting four of her grandchildren when U.S. Marshals arrived with guns drawn and arrested her. Fargo, North Dakota police had used facial recognition software on surveillance footage from a series of bank fraud incidents. The software returned Lipps as a match.
She had never been to North Dakota.
Lipps spent 108 days in a Tennessee jail before Fargo police even came to interview her. She wasn’t transported to North Dakota until October 30, 2025. On December 19, 2025, police reviewed her bank records and confirmed she had been over 1,200 miles away in Tennessee during every single transaction. The case against her collapsed.[1]
She lost her home, her car, and her dog.
This is not an isolated story. It is a pattern with documented victims, documented bias data, and documented institutional failure to act on either.
Who Gets Wrongfully Arrested?
The public record of facial recognition wrongful arrests in the United States is startling in its consistency. Consider the documented cases:
| Victim | Date | System Used | Time Jailed | Outcome |
|---|---|---|---|---|
| Robert Williams | January 2020 | DataWorks Plus (Detroit PD) | ~30 hours | $300K settlement + policy reform |
| Nijeer Parks | January 2019 | Clearview AI (NJ police) | 10 days | Charges dismissed; civil suit filed |
| Michael Oliver | July 2019 | DataWorks Plus (Detroit PD) | Multiple days | Case dismissed; $12M suit filed |
| Porcha Woodruff | February 2023 | Detroit PD system | Hours (hospitalized) | Charges dropped; lawsuit filed |
| Randal Quran Reid | 2022 | Clearview AI (Louisiana) | ~7 days | Cleared; never been to Louisiana |
| Harvey Murphy Jr. | 2022–2024 | EssilorLuxottica system | ~14 days | $10M suit pending |
| Angela Lipps | July–December 2025 | Undisclosed (Fargo PD) | ~160 days | Charges dropped; lost home and car |
Nearly every person on this list is Black. That is not a coincidence. It is a direct consequence of how these systems were built.
The Technical Failure Is Documented, Not Disputed
The racial bias in facial recognition systems traces to a fundamental data problem: training sets are overwhelmingly composed of white faces. “Labeled Faces in the Wild,” a widely used open-source dataset, is 83.5% white.[2] Algorithms trained primarily on the faces of white men between 18 and 35 learn to recognize those faces well, and perform substantially worse on faces outside that demographic.
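The statistical mechanism is easy to demonstrate in miniature. The sketch below uses synthetic features, hypothetical group labels, and a generic off-the-shelf classifier (not any vendor’s face recognition pipeline) to show how a model trained on a 95/5 demographic split fits the majority group well and fails the underrepresented group:

```python
# A deliberately simplified illustration of the mechanism: synthetic
# features, hypothetical group labels, and a generic classifier, not
# any vendor's face recognition pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Each group's faces occupy a shifted region of feature space,
    and the correct decision rule shifts along with them."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 4))
    y = (X[:, 0] > shift).astype(int)
    return X, y

# Training set skewed roughly 95/5, echoing imbalances like the 83.5%
# figure for "Labeled Faces in the Wild" cited above.
X_maj, y_maj = make_group(950, shift=0.0)
X_min, y_min = make_group(50, shift=3.0)
model = LogisticRegression(max_iter=1000).fit(
    np.vstack([X_maj, X_min]), np.concatenate([y_maj, y_min])
)

# Evaluate on fresh, equal-sized samples from each group: the single
# decision boundary the model learned fits the majority group and
# systematically misfires on the underrepresented one.
for name, shift in [("majority group", 0.0), ("minority group", 3.0)]:
    X_test, y_test = make_group(2000, shift)
    print(f"{name} error rate: {1 - model.score(X_test, y_test):.1%}")
```

On this synthetic data the majority-group error stays low while the minority-group error climbs toward coin-flipping, purely because of who was underrepresented at training time.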
In 2018, MIT Media Lab researcher Joy Buolamwini and AI ethics researcher Timnit Gebru published the “Gender Shades” study, testing commercial gender classification systems from IBM, Microsoft, and Face++. The results were stark:
- Light-skinned men: 0.8% error rate
- Darker-skinned women: up to 47% error rate[3]
Buolamwini’s own experience as a Black researcher testing these systems illustrates the problem concretely: “In my case, a white mask was a closer fit to what the system had learned was a face than my actual human face.”
The U.S. government confirmed these findings at scale in 2019. The National Institute of Standards and Technology (NIST) ran the Face Recognition Vendor Test (FRVT), evaluating 189 algorithms from developers worldwide. For one-to-one matching, NIST found that false positive rates varied “by factors of 10 to beyond 100 times” depending on race—with African American and Asian faces showing substantially higher false positive rates than Caucasian faces.[4] Women of color were the demographic least accurately identified across the full suite of 189 algorithms tested.
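To make the reported numbers concrete, here is a minimal sketch of the quantity NIST compared for one-to-one matching: the false match rate per demographic group at a single fixed decision threshold. The scores, group names, and threshold below are hypothetical; this illustrates the metric, not NIST’s actual test harness.

```python
# A minimal sketch of a per-demographic false match rate comparison.
# All similarity scores, group labels, and the threshold are hypothetical.
from collections import defaultdict

def false_match_rates(impostor_pairs, threshold):
    """impostor_pairs: (similarity_score, group) for image pairs known to
    show DIFFERENT people. Scoring at or above the threshold is a false
    match -- the error behind every wrongful arrest described above."""
    trials, errors = defaultdict(int), defaultdict(int)
    for score, group in impostor_pairs:
        trials[group] += 1
        errors[group] += score >= threshold
    return {g: errors[g] / trials[g] for g in trials}

# One global threshold applied to groups whose impostor scores are
# distributed differently yields very different error rates -- the
# shape of the 10x-to-100x differentials NIST reported.
pairs = [(0.40, "group_a"), (0.35, "group_a"), (0.88, "group_a"),
         (0.30, "group_a"), (0.91, "group_b"), (0.87, "group_b"),
         (0.93, "group_b"), (0.52, "group_b")]
print(false_match_rates(pairs, threshold=0.85))
# {'group_a': 0.25, 'group_b': 0.75}: a 3x disparity even in this toy
```

The point visible even in this toy: a single global threshold, tuned to whatever population the developer optimized for, produces very different error rates for groups whose impostor scores are distributed differently.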
This research is not obscure. NIST published it publicly. Law enforcement agencies have had access to it for years.
Why Courts and Police Keep Trusting Biased Systems
The technical evidence against uncritical use of facial recognition is substantial and publicly available. So why does the pattern of wrongful arrests continue?
Facial recognition results are treated as probable cause, not as leads. In the Robert Williams case, the detective confronted Williams with a still from surveillance footage and, when Williams denied being the person in the image, replied: “The computer says it’s you.” That statement reveals an institutional mindset—the algorithm’s output is treated as an authoritative determination, not a starting point for investigation.[5]
Disclosure is rare. A Washington Post investigation published in October 2024 found that police seldom disclose their use of facial recognition technology in criminal cases, even when that technology generated the lead behind the arrest.[6] Defense attorneys frequently don’t know a facial recognition match was used, making it difficult to challenge the evidence or investigate the algorithm’s reliability.
No federal guardrails exist. As of early 2026, the United States has no federal law governing police use of facial recognition. Regulation has fallen to individual states and, in some cases, individual police departments—a patchwork that creates enormous variation in accountability.
Prosecutors and courts lack technical literacy. Facial recognition results arrive wrapped in the credibility of technology. Few prosecutors and fewer judges have training in evaluating the reliability of probabilistic algorithmic outputs. The systems appear objective in a way that eyewitness testimony does not, despite evidence that eyewitness testimony frequently outperforms facial recognition on low-quality images.
What Legal Accountability Looks Like
The Robert Williams case produced the clearest accountability outcome so far. In June 2024, Detroit agreed to a $300,000 settlement—plus attorneys’ fees—and implemented what the ACLU described as the nation’s strongest police department facial recognition policies at the time.[7] Under those policies:
- Police cannot arrest someone based solely on a facial recognition result
- Police cannot conduct a photo lineup based on a facial recognition lead without independent corroborating evidence
- Officers must receive training on the technology’s documented risks, including higher error rates for people of color
- An audit was ordered of every case since 2017 in which facial recognition was used to obtain an arrest warrant
The Williams settlement matters because it demonstrates that legal accountability can produce concrete policy reform—but it required a years-long lawsuit to achieve what should have been institutional policy before the first deployment.
State-level legislation is advancing, but unevenly. By the end of 2024, 15 states had enacted laws limiting police use of facial recognition.[8] Maryland’s 2024 law is the most comprehensive, explicitly prohibiting facial recognition results from serving as the sole basis for probable cause and requiring independent corroborating evidence. Montana and Utah were the first to require warrants for police use of facial recognition.
The Private Sector Gap
Police departments are not the only actors creating risk. Harvey Murphy Jr.’s case illustrates what happens when private companies deploy facial recognition without accountability structures.
Murphy, a 61-year-old Black man from Sacramento, was arrested in 2022 for an armed robbery at a Houston Sunglass Hut store operated by EssilorLuxottica, the world’s largest eyewear company. The company’s facial recognition system matched him to low-quality surveillance footage. Murphy was in Sacramento at the time.
He spent nearly two weeks in jail. During that detention, he alleges he was attacked and raped by three inmates, leaving him with permanent injuries. He is now suing EssilorLuxottica and Macy’s for $10 million, alleging negligent use of facial recognition software.[9]
Private retail deployments of facial recognition operate with even less oversight than law enforcement. There is no equivalent of the Williams settlement requiring Macy’s or EssilorLuxottica to reform how they use these systems. The Murphy lawsuit, if successful, could establish precedent—but civil litigation is an inadequate substitute for proactive regulation.
Clearview AI presents a different dimension of the problem. The company built a database of over 30 billion facial images scraped from the internet without consent and marketed its tool to law enforcement, claiming accuracy rates between 98.6% and 100% based on company-affiliated tests. Multiple wrongful arrests have been traced directly to Clearview results, including Nijeer Parks in New Jersey and Randal Quran Reid in Louisiana.[10]
In 2022, the ACLU secured a settlement banning Clearview from selling its faceprint database to most private businesses nationwide. A broader biometric privacy settlement reached $51.75 million in 2025. Yet in that same year, Clearview signed a $9.2 million contract with U.S. Immigration and Customs Enforcement.
What Needs to Change
The wrongful arrest crisis in facial recognition is not primarily a technology problem. The technology’s limitations are documented. The problem is institutional: a failure to treat documented bias as a disqualifying concern, a failure to require independent evidence before acting on algorithmic outputs, and a failure to disclose the technology’s use in ways that allow meaningful legal challenge.
The path forward involves parallel interventions:
Standards, not bans. Blanket prohibitions on facial recognition are one option, but the more durable approach is mandatory evidentiary standards: facial recognition results are investigative leads, never probable cause on their own. This already reflects the logic of the Williams settlement and Maryland law.
Mandatory disclosure. Defendants and defense attorneys must be informed when facial recognition played any role in identification. Without disclosure, wrongful convictions based on faulty matches cannot be challenged.
Independent auditing. Systems deployed by law enforcement should be subject to ongoing accuracy audits—using demographically representative samples, on surveillance-quality footage, with results published (a minimal version of this check is sketched below). Not vendor accuracy claims. Not controlled lab tests.
Private sector accountability. Retail and private deployments of facial recognition require the same minimum standards as law enforcement use, particularly when those systems produce information that ends up triggering criminal investigations.
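As a concrete illustration of the auditing intervention above, here is a minimal sketch, assuming per-group false match rates have already been measured on demographically representative, surveillance-quality footage. The group names, measured rates, and the policy bound are all hypothetical:

```python
# A minimal sketch of a deployment-gating disparity check. The measured
# rates and the max_ratio policy bound below are hypothetical.
def audit_disparity(fmr_by_group, max_ratio=3.0):
    """Flag a system whose worst-group false match rate exceeds its
    best-group rate by more than the policy bound."""
    worst = max(fmr_by_group, key=fmr_by_group.get)
    best = min(fmr_by_group, key=fmr_by_group.get)
    ratio = fmr_by_group[worst] / fmr_by_group[best]
    return ratio <= max_ratio, ratio, worst

# Hypothetical audit results from representative test footage.
measured = {"group_a": 0.0004, "group_b": 0.031, "group_c": 0.0021}
passed, ratio, worst = audit_disparity(measured)
verdict = "PASS" if passed else "FAIL: suspend use pending review"
print(f"disparity {ratio:.0f}x (worst group: {worst}) -> {verdict}")
```

The hard part of such an audit is the measurement itself, not the arithmetic; the value of codifying the check is that a disparity above the bound becomes a deployment-blocking finding rather than a footnote in a vendor whitepaper.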
Angela Lipps spent six months in jail before the most basic investigative step was taken. The bank records that proved her innocence were available from the start. They weren’t checked because the algorithm said it was her—and that was enough.
That institutional logic, not the algorithm itself, is what keeps producing wrongful arrests.
Frequently Asked Questions
Q: How common are wrongful arrests from facial recognition? A: At least eight wrongful arrests directly attributable to facial recognition errors have been publicly documented in the United States since 2019. Because disclosure is rarely required, the actual number is likely higher—a 2024 Washington Post investigation found police routinely do not reveal when facial recognition played a role in an arrest.
Q: Why does facial recognition perform worse on Black faces? A: Facial recognition algorithms learn from training data. Widely used training datasets are predominantly white—“Labeled Faces in the Wild,” for example, is 83.5% white. Algorithms trained on these datasets develop better pattern recognition for white faces and generate more false positives for underrepresented demographics. The 2019 NIST government study confirmed false positive rates vary by factors of 10 to 100 depending on the algorithm and demographic group.
Q: Is facial recognition evidence admissible in court? A: Standards vary by jurisdiction. Facial recognition results have been used to establish probable cause for arrest warrants without independent corroboration, despite no federal rules governing admissibility. Maryland now explicitly bars facial recognition as the sole basis for probable cause. Nationwide, there is no consistent legal framework governing what evidentiary weight these results should carry.
Q: What is the strongest existing legal protection against facial recognition misuse in policing? A: The Detroit Police Department’s post-settlement policy (from the Robert Williams case, June 2024) and Maryland’s 2024 state law represent the current high-water marks. Both prohibit arrests based solely on facial recognition results and require independent corroborating evidence. Neither is a federal standard.
Q: What should individuals do if they believe they were wrongfully targeted by facial recognition? A: Contact the ACLU or a civil rights attorney immediately. Document all details of the arrest. Request the full evidentiary basis for any warrant or identification—specifically whether facial recognition was used. The Robert Williams case established that challenging the technology’s use is legally viable and can produce both exoneration and civil remedies.
Footnotes
1. InForum and Grand Forks Herald reporting on the Angela Lipps case, March 2026. https://www.inforum.com/news/fargo/ai-error-jails-innocent-grandmother-for-months-in-fargo-case
2. Diversity analysis of the “Labeled Faces in the Wild” dataset, cited in the Harvard Journal of Law & Technology’s analysis of racial bias in facial recognition. https://jolt.law.harvard.edu/digest/why-racial-bias-is-prevalent-in-facial-recognition-technology
3. Buolamwini, J. & Gebru, T. “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification.” Proceedings of Machine Learning Research, 2018. https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf
4. NIST Face Recognition Vendor Test demographics report (NISTIR 8280), National Institute of Standards and Technology, December 2019. https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf
5. NPR reporting on the Robert Williams case, June 2020. https://www.npr.org/2020/06/24/882683463/the-computer-got-it-wrong-how-facial-recognition-led-to-a-false-arrest-in-michig
6. Washington Post investigation into undisclosed police use of facial recognition, October 6, 2024. https://www.washingtonpost.com/business/2024/10/06/police-facial-recognition-secret-false-arrest/
7. ACLU press release on the Williams v. City of Detroit settlement, June 28, 2024. https://www.aclu.org/press-releases/civil-rights-advocates-achieve-the-nations-strongest-police-department-policy-on-facial-recognition-technology
8. Stateline reporting on state-level facial recognition legislation, February 2025. https://stateline.org/2025/02/04/facial-recognition-in-policing-is-getting-state-by-state-guardrails/
9. NBC News and Washington Post reporting on the Harvey Murphy Jr. case, January 2024. https://www.nbcnews.com/news/us-news/man-says-ai-facial-recognition-software-falsely-idd-robbing-sunglass-h-rcna135627
10. ACLU v. Clearview AI case documentation. https://www.aclu.org/cases/aclu-v-clearview-ai