Overview
Clearview AI is a facial recognition technology company founded in 2017 by Hoan Ton-That, an Australian-born programmer, and Richard Schwartz, a former aide to New York Mayor Rudy Giuliani. The company operates the world's largest known facial recognition database, built by scraping billions of photographs from social media platforms, news sites, mugshot databases, and other publicly accessible internet sources without the knowledge or consent of the individuals depicted. Headquartered in New York City, Clearview AI has become the most prominent example of how facial recognition technology threatens fundamental privacy and civil liberties.
Founding and Early Growth
Clearview AI was developed with early backing from Peter Thiel, who provided initial seed funding. The company grew in secrecy, with Ton-That and Schwartz building relationships with individual law enforcement officers through personal outreach and free trial accounts. Early legal counsel was provided by Tor Ekeland, a New York attorney known for defending hackers, though he later distanced himself from the company.
The company operated almost entirely in the dark until January 2020, when New York Times journalist Kashmir Hill published a landmark investigation exposing Clearview's existence, its massive scraped database, and its distribution to hundreds of law enforcement agencies. The article, titled "The Secretive Company That Might End Privacy as We Know It", triggered a cascade of legal challenges, regulatory actions, and bans that continue to shape facial recognition policy worldwide.
How Clearview Works
Clearview's system allows users to upload a photograph of any person and instantly receive matching results from its database, along with links to the original web sources where those images appeared. This effectively ends anonymity in public spaces for anyone whose photograph has ever appeared online. The system functions as a reverse search engine for faces, connecting a photograph to a person's online identity, social media profiles, and other personal information.
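The "reverse search engine for faces" described above can be sketched in miniature: reduce each scraped photo to a fixed-length numeric faceprint, store it alongside its source URL, and answer queries by vector similarity. Everything below — the three-number "faceprints", the class and function names, the example URLs, the 0.9 threshold — is a hypothetical toy for illustration, not Clearview's actual implementation:

```python
import math

def cosine_similarity(a, b):
    # Similarity of two faceprint vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

class FaceIndex:
    """Toy index mapping faceprint vectors to the URLs they were scraped from."""

    def __init__(self):
        self.entries = []  # list of (faceprint, source_url)

    def add(self, faceprint, source_url):
        self.entries.append((faceprint, source_url))

    def search(self, query, top_k=3, threshold=0.9):
        # Score every stored faceprint against the query, return the best
        # matches above the threshold, most similar first.
        scored = [(cosine_similarity(query, fp), url) for fp, url in self.entries]
        scored.sort(reverse=True)
        return [(s, u) for s, u in scored[:top_k] if s >= threshold]

index = FaceIndex()
index.add([0.9, 0.1, 0.3], "https://example.com/profile/alice")
index.add([0.1, 0.8, 0.2], "https://example.com/news/bob")

# A query photo of the same face yields a nearby vector and recovers the URL.
matches = index.search([0.88, 0.12, 0.31])
```

A production system would replace the toy vectors with embeddings from a trained neural network and the linear scan with an approximate nearest-neighbor index, but the privacy implication is the same: any photo becomes a key that unlocks every page the face has ever appeared on.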
Data Collection Practices
Clearview AI's data collection practices are unprecedented in their scope and disregard for consent.
Mass Web Scraping
Clearview has scraped billions of photographs from virtually every major online platform:
- Facebook and Instagram (Meta's platforms)
- Twitter/X
- YouTube (Google)
- LinkedIn (Microsoft)
- Venmo (payment app with public transaction photos)
- Employment websites and professional directories
- News sites and media archives
- Mugshot databases and public records
All scraping was done in explicit violation of these platforms' terms of service and without any notice to or consent from the individuals depicted. Meta, Google (including YouTube), Twitter, and LinkedIn all sent cease-and-desist letters demanding Clearview stop scraping their platforms.
Database Scale
The database has grown dramatically since exposure:
- 2020 (at time of NYT exposure): Approximately 3 billion images
- 2022: Over 20 billion images
- 2024: Over 40 billion images
- Stated goal: 100 billion images (approximately 12 images for every person on Earth)
The database includes photographs of children, deceased individuals, and people who have never consented to any form of facial recognition processing.
Biometric Extraction
Each scraped photograph is processed to generate a unique mathematical representation (faceprint) of the depicted individual's facial geometry. These faceprints constitute biometric data under multiple privacy laws including BIPA (Illinois), GDPR (EU), and PIPEDA (Canada). Biometric data is uniquely sensitive because, unlike passwords, facial geometry cannot be changed if compromised.
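As a rough illustration of what a faceprint is — and why its compromise is irreversible — the matching step can be sketched as a distance test between fixed-length vectors: two photos are judged to show the same person when their faceprints lie within a threshold. The vectors, threshold, and function names below are invented toy values, not any real model's output:

```python
import math

def euclidean_distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def same_person(faceprint_a, faceprint_b, threshold=0.6):
    # 1:1 verification: do two faceprints lie close enough to match?
    return euclidean_distance(faceprint_a, faceprint_b) < threshold

# Two photos of the same face yield nearby vectors...
photo_1 = [0.42, -0.17, 0.88]
photo_2 = [0.40, -0.15, 0.90]
# ...while a different face yields a distant one.
photo_3 = [-0.70, 0.55, -0.10]
```

The vector is derived from stable facial geometry, which is the crux of the privacy laws cited above: a leaked password can be rotated, but a leaked faceprint describes a face its owner cannot change.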
Ongoing Collection
Clearview continuously scrapes new images, keeping its database current and expanding. Even individuals who delete their social media accounts remain in Clearview's database from previously scraped images. There is no effective opt-out mechanism for individuals who wish to be removed.
Known Clients & Government Contracts
Clearview AI's client base is predominantly law enforcement, though the company initially marketed broadly before regulatory pressure curtailed commercial sales.
U.S. Federal Law Enforcement
- FBI: Used Clearview for criminal investigations and national security cases
- Department of Homeland Security: Contracts for immigration enforcement and border security
- ICE / CBP: Used for identifying undocumented immigrants and at border crossings
- U.S. Secret Service, DEA, and other federal agencies: Various investigative uses
U.S. State and Local Law Enforcement
Over 600 law enforcement agencies across the United States have used Clearview AI. Many departments began through free trial accounts distributed directly to individual officers, bypassing official procurement processes and departmental oversight. Early adopters included:
- Madison County, Alabama Sheriff's Office: One of the earliest documented users, profiled in the NYT investigation
- NYPD: Used despite the department's official claim of no formal contract
- Chicago Police Department: Ran thousands of searches
- Police departments from major metro areas to small towns across virtually every state
International Deployment
- Australia: Used by Australian Federal Police before regulatory intervention
- United Kingdom: Used by police forces before ICO enforcement
- Canada: Used by RCMP and local police before Privacy Commissioner findings
- Ukraine: In 2022, Clearview offered its technology to the Ukrainian government during Russia's invasion for identifying deceased soldiers, reuniting refugees, and identifying infiltrators. The offer was presented as humanitarian but raised concerns about setting a precedent for wartime use of facial recognition.
Pre-Controversy Private Sector Use
Before regulatory pressure restricted sales, Clearview marketed to private companies:
- Macy's: Retail loss prevention
- Walmart: Security and shoplifting identification
- Bank of America and other financial institutions: Identity verification
- Madison Square Garden: Used facial recognition to identify and ban attorneys involved in litigation against MSG Entertainment
The ACLU settlement subsequently banned most private-sector sales within the United States.
Privacy Incidents & Litigation
Clearview AI has faced a historically unprecedented wave of regulatory enforcement spanning multiple continents.
New York Times Exposure (January 2020)
Kashmir Hill's investigation revealed Clearview AI's existence to the public for the first time, including:
- The database of billions of scraped images
- Distribution to hundreds of law enforcement agencies
- The founders' identities and Peter Thiel's backing
- The company's strategy of operating in total secrecy
Data Breach (February 2020)
Clearview's entire client list was stolen in a security breach just weeks after public exposure. The breach revealed that over 2,200 law enforcement agencies, government organizations, and private companies had been given access. The stolen data also included the number of searches each client had conducted, exposing the scale of facial recognition surveillance.
ACLU Lawsuit and Landmark Settlement (2020-2022)
The ACLU filed suit under the Illinois Biometric Information Privacy Act (BIPA), resulting in a landmark settlement that:
- Permanently banned Clearview from selling its database to private companies in the United States
- Restricted government sales (with exceptions for federal law enforcement)
- Required Clearview to maintain an opt-out mechanism for Illinois residents
- Set a nationwide precedent for biometric privacy enforcement
Canada Privacy Commissioner (2021)
The Office of the Privacy Commissioner of Canada found that Clearview AI:
- Violated PIPEDA (Personal Information Protection and Electronic Documents Act)
- Collected and used personal information without consent
- Collected information for inappropriate purposes
The Commissioner ordered Clearview to cease collecting Canadians' facial images and destroy existing Canadian data. Clearview withdrew from the Canadian market but disputed whether the order was binding.
Australia OAIC Order (2021)
The Australian Information Commissioner found Clearview AI violated the Privacy Act by:
- Collecting sensitive biometric information without consent
- Not taking reasonable steps to notify individuals of data collection
- Not adequately ensuring data quality
The Commissioner ordered Clearview to cease collecting Australians' data and destroy existing records within 90 days.
UK ICO Fine, GBP 7.55 Million (2022)
The UK Information Commissioner's Office fined Clearview AI GBP 7.55 million for:
- Processing UK residents' data without a lawful basis
- Failing to have a process to stop data being retained indefinitely
- Failing to meet data protection standards for biometric data
- Collecting data without informing individuals
The ICO ordered Clearview to stop processing UK residents' data and delete existing records.
France CNIL Fine, EUR 20 Million (2022)
France's data protection authority fined Clearview AI EUR 20 million for:
- Unlawful collection of biometric data without consent
- Failure to respect individuals' data access rights
- Violation of GDPR data minimization principles
CNIL ordered Clearview to cease operations in France and delete French residents' data within two months.
Italy Garante Fine, EUR 20 Million (2022)
Italy's data protection authority imposed a EUR 20 million fine for GDPR violations related to biometric data processing without legal basis, consent, or adequate transparency.
Greece DPA Fine, EUR 20 Million (2022)
The Greek data protection authority fined Clearview EUR 20 million and ordered data deletion, joining the coordinated European regulatory response.
Sweden IMY Decision (2021)
Sweden's Integritetsskyddsmyndigheten (IMY) found that the Swedish Police Authority's use of Clearview's facial recognition violated data protection rules, fined the police authority, and ordered corrective measures.
Vermont AG Lawsuit (2020)
Vermont became the first U.S. state to sue Clearview AI, alleging violations of the state's Consumer Protection Act and data broker registration requirements. The lawsuit was significant as a test case for state-level enforcement against facial recognition companies.
Multiple State Investigations
State attorneys general in New York, New Jersey, Virginia, and California have investigated or taken action against Clearview AI's practices, contributing to a patchwork of enforcement that has constrained the company's domestic operations.
Threat Score Analysis
Clearview AI receives a composite threat score of 90/100, reflecting its fundamental threat to public anonymity and privacy:
- Data Collection (95/100): Clearview's mass scraping of 40+ billion facial images without consent represents one of the most invasive data collection operations ever undertaken by a private company. The creation of biometric profiles for billions of people who never interacted with or consented to the company's activities is without precedent. The database includes children, the deceased, and individuals from countries where Clearview has been banned.
- Third-Party Sharing (90/100): Clearview provided facial recognition access to over 2,200 organizations before regulatory intervention. The distribution model of free trial accounts handed to individual officers enabled rapid, uncontrolled proliferation with minimal oversight. Officers could run searches without departmental approval, warrants, or audit trails.
- Breach History (80/100): The 2020 breach of Clearview's entire client list demonstrated significant security failures. Centralizing 40+ billion biometric records creates inherent, catastrophic risk: unlike passwords, faces cannot be changed if the database is compromised.
- Government Contracts (85/100): Over 600 law enforcement agencies used Clearview for real-time identification. Documented deployment contexts include protests, immigration enforcement, routine policing, and wartime applications (Ukraine). The technology enables mass surveillance of public spaces by any officer with a smartphone.
- Transparency (15/100): Clearview operated in complete secrecy for years, scraped images in violation of every major platform's terms of service, and initially responded to exposure with denials and legal threats against journalists. The company's early attempts to hide its founders' identities and the nature of its technology represent a fundamental commitment to opacity.
Weighted calculation: (95 * 0.25) + (90 * 0.25) + (80 * 0.20) + (85 * 0.15) + (15 * 0.15) = 23.75 + 22.5 + 16 + 12.75 + 2.25 = 77.25, adjusted to 90 due to the unprecedented nature of building biometric profiles of billions of non-consenting individuals.
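For readers who want to reproduce the arithmetic, the weighted calculation above is a simple sum of score-times-weight products. The values are the document's own; the final adjustment to 90 is an editorial override, not a computed result:

```python
# Category scores and weights as stated in the threat score analysis.
scores = {
    "data_collection": (95, 0.25),
    "third_party_sharing": (90, 0.25),
    "breach_history": (80, 0.20),
    "government_contracts": (85, 0.15),
    "transparency": (15, 0.15),
}

# Weighted average: 23.75 + 22.5 + 16 + 12.75 + 2.25 = 77.25
weighted = sum(score * weight for score, weight in scores.values())
```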
Transparency & Accountability
Clearview AI's transparency and accountability record is abysmal. The company operated for years without public knowledge, built its database by violating the terms of service of every major social media platform, and initially responded to exposure with denials and legal threats against journalists and researchers.
Pattern of Concealment
Early in the company's history, Ton-That and Schwartz took deliberate steps to conceal both the company's existence and their own involvement. The company used shell entities and avoided public-facing marketing. When Kashmir Hill began investigating, Clearview attempted to identify her sources within law enforcement.
Regulatory Non-Compliance
Despite cumulative fines exceeding EUR 60 million from European regulators alone, and deletion orders from data protection authorities in the UK, France, Italy, Greece, Australia, and Canada, Clearview AI has largely contested or ignored these orders. The company has argued:
- That its scraping of publicly available images is protected by the First Amendment
- That foreign data protection authorities lack jurisdiction over a U.S. company
- That facial recognition is equivalent to Google's indexing of web content
Courts and regulators have largely rejected these arguments.
Pivot to Law-Enforcement-Only Messaging
Following the ACLU settlement and regulatory pressure, Clearview pivoted from broad commercial marketing to positioning itself exclusively as a law enforcement tool. The messaging shift was strategic rather than principled: the company marketed to private companies until forced to stop, and it continues to seek new markets internationally.
Fundamental Unresolved Issue
The core problem remains: 40+ billion biometric records exist in a private company's database without meaningful consent or oversight. No regulatory action has successfully compelled Clearview to delete its entire database. The technology cannot be uninvented, and the precedent of mass biometric surveillance without consent has been established. Clearview's persistence demonstrates the limitations of existing privacy enforcement mechanisms when confronted with a determined bad actor operating across jurisdictions.