The UK’s data watchdog has fined a facial recognition company £7.5m for collecting images of people from social media platforms and the web to add to a global database.
The Information Commissioner’s Office (ICO) also ordered US-based Clearview AI to delete the data of UK residents from its systems. Clearview AI has collected more than 20bn images of people’s faces from Facebook, other social media companies and from scouring the web.
John Edwards, the UK information commissioner, said Clearview’s business model was unacceptable. “Clearview AI Inc has collected multiple images of people all over the world, including in the UK, from a variety of websites and social media platforms, creating a database with more than 20bn images,” he said.
“The company not only enables identification of those people, but effectively monitors their behaviour and offers it as a commercial service. That is unacceptable. That is why we have acted to protect people in the UK by both fining the company and issuing an enforcement notice.”
The ICO, which conducted the investigation in tandem with its Australian counterpart, the Office of the Australian Information Commissioner, had announced a “provisional” intention to fine Clearview AI £17m last November.
The ICO said on Monday it had reduced the fine after taking into account a number of factors, including representations from the company. The final penalty of £7.5m is the third largest ever imposed by the ICO.
Announcing its provisional decision last year, the ICO said Clearview AI’s technology had been offered on a “free trial basis” to UK law enforcement agencies, although that trial has since been discontinued.
Clearview AI’s services are no longer offered in the UK – where previous clients included the Metropolitan police and the National Crime Agency – but the ICO said on Monday that the company still had customers abroad and was therefore still using the data of UK residents.
The ICO did not disclose the number of UK facial images held by Clearview AI, but said the company had harvested a “substantial” amount of data.
Clearview AI customers can upload an image of a person to the company’s app, which checks it against the database. The app then provides a list of images deemed similar to the uploaded photo, with links to the websites where those images came from.
The ICO said Clearview AI broke UK data protection laws in several ways, including: failing to use information of UK residents in a fair and transparent way; failing to have a lawful reason for collecting that information; and failing to have a process in place to stop the data being retained indefinitely.
It said Clearview AI asked for additional information from people, including photos, when they contacted the company to ask whether they were on the database. The ICO said this may have deterred people who wished to object to their inclusion in the database.
Last week, as part of a legal settlement in a lawsuit brought in Illinois under the state’s biometric privacy law, Clearview agreed to permanently stop selling access to its face database to private businesses or individuals across the US. The New York-based company will continue offering its services to federal agencies, such as US Immigration and Customs Enforcement, and to other law enforcement agencies and government contractors outside Illinois.
Hoan Ton-That, Clearview AI’s chief executive, said: “I am deeply disappointed that the UK Information Commissioner has misinterpreted my technology and intentions … I would welcome the opportunity to engage in conversation with leaders and lawmakers so the true value of this technology, which has proven so essential to law enforcement, can continue to make communities safe.”