On April 21, as allies of the Russian opposition leader Alexey Navalny warned that his life was in grave danger three weeks into a hunger strike, protesters rallied in towns and cities across Russia. By the end of the day, only a few dozen people had been arrested in Moscow, a far cry from the thousands of protesters who were rounded up at demonstrations in January.
But days later, the police began to show up at the homes of dozens of protesters, as well as journalists who had covered the event, which the Russian authorities had deemed illegal. They had been identified by Moscow’s network of 200,000 facial recognition cameras, thought to be one of the largest such networks in the world outside China. According to Amnesty International, some of the activists and journalists were detained immediately, while others were summoned to police stations.
“I consider it to be a form of psychological pressure,” said Oleg Ovcharenko, a journalist with the independent Echo of Moscow radio station, whose home police visited six days after the protest. Despite having worn a high-visibility vest and a press badge, Ovcharenko had to go to a police station to prove he had been covering the event for work.
Last month’s detentions marked the first time the surveillance network, which was rolled out last January, had been used at scale to target dozens of peaceful protesters and journalists in the wake of a demonstration. They came amid an accelerating crackdown on independent media, civil society, and social media in Russia, part of a bid to choke off opportunities for dissent ahead of parliamentary elections in September as Russian President Vladimir Putin’s approval ratings continue to fall.
As advanced surveillance technology increasingly becomes part of the authoritarian toolkit, the systems it powers are often quietly reliant on components made by Western technology companies. Although Moscow’s facial recognition cameras are not subject to export controls or sanctions, their use underscores the ethical and logistical challenges facing companies and governments that seek to prevent Western technology from enabling human rights abuses, as policy struggles to keep pace with technological development.
“On the one hand, Western politicians talk about human rights abuses in Russia and China, but Western companies are involved in building these systems, particularly in Russia,” said Leonid Kovachich, a Moscow-based China watcher and technology specialist.
The secret sauce of any facial recognition network is the algorithm that matches faces caught on camera against a vast trove of biometric data, and that is the kind of technology the Russian authorities like to keep a close hold on. “The Russian government thinks that algorithms are a very sensitive area. So to prevent foreign governments from building backdoors, they use domestic algorithms,” Kovachich said.
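In broad strokes, such systems reduce each face to a numerical embedding and then search for the closest match among enrolled faces. The sketch below illustrates that general technique only; everything in it is hypothetical, and it says nothing about NtechLab’s actual, non-public algorithm.

```python
# A minimal, hypothetical sketch of embedding-based face matching, the
# general technique behind networks like Moscow's. Illustration only;
# this is not NtechLab's algorithm, which has not been published.
import numpy as np

def normalize(v: np.ndarray) -> np.ndarray:
    # Scale embeddings to unit length so a dot product equals cosine similarity.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def match_face(query: np.ndarray, gallery: np.ndarray, threshold: float = 0.6):
    # Compare one query embedding against every enrolled embedding and
    # return the index of the best match, or None if nothing clears the threshold.
    sims = normalize(gallery) @ normalize(query)
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

# Toy example: one 512-dimensional query embedding searched against a
# gallery of 200,000 enrolled embeddings (an arbitrary figure chosen here
# for illustration).
rng = np.random.default_rng(0)
gallery = rng.standard_normal((200_000, 512))
query = gallery[42] + 0.05 * rng.standard_normal(512)  # noisy view of face 42
print(match_face(query, gallery))  # prints 42
```

In production systems, the linear scan would typically be replaced by an approximate nearest-neighbor index so that a single face can be matched against millions of candidates in real time.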
Moscow’s camera network is largely powered by an algorithm created by NtechLab. While it is Russian-made, NtechLab’s powerful algorithm carries the seal of approval of the U.S. intelligence community: In 2017, NtechLab won an open facial recognition challenge, which included a $25,000 cash prize, held by the Intelligence Advanced Research Projects Activity (IARPA), the intelligence community’s skunkworks.
While the competition does not imply any relationship between the entrants and U.S. intelligence, NtechLab CEO Mikhail Ivanov told Nextgov in 2017 that he hoped the publicity from the award would help boost the company’s ability to win further contracts, and the company’s first-place finish in the IARPA contest is featured prominently on its website.
The Chinese artificial intelligence start-up Yitu also won an award from IARPA that same year for its ability to match a face with a specific identity. In 2019, Yitu was blacklisted by the U.S. Commerce Department over national security concerns and has, according to the New York Times, been involved in developing tools to enable the Chinese government to identify and track members of the Uyghur ethnic group, who have been persecuted and incarcerated en masse in China’s Xinjiang region.
A spokesperson for the Office of the Director of National Intelligence said in a statement that such challenges are intended to help the agency better understand the state of play with regard to a specific area of technology. “They are open to participation by anyone who is willing to voluntarily submit their technology to be independently verified with rigorous testing and evaluation. Prize challenge awards do not signal government recommendation or endorsement of challenge submissions and their related activities or products,” the spokesperson said.
While the risks of engaging with Russian and Chinese entities on such sensitive technology may be evident now, the IARPA award is testament to how quickly the landscape has changed, said Eileen Donahoe, the executive director of the Global Digital Policy Incubator at Stanford University. “It was a different moment. There was still this sense that the best and the brightest around the world should be collaborating to advance science,” said Donahoe, who served as U.S. ambassador to the United Nations Human Rights Council during the Obama administration.
While Russia has increasingly sought to wean itself off Western technology, particularly when it comes to sensitive systems involved in national security, domestic companies aren’t yet able to fully substitute for their European and American counterparts. This is particularly true when it comes to the tools required to store and process the colossal quantities of data gathered by Moscow’s expanding dragnet of facial recognition cameras.
It is difficult to pinpoint exactly which Western products have been used and in what quantities. Two procurement contracts published in late 2019 for computing equipment to support Moscow’s video analytics system list multiple products from the California-based companies Intel and Nvidia in the technical requirements that accompany the contracts. Russian media reported that these contracts were to support the city’s network of facial recognition cameras.
Kovachich said the systems would struggle to operate without Western-made parts. “Maybe in the future, but as for now even China cannot fully substitute Western parts,” he said in a message to Foreign Policy.
A spokesperson for Intel said: “While we do not always know nor can we control what products our customers create or the applications end-users may develop, Intel does not support or tolerate our products being used to violate human rights. Where we become aware of a concern that Intel products are being used by a business partner in connection with abuses of human rights, we will restrict or cease business with the third party until and unless we have high confidence that Intel’s products are not being used to violate human rights.”
A spokesperson for Nvidia said the company does not provide applications for surveillance or facial recognition. “We require that our customers comply with all U.S. laws, including export control regulations and associated end use restrictions, and we don’t condone misuse of technology. Because we make general purpose platforms and sell to distributors, we can’t control where they may end up, or how they will be used,” the spokesperson said.
While neither Intel nor Nvidia directly contracted with Moscow, the deals underscore how even neutral technologies such as servers and graphics processing units can end up in systems used to abuse human rights several steps down the supply chain, creating headaches for companies and policymakers alike.
“These are definitely the sorts of uses that U.S. companies should not be abetting,” said Lindsay Gorman, the emerging technologies fellow at the Alliance for Securing Democracy. “Even for a company, it can be very difficult to figure out where exactly their technology ends up.”
With technology playing an ever larger role in both human rights abuses and strategic competition, academic institutions, government agencies, and multinational corporations face an increasingly fraught set of considerations as they seek to foster international collaboration while preventing sensitive technology from falling into the wrong hands.
This is a risk of which companies are increasingly aware. “You see an increasing marrying of strict compliance program consideration with broader reputational and future planning of risk. Those two things are being connected a lot more these days,” said Kerry Contini, a partner in the international trade practice at the law firm Baker McKenzie.
After trial runs during the 2018 FIFA World Cup in Russia and on anti-government protesters the following year, Moscow’s facial recognition cameras were fully deployed at the beginning of 2020 and were quickly put to use enforcing coronavirus-related quarantines. NtechLab is working with the Russian authorities to pilot similar networks in 10 other Russian cities, according to the Russian newspaper Kommersant.
Some experts are skeptical that the technology is a game-changer for the Russian government’s ability to quash dissent, noting that it already has a well-developed arsenal of repressive means.
“It probably makes it easier, especially in public squares in Moscow, to track who is gathering when,” said Steven Feldstein, who served as deputy assistant secretary of state for democracy, human rights, and labor in the Obama administration. “But if the question is, could they not have done this prior to facial recognition? My sense is no—they already had the ability to do that. Maybe just not quite as quickly or as efficiently.”
But the cameras are part and parcel of Russia’s increasing efforts to clamp down on free speech and peaceful protest. Authorities have passed an array of vaguely worded laws that have been used to crack down on social media. In contrast with China, where a sophisticated firewall kept pace with the internet’s growth, Russian authorities have relied on making examples of handfuls of people, handing down prison sentences for criticizing government policies in social media posts and memes, in what appears to be a bid to chill any other would-be critics.
The facial recognition cameras may have the same effect—and could help preemptively suppress protests without the need for heavy-handed riot police and embarrassing images of brutality. Rachel Denber, the deputy director of the Europe and Central Asia division at Human Rights Watch, said the way the cameras were used to target people days after demonstrations in support of Navalny was intended to send a signal to anyone considering attending a protest in the future.
“It’s very creepy. It’s intended to give the average citizen the chills and to make them think twice about going out to a peaceful gathering again,” she said. “They really want to send the message that you’re being watched. And not only that you’re being watched but that being watched is actionable.”