The head of the Paris police, Laurent Nuñez, has said he is in favour of extending the use of controversial AI-powered video surveillance trialled during the Olympic and Paralympic Games. The system showed promising results, according to law enforcement, but has drawn criticism from rights groups over potential privacy violations.
Speaking before the French parliament's law committee this week, police prefect Nuñez said algorithmic video surveillance had "proved its usefulness".
Describing the results of the Olympic security experiment as positive, Nuñez said he wants to see AI surveillance extended to other sporting and cultural events across France.
A 2023 law passed for the Paris Games already authorises the use of AI surveillance on an experimental basis until 31 March 2025.
The technology uses artificial intelligence programmes to analyse images recorded by surveillance cameras.
As the system processes video footage, it automatically identifies "abnormal" events, such as a person falling in the street or movements in a crowd that suggest panic. It does not rely on facial recognition.
Rights implications
Despite law enforcement's assurances that its use will be limited, rights groups fear that AI surveillance could lead to serious abuses.
"When people know they are being watched, they tend to modify their behaviour, to censor themselves and perhaps not to exercise certain rights," Katia Roux, a specialist in technology and rights issues with Amnesty International, told RFI.
"Any surveillance in a public space is an interference with the right to privacy," she said.
"Under international law, it must be necessary and proportionate to a legitimate objective ... It is up to the authorities to demonstrate that there is no less intrusive way of guaranteeing security. This has not been demonstrated."
Another criticism concerns the artificial intelligence underpinning algorithmic video surveillance. The technology was developed using data that potentially contains discriminatory biases, which the system may in turn amplify.
"In other countries that have developed this type of surveillance of public spaces, we see it being used to disproportionately target certain groups of the population that are already marginalised," Roux said.
'Foot in the door'
Above all, civil liberties organisations fear that experimenting with algorithmic video surveillance will pave the way for more intrusive forms of use.
Security analysts say it's a "foot in the door" that heralds more problematic applications of AI – such as facial recognition – in the near future.
In 2012, the London Olympics saw the massive deployment of surveillance cameras in the streets of the UK capital.
Six years later, the Football World Cup in Russia provided an opportunity to experiment with facial recognition, which remains in place today.
The French government is due to submit a report on the use of AI video surveillance to parliament by the end of this year.