Hong Kong police have launched an investigation after an employee at an unnamed company claimed she was duped into paying HK$200m (£20m) of her firm’s money to fraudsters in a deepfake video conference call.
The Hong Kong police force said it had received a report from a worker that she had been tricked into transferring the money by someone “posing as senior officers of the company”.
“Police received a report from a staff member of a company on 29 January that her company was deceived of some HK$200m after she received video conference calls from someone posing as senior officers of the company requesting to transfer money to designated bank accounts,” said police in a statement.
The force added that after an initial investigation the case had been classified as “obtaining property by deception” and was being handled by its cybercrime unit. No arrests have been made so far and investigations are continuing.
Hong Kong’s public broadcaster, RTHK, reported that the employee was a clerk working for an unnamed multinational firm. It quoted acting senior superintendent Baron Chan as speculating that the fraudster had used artificial intelligence to dupe the worker.
“[The fraudster] invited the informant [clerk] to a video conference that would have many participants. Because the people in the video conference looked like the real people, the informant … made 15 transactions as instructed to five local bank accounts, which came to a total of HK$200m,” he said.
“I believe the fraudster downloaded videos in advance and then used artificial intelligence to add fake voices to use in the video conference.”
RTHK added that the worker had received a message purporting to be from the company’s chief financial officer that mentioned the need for confidential transactions. Only after joining the call and sending the money did the employee contact the company’s head office and realise it was a scam.
“We can see from this case that fraudsters are able to use AI technology in online meetings, so people must be vigilant even in meetings with lots of participants,” said Chan.
AI-generated deepfakes are proliferating online, with social media platform X being forced to suspend Taylor Swift-related searches last month after fake sexually explicit images of the pop singer flooded its site. A fake version of US president Joe Biden’s voice was also used in robocalls to voters in the New Hampshire primary last month.
The UK’s cybersecurity agency warned in January that AI was making it increasingly difficult to identify phishing messages – where users are tricked into handing over passwords or personal details.