Artificial intelligence is rewriting the playbook for crime, from cheap deepfake scams and AI-written ransomware to mass identity hijacks and critical-infrastructure hacks.
Why it matters: This new class of AI-supercharged crime is putting lives and financial systems at risk. But police training, laws and cross-border tools aren't keeping up, futurists tell Axios.
- Off-the-shelf AI lowers the skill level and cost of carrying out attacks, enabling small crews to execute schemes that previously required nation-state resources.
- Crimes can now hit millions at once with voice clones and account takeovers, while local agencies are trained and funded to chase one case at a time.
How it works: AI can automate "lock-picking" attempts on a system millions of times per second, something no human can do, futurist Ian Khan tells Axios.
- Once inside, hackers can use AI to steal identities, run pump-and-dump stock schemes and wreak havoc on water plants, smart homes and hospitals.
- The attacks can come from across the street or from the other side of the world, said futurist Marc Goodman, author of "Future Crimes: Inside the Digital Underground and the Battle for Our Connected World."
- Deepfake voices can convince victims to hand over money, and stolen identities can lead to voter fraud, child pornography and false arrests.
The purpose: anything from extorting money to inflicting harm on millions.
The latest: Chinese state-backed hackers used AI tools from Anthropic to automate breaches of major companies and foreign governments during a September cyber campaign, the company said Thursday.
- "We believe this is the first documented case of a large-scale cyberattack executed without substantial human intervention," the company said in a statement.
State of play: Deepfake fraud attempts surged 3,000% in 2023, per DeepStrike, a cybersecurity group.
- U.S. losses from fraud that relies on generative AI are projected to reach $40 billion by 2027, according to the Deloitte Center for Financial Services.
- Generative AI has increased the speed and scale of synthetic-identity fraud, especially across real-time payment rails, according to the Federal Reserve Bank of Boston.
- A deepfake attack occurred every five minutes globally in 2024, while digital-document forgeries jumped 244% year over year, per a report from the Entrust Cybersecurity Institute.
Zoom in: Beyond large-scale attacks, even petty AI crimes have local law enforcement on edge.
- AI-powered drones could collect data on the best places to bury bodies along less-traveled roads.
- Future robo-dogs could burglarize homes.
- Hacked cars may simply drive themselves to chop shops, and AI systems could tell a would-be thief the best way to break into a car.
The bottom line: Few police academies are training cadets to spot AI or computer crimes, both Khan and Goodman said.
- That's leaving enforcement to federal authorities, who then need international cooperation to stop worldwide syndicates.
- However, Miami Dade College announced this year that it will be one of the first U.S. police academies to train cadets using a dedicated AI-assistant tool, in partnership with the AI company Truleo, the Miami New Times reports.
- The Police Executive Research Forum (PERF) has been encouraging agencies to develop policies and training as AI policing tools become more prevalent.