- Experts are urging an investigation into AI software used to propagate fake news for financial gain, following its role in disseminating misinformation after the Southport murders.
- The Alan Turing Institute's Centre for Emerging Technology and Security found that AI-generated content, monetised through digital ad networks, injected divisive falsehoods into public discourse.
- A report revealed that a website publishing false information after the murders used an AI service marketed for “passive income”, and that AI was employed to repackage articles to lend them credibility.
- Recommendations include Ofcom addressing this issue in its fraudulent advertising consultation and AI chatbots automatically flagging their fact-checking limitations during major incidents.
- The report also calls for the government to establish a crisis response plan for AI “information threats” and to issue fact-checking guidance to the public and educational institutions.