In this era of the unparalleled rise of Generative AI, concerns over its possible interference in elections have prompted proactive measures from both tech companies and election officials. This raises the central question: What is being done to prevent AI interference in elections?
The recent incident of a robocall impersonating President Joe Biden sheds light on the potential danger AI poses in spreading misinformation and creating disruptions. The message told recipients not to vote in Tuesday's presidential primary, and the New Hampshire attorney general's investigation into the incident marked a legal response to this attempt at voter suppression. The attorney general's office said in a statement: "These messages appear to be an unlawful attempt to disrupt the New Hampshire Presidential Primary Election and to suppress New Hampshire voters."
Oren Etzioni, an artificial intelligence expert, expresses concern, stating, "I expect a tsunami of misinformation. I can't prove that. I hope to be proven wrong. But the ingredients are there, and I am completely terrified."
OpenAI, the creator of leading AI tools like ChatGPT and Dall-E, is at the forefront of addressing these issues. Recognizing both the benefits and the challenges that AI tools bring, OpenAI has taken substantial measures to mitigate potential misuse. According to the company, "Like any new technology, these tools come with benefits and challenges. They are also unprecedented, and we will keep evolving our approach as we learn more about how our tools are used."
With its Dall-E tool, OpenAI has put safeguards in place to prevent the generation of images of real individuals, a much needed step toward impeding deepfakes of political figures. Sam Altman, OpenAI's chief executive, has previously said he is "nervous" about the threat generative AI poses to election integrity.
The Trump factor may also exacerbate misinformation. Without any concrete evidence, Trump has already primed his supporters to anticipate fraud in the 2024 election and encouraged them to "guard the vote" against alleged vote rigging in various Democratic-led cities, raising questions about the effect on public trust and the risk of election-related violence.
Social media platforms like Twitter, Meta, and YouTube, once considered forerunners in combating misinformation, are now under scrutiny for weakening their policies on hate and misinformation. Jesse Lehrich, co-founder of Accountable Tech, notes, "Obviously now they're on the exact other end of the spectrum."
By collaborating with the Coalition for Content Provenance and Authenticity, OpenAI is working to develop methods for identifying AI-generated content, especially images. The company speaks to the significance of this effort: "We have drawn together members of our safety systems, threat intelligence, legal, engineering, and policy teams to investigate and address any potential abuse of our technology."
On the text-generation side, OpenAI has announced a new ChatGPT feature that directs users to trustworthy voting information websites, illustrated by its integration with CanIVote.org. This move is intended to counter the spread of misinformation and guide users to authoritative sources on voting procedures. OpenAI states, "Our approach is to continue our platform safety work by elevating accurate voting information, enforcing measured policies, and improving transparency."
Meanwhile, election officials are vigorously preparing for the anticipated surge of election denial narratives. Measures such as public education campaigns and legal protections for election workers have been adopted to counter misinformation and rebuild trust in the electoral process. Jena Griswold, Colorado's Secretary of State, highlights the uphill struggle: "Misinformation is one of the biggest threats to American democracy we see today."