Meta, formerly known as Facebook, has been working to prevent a repeat of the misinformation problems that plagued the 2016 election. With the EU Parliament elections scheduled for June 6 to 9, Meta has unveiled a comprehensive plan to safeguard election integrity.
Meta's strategy, as outlined in a recent statement, revolves around three primary areas: combating misinformation, thwarting influence operations, and addressing generative AI abuse. To combat misinformation, Meta has collaborated with 26 fact-checking organizations and regularly publishes threat findings reports. The company is also refining its approach to generative AI, a technology that can rapidly produce content for political campaigns.
As generative AI technology advances, so does the risk of it being used to spread disinformation. Deepfakes impersonating prominent political figures have already raised concerns about the spread of fake news. Meta has moved to address this by partnering with fact-checking organizations to review AI-generated content and by demoting deceptive content in users' feeds.
To enhance transparency, Meta is developing tools to label AI-generated content and plans to introduce a feature that lets users disclose when content they share includes AI-generated video or audio. Advertisers will also be required to disclose any use of AI in creating their ads and to display a 'paid for by' disclaimer on them.
Moreover, Meta maintains an Ad Library that provides insight into ads currently running, including details on targeting and spending. Advertisers on Meta must also complete a verification process to confirm their identity and location within the EU. Users and advertisers who fail to comply with these guidelines may face consequences.
Meta's concerted efforts to fortify election integrity ahead of the EU Parliament elections underscore the company's commitment to combating misinformation and ensuring a fair electoral process.