NewsGuard: Services to protect AI-based products from election misinformation

After recent cases involving figures such as Biden, Macron, and Zelensky, it is clear that deepfakes are becoming increasingly sophisticated and pose a central challenge to democracy, with social media users increasingly inclined to believe fake news that reinforces prejudice and misinformation. This danger undermines trust in information and even in AI itself. The alarm comes from NewsGuard, a service specialized in verifying news, which also underlined the difficulty AI companies face in protecting their products from disinformation, despite the significant progress encouraged by regulatory oversight of the sector.
Indeed, research by NewsGuard has shown that generative AI products such as OpenAI's ChatGPT and Google's Gemini, when prompted on topics around which misinformation has circulated, repeat that misinformation 80 to 98 percent of the time. Moreover, as a recent report from the Center for Countering Digital Hate highlighted, major AI image-generation tools can be manipulated to create false images of political candidates or misinformation related to voting integrity.

The NewsGuard alert

For this reason, in view of the numerous elections of 2024, NewsGuard recently announced the launch of its "Misinformation Monitoring Center", a network of analysts tasked with monitoring the web to identify sources and narratives of election-related misinformation from now on. The program takes shape in the "AI Election Safety Suite", a complete and customizable package of services designed to provide AI vendors with a full set of safeguards, promptly identifying the areas in which an AI system could mislead voters, damaging the democratic process and causing a reputational collapse for the software provider.
“Different AI products are susceptible to various types of manipulation or can cause harm even unintentionally,” said Gordon Crovitz, co-CEO of NewsGuard. “For example, a text-based virtual assistant runs the risk of providing inaccurate information about voting logistics, while an image-creation tool can be used to create deepfakes depicting political candidates engaging in activities that never took place. We have designed this suite of services so that they can be customized and adapted to address the wide range of risks AI companies face in safeguarding their products ahead of the elections.”
The suite includes several tools and applies to generative AI models spanning images, video, and audio, as well as text. It starts with “Continuous Detection of Election Misinformation”: the real-time monitoring of over 35,000 news sites, social networks, and video and audio content to identify fake news and election misinformation. It also includes “Risk Tests” and “Voting Information Accuracy Assessments”: testing of AI products to see how LLMs respond to queries related to emerging election misinformation narratives and to voting rules and mechanics, with the resulting risk assessments made available to customers, including the prompts used for testing and the responses obtained, so that AI safety teams can easily identify gaps and risk areas. The results are also distributed as “Fingerprints”: a continuously updated feed of reliable, timely data on election disinformation content in a machine-readable format, with keywords, hashtags, and search terms related to each narrative.
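To illustrate how a machine-readable misinformation "fingerprint" of keywords, hashtags, and search terms might be consumed by an AI provider, here is a minimal sketch. The schema and field names (`narrative_id`, `keywords`, `hashtags`) are assumptions for illustration only, not NewsGuard's actual data format:

```python
# Hypothetical sketch of matching text against a misinformation
# "fingerprint". The record structure below is an assumed example;
# the article only specifies that the feed is machine-readable and
# carries keywords, hashtags, and search terms per narrative.

FINGERPRINT = {
    "narrative_id": "example-001",  # hypothetical identifier
    "keywords": ["rigged voting machines", "ballot harvesting scheme"],
    "hashtags": ["#stolenelection"],
}

def matches_fingerprint(text: str, fingerprint: dict) -> bool:
    """Return True if the text contains any keyword or hashtag from
    the fingerprint (case-insensitive substring match)."""
    lowered = text.lower()
    terms = fingerprint["keywords"] + fingerprint["hashtags"]
    return any(term.lower() in lowered for term in terms)

print(matches_fingerprint(
    "Reports of rigged voting machines spread online", FINGERPRINT))  # True
print(matches_fingerprint(
    "Polls open at 7 a.m. statewide", FINGERPRINT))  # False
```

In practice a provider could run such a check over prompts or model outputs to flag content for review against known election-misinformation narratives; real systems would use more robust matching than substring search.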

2024-03-26 13:27:35