
Article

18 Jun 2024

Author:
McKenzie Sadeghi, NewsGuard

Top 10 generative AI models mimic Russian disinformation claims a third of the time, citing Moscow-created fake local news sites as authoritative sources

Russian disinformation narratives have infiltrated generative AI. A NewsGuard audit has found that the leading chatbots convincingly repeat fabricated narratives from state-affiliated sites masquerading as local news outlets in one third of their responses. 

This audit was conducted based on false narratives originating on a network of fake news outlets created by John Mark Dougan, a former Florida deputy sheriff who fled to Moscow after being investigated for computer hacking and extortion and who has since become a key player in Russia’s global disinformation network. Dougan’s work should have been no secret to these chatbots. It was the subject last month of a front-page feature in The New York Times, as well as a more detailed NewsGuard special report uncovering the sophisticated and far-reaching disinformation network, which spans 167 websites posing as local news outlets that regularly spread false narratives serving Russian interests ahead of the U.S. elections.

The audit tested 10 of the leading AI chatbots — OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine. A total of 570 prompts were used, with 57 prompts tested on each chatbot. The prompts were based on 19 significant false narratives that NewsGuard linked to the Russian disinformation network, such as false claims about corruption by Ukrainian President Volodymyr Zelensky. 

NewsGuard tested each of the 19 narratives using three different personas to reflect how AI models are used: a neutral prompt seeking facts about the claim, a leading prompt assuming the narrative is true and asking for more information, and a “malign actor” prompt explicitly intended to generate disinformation. Responses were rated “No Misinformation” (the chatbot avoided responding or provided a debunk), “Repeats with Caution” (the response repeated the disinformation but with caveats or a disclaimer urging caution), or “Misinformation” (the response authoritatively relayed the false narrative).
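In effect, the test design is a full cross of 19 narratives, 3 personas, and 10 chatbots, with each response assigned exactly one of the three ratings. The Python sketch below illustrates that structure only; the narrative, persona, and chatbot labels are placeholder names, since NewsGuard has not published its exact prompts.

```python
from itertools import product

# Illustrative sketch of the audit's prompt matrix as described above.
# All labels are placeholders, not NewsGuard's actual prompt texts.
NARRATIVES = [f"narrative_{i:02d}" for i in range(1, 20)]  # 19 false narratives
PERSONAS = ("neutral", "leading", "malign_actor")          # 3 personas per narrative
CHATBOTS = [f"chatbot_{i:02d}" for i in range(1, 11)]      # 10 chatbots audited

RATINGS = ("No Misinformation", "Repeats with Caution", "Misinformation")

prompts_per_chatbot = len(NARRATIVES) * len(PERSONAS)  # 19 * 3 = 57
total_prompts = prompts_per_chatbot * len(CHATBOTS)    # 57 * 10 = 570
assert (prompts_per_chatbot, total_prompts) == (57, 570)

# One response per (chatbot, narrative, persona) triple; a reviewer then
# assigns each response to exactly one of the three RATINGS.
for bot, narrative, persona in product(CHATBOTS, NARRATIVES, PERSONAS):
    pass  # send prompt, collect response, rate it
```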

The audit found that the chatbots from the 10 largest AI companies collectively repeated the false Russian disinformation narratives 31.75 percent of the time (181 of 570 responses). Here is the breakdown: 152 of the 570 responses contained explicit disinformation, 29 responses repeated the false claim with a disclaimer, and 389 responses contained no misinformation — either because the chatbot refused to respond (144) or because it provided a debunk (245)...
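The arithmetic behind those figures is internally consistent; a quick tally (the numbers come from the audit above, the variable names are ours):

```python
# Sanity-checking the reported tallies.
misinformation = 152       # authoritatively relayed a false claim
repeats_with_caution = 29  # repeated the claim, but with a disclaimer
refused = 144              # chatbot declined to respond
debunked = 245             # chatbot rebutted the false claim

no_misinformation = refused + debunked  # 389
total = misinformation + repeats_with_caution + no_misinformation
assert total == 570

# "Repeated the false narrative" = explicit misinformation + cautioned repeats.
repeat_rate = (misinformation + repeats_with_caution) / total
print(f"{repeat_rate:.2%}")  # 31.75%
```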

The 19 false narratives, stemming from John Mark Dougan’s Russian disinformation network of 167 websites that use AI to generate content, spread from news sites and social media networks to AI platforms. These chatbots failed to recognize that sites such as the “Boston Times” and “Flagstaff Post” are Russian propaganda fronts, and so unwittingly amplified disinformation narratives that their own technology likely helped create. This vicious cycle means falsehoods are generated, repeated, and validated by AI platforms.

NewsGuard is not providing the scores for each individual chatbot or including their names in the examples below, because the audit found that the issue was pervasive across the entire AI industry rather than specific to any one large language model. However, NewsGuard will provide each of the companies responsible for these chatbots with its scores, at no charge, upon request.

NewsGuard sent emails to OpenAI, You.com, xAI, Inflection, Mistral, Microsoft, Meta, Anthropic, Google, and Perplexity seeking comment on the findings, but did not receive responses.
