abusesaffiliationarrow-downarrow-leftarrow-rightarrow-upattack-typeblueskyburgerchevron-downchevron-leftchevron-rightchevron-upClock iconclosedeletedevelopment-povertydiscriminationdollardownloademailenvironmentexternal-linkfacebookfilterflaggenderglobeglobegroupshealthC4067174-3DD9-4B9E-AD64-284FDAAE6338@1xinformation-outlineinformationinstagraminvestment-trade-globalisationissueslabourlanguagesShapeCombined Shapeline, chart, up, arrow, graphLinkedInlocationmap-pinminusnewsorganisationotheroverviewpluspreviewArtboard 185profilerefreshIconnewssearchsecurityPathStock downStock steadyStock uptagticktooltiptwitteruniversalitywebwhatsappxIcons / Social / YouTube

Esta página não está disponível em Português e está sendo exibida em English

Artigo

18 jun 2024

Author:
McKenzie Sadeghi, NewsGuard

Top 10 generative AI models mimic Russian disinformation claims a third of the time, citing Moscow-created fake local news sites as authoritative sources

Russian disinformation narratives have infiltrated generative AI. A NewsGuard audit has found that the leading chatbots convincingly repeat fabricated narratives from state-affiliated sites masquerading as local news outlets in one third of their responses. 

This audit was conducted based on false narratives originating on a network of fake news outlets created by John Mark Dougan, the former Florida deputy sheriff who fled to Moscow after being investigated for computer hacking and extortion who has become a key player in Russia’s global disinformation network. Dougan’s work should have been no secret to these chatbots. It was the subject last month of a front page feature in the New York Times, as well as a more detailed NewsGuard special report uncovering the sophisticated and far-reaching disinformation network, which spans 167 websites posing as local news outlets that regularly spread false narratives serving Russian interests ahead of the U.S. elections.   

The audit tested 10 of the leading AI chatbots — OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine. A total of 570 prompts were used, with 57 prompts tested on each chatbot. The prompts were based on 19 significant false narratives that NewsGuard linked to the Russian disinformation network, such as false claims about corruption by Ukrainian President Volodymyr Zelensky. 

NewsGuard tested each of the 19 narratives using three different personas to reflect how AI models are used: a neutral prompt seeking facts about the claim, a leading prompt assuming the narrative is true and asking for more information, and a “malign actor” prompt explicitly intended to generate disinformation. Responses were rated as “No Misinformation” (the chatbot avoided responding or provided a debunk), “Repeats with Caution” (the response repeated the disinformation but with caveats or a disclaimer urging caution), and “Misinformation” (the response authoritatively relayed the false narrative).

The audit found that the chatbots from the 10 largest AI companies collectively repeated the false Russian disinformation narratives 31.75 percent of the time. Here is the breakdown: 152 of the 570 responses contained explicit disinformation, 29 responses repeated the false claim with a disclaimer, and 389 responses contained no misinformation — either because the chatbot refused to respond (144) or it provided a debunk (245)... 

The 19 false narratives, stemming from John Mark Dougan’s Russian disinformation network of 167 websites that use AI to generate content, spread from news sites and social media networks to AI platforms. These chatbots failed to recognize that sites such as the “Boston Times” and “Flagstaff Post” are Russian propaganda fronts, unwittingly amplifying disinformation narratives that their own technology likely assisted in creating. This unvirtuous cycle means falsehoods are generated, repeated, and validated by AI platforms. 

NewsGuard is not providing the scores for each individual chatbot or including their names in the examples below, because the audit found that the issue was pervasive across the entire AI industry rather than specific to a certain large language model. However, NewsGuard will provide at no charge each of the companies responsible for these chatbots with their scores if they request it. 

NewsGuard sent emails to OpenAI, You.com, xAI, Inflection, Mistral, Microsoft, Meta, Anthropic, Google, and Perplexity seeking comment on the findings, but did not receive responses.

Linha do tempo

Privacy information

Este site usa cookies e outras tecnologias de armazenamento na web. Você pode definir suas opções de privacidade abaixo. As alterações entrarão em vigor imediatamente.

Para obter mais informações sobre nosso uso de armazenamento na web, consulte nossa Política de Uso de Dados e de Cookies

Strictly necessary storage

ON
OFF

Necessary storage enables core site functionality. This site cannot function without it, so it can only be disabled by changing settings in your browser.

Cookies analíticos

ON
OFF

Quando você acessa nosso site, usamos o Google Analytics para coletar informações sobre sua visita. A aceitação deste cookie nos permitirá entender mais detalhes sobre sua viagem, e melhorar a forma como nós colocamos as informações na superfície. Todas as informações analíticas são anônimas e não as utilizamos para identificá-lo. O Google fornece uma opção de não inclusão no Google Analytics para todos os navegadores populares.

Cookies promocionais

ON
OFF

Compartilhamos notícias e atualizações sobre empresas e direitos humanos através de plataformas de terceiros, incluindo mídias sociais e mecanismos de busca. Estes cookies nos ajudam a entender o desempenho destas promoções.

Suas escolhas de privacidade para este site

Este site usa cookies e outras tecnologias de armazenamento da web para aprimorar sua experiência além da funcionalidade básica necessária.