
Story

9 July 2024

Leading AI chatbots spread Russian disinformation narratives, new research shows; incl. companies' non-responses

An investigation by NewsGuard found that 10 leading chatbots generate Russian disinformation narratives from state-affiliated websites.

The authors analysed 570 responses (57 prompts per chatbot) and found that 152 responses repeated false Russian disinformation narratives, 29 repeated a false claim but added a disclaimer, and 389 contained no misinformation, either because the chatbot refused to respond (144) or because it provided a debunk (245).

Business and Human Rights Resource Centre invited OpenAI, You.com, xAI, Inflection, Mistral, Microsoft, Meta, Anthropic, Google and Perplexity to respond. None of the companies did.

Company response requests

OpenAI

No response

You.com

No response

Inflection

No response

Meta (formerly Facebook)

No response

x.ai

No response

Mistral

No response

Microsoft

No response

Anthropic

No response

Google (part of Alphabet)

No response

Perplexity

No response
