

Story

9 July 2024

Leading AI chatbots spread Russian disinformation narratives, new research shows; incl. cos. non-responses

An investigation by NewsGuard found that 10 leading chatbots repeat Russian disinformation narratives originating from state-affiliated websites.

The authors analysed 570 responses (57 prompts per chatbot) and found that 152 responses repeated the false Russian disinformation narratives, 29 repeated the false claims with a disclaimer, and 389 contained no misinformation, either because the chatbot refused to respond (144) or provided a debunk (245).

Business and Human Rights Resource Centre invited OpenAI, You.com, xAI, Inflection, Mistral, Microsoft, Meta, Anthropic, Google and Perplexity to respond. None of the companies did.

Company responses

OpenAI

No response

You.com

No response

Inflection

No response

Meta (formerly Facebook)

No response

x.ai

No response

Mistral

No response

Microsoft

No response

Anthropic

No response

Google (part of Alphabet)

No response

Perplexity

No response

Timeline