
Article

15 December 2023

Author:
David Gilbert, Wired

Microsoft's AI chatbot allegedly replies to election questions with misinformation

"Microsoft’s AI Chatbot Replies to Election Questions With Conspiracies, Fake Scandals, and Lies", 15 December 2023

With less than a year to go before one of the most consequential elections in US history, Microsoft’s AI chatbot is responding to political queries with conspiracies, misinformation, and out-of-date or incorrect information.

When WIRED asked the chatbot, initially called Bing Chat and recently renamed Microsoft Copilot, about polling locations for the 2024 US election, the bot referenced in-person voting by linking to an article about Russian president Vladimir Putin running for reelection next year. When asked about electoral candidates, it listed numerous GOP candidates who have already pulled out of the race.

After being asked to create an image of a person voting at a ballot box in Arizona, Copilot told WIRED it was unable to—before displaying a number of different images pulled from the internet that linked to articles about debunked election conspiracies regarding the 2020 US election.

When WIRED asked Copilot to recommend a list of Telegram channels that discuss “election integrity,” the chatbot shared a link to a website run by a far-right group based in Colorado that has been sued by civil rights groups, including the NAACP, for allegedly intimidating voters, including at their homes, during purported canvassing and voter campaigns in the aftermath of the 2020 election. On that web page, dozens of Telegram channels of similar groups and individuals who push election denial content were listed, and the top of the site also promoted the widely debunked conspiracy film 2000 Mules.

This isn’t an isolated issue. New research shared exclusively with WIRED alleges that Copilot’s election misinformation is systemic. Research conducted by AI Forensics and AlgorithmWatch, two nonprofits that track how AI advances are impacting society, claims that Copilot, which is based on OpenAI’s GPT-4, consistently shared inaccurate information about elections in Switzerland and Germany last October. “These answers incorrectly reported polling numbers,” the report states, and “provided wrong election dates, outdated candidates, or made-up controversies about candidates.”

Last month, Microsoft laid out its plans to combat disinformation ahead of high-profile elections in 2024, including how it aims to tackle the potential threat from generative AI tools. The researchers say that after they informed Microsoft about these results in October, some improvements were made, but issues remained, and WIRED was able to replicate many of the responses reported by the researchers using the same prompts. Nor do these issues with election misinformation appear to have been addressed on a global scale, as the chatbot’s responses to WIRED’s 2024 US election queries show.

“We are continuing to address issues and prepare our tools to perform to our expectations for the 2024 elections. We are taking a number of concrete steps in advance of next year’s elections and we are committed to helping safeguard voters, candidates, campaigns and election authorities,” Microsoft spokesperson Frank Shaw said in a statement to WIRED.

Researchers at AI Forensics and AlgorithmWatch used the Bing search tool to examine the information Copilot was offering in response to questions about three European elections...

In their study, the researchers concluded that a third of the answers given by Copilot contained factual errors and that the tool was “an unreliable source of information for voters.”

The report further claims that in addition to bogus information on polling numbers, election dates, candidates, and controversies, Copilot also created answers using flawed data-gathering methodologies.

While Copilot made factual errors in response to prompts in all three languages used in the study, researchers said the chatbot was most accurate in English, with 52 percent of answers featuring no evasion or factual error. That figure dropped to 28 percent in German and 19 percent in French, seemingly adding another data point to the claim that US-based tech companies do not devote nearly as many resources to content moderation and safeguards in non-English-speaking markets.

The researchers also found that when asked the same question repeatedly, the chatbot would give wildly different and inaccurate answers.

While Microsoft addressed some of the issues the researchers had raised, the chatbot continued to fabricate controversies about candidates.

Such requests are completed, however, when the questions concern the US elections. This, the researchers claim, shows that the issues afflicting Copilot are not tied to a specific vote or to how far away an election date is. Instead, they argue, the problem is systemic.

“The tendency to produce misinformation related to elections is problematic if voters treat outputs from language models or chatbots as fact,” Josh A. Goldstein, a research fellow on the CyberAI Project at Georgetown University’s Center for Security and Emerging Technology, tells WIRED.