

Article

4 June 2024

Authors:
Pranshu Verma & Nitasha Tiku, The Washington Post
taz

OpenAI, Anthropic & Google DeepMind employees warn about AI risks & urge changes to ensure transparency & public debate

"AI employees warn of technology’s dangers, call for sweeping company changes", 4 June 2024

A handful of current and former employees at OpenAI and other prominent artificial intelligence companies warned that the technology poses grave risks to humanity in a letter, calling on companies to implement sweeping changes to ensure transparency and foster a culture of public debate.

The letter, signed by 13 people including current and former employees at Anthropic and Google’s DeepMind, said AI can exacerbate inequality, increase misinformation, and allow AI systems to become autonomous and cause significant death. Though these risks could be mitigated, corporations in control of the software have “strong financial incentives” to limit oversight, they said.

Because AI is only loosely regulated, accountability rests on company insiders, the employees wrote, calling on corporations to lift nondisclosure agreements and give workers protections that allow them to anonymously raise concerns.

Daniel Kokotajlo, a former employee at OpenAI, said he left the start-up because of the company’s disregard for the risks of artificial intelligence.

“I lost hope that they would act responsibly, particularly as they pursue artificial general intelligence,” he said in a statement, referring to the hotly contested term for computers matching the power of human brains.

“They and others have bought into the ‘move fast and break things’ approach, and that is the opposite of what is needed for technology this powerful and this poorly understood,” Kokotajlo said.

Liz Bourgeois, a spokesperson at OpenAI, said the company agrees that “rigorous debate is crucial given the significance of this technology.” Representatives from Anthropic and Google did not immediately reply to a request for comment.

The employees said that absent government oversight, AI workers are the “few people” who can hold corporations accountable. They said that they are hamstrung by “broad confidentiality agreements” and that ordinary whistleblower protections are “insufficient” because they focus on illegal activity, and the risks that they are warning about are not yet regulated.

The letter called for AI companies to commit to four principles to allow for greater transparency and whistleblower protections: not entering into or enforcing agreements that prohibit criticism of risks; establishing an anonymous process for current and former employees to raise concerns; supporting a culture of criticism; and not retaliating against current and former employees who share confidential information to raise alarms “after other processes have failed.”