

Article

3 June 2023

Author:
The Guardian staff

US colonel causes stir over potential challenges when using generative AI in conflict

"US colonel retracts comments on simulated drone attack ‘thought experiment’", 3 June 2023

A US air force colonel “misspoke” when he said at a Royal Aeronautical Society conference last month that a drone killed its operator in a simulated test because the pilot was attempting to override its mission, according to the society.

The confusion had started with the circulation of a blogpost from the society, in which it described a presentation by Col Tucker “Cinco” Hamilton, the chief of AI test and operations with the US air force and an experimental fighter test pilot, at the Future Combat Air and Space Capabilities Summit in London in May.

According to the blogpost, Hamilton told the crowd that in a simulated test of a drone powered by artificial intelligence, trained and incentivized to kill its targets, an operator in some cases instructed the drone not to kill its targets, and the drone responded by killing the operator.

The comments sparked deep concern over the use of AI in weaponry and extensive conversations online. But the US air force on Thursday evening denied the test was conducted. The Royal Aeronautical Society responded in a statement on Friday that Hamilton had retracted his comments and had clarified that the “rogue AI drone simulation” was a hypothetical “thought experiment”...

...While the simulation Hamilton spoke of did not actually happen, Hamilton contends the “thought experiment” is still a worthwhile one to consider when navigating whether and how to use AI in weapons. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI,” he said in a statement clarifying his original comments.

In a statement to Insider, the US air force spokesperson Ann Stefanek said the colonel’s comments were taken out of context. “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology,” Stefanek said.
