

Article

18 February 2025

Author:
Michael Biesecker, Sam Mednick and Garance Burke, The Associated Press

AP exposes Big Tech AI systems' direct role in warfare amid Israel's war in Gaza

"As Israel uses US-made AI models in war, concerns arise about tech’s role in who lives and who dies", 18 February 2025

...

Militaries have for years hired private companies to build custom autonomous weapons. However, Israel’s recent wars mark a leading instance in which commercial AI models made in the United States have been used in active warfare, despite concerns that they were not originally developed to help decide who lives and who dies.

The Israeli military uses AI to sift through vast troves of intelligence, intercepted communications and surveillance to find suspicious speech or behavior and learn the movements of its enemies. After a deadly surprise attack by Hamas militants on Oct. 7, 2023, its use of Microsoft and OpenAI technology skyrocketed, an Associated Press investigation found. The investigation also revealed new details of how AI systems select targets and ways they can go wrong, including faulty data or flawed algorithms. It was based on internal documents, data and exclusive interviews with current and former Israeli officials and company employees.

“This is the first confirmation we have gotten that commercial AI models are directly being used in warfare,” said Heidy Khlaaf, chief AI scientist at the AI Now Institute and former senior safety engineer at OpenAI. “The implications are enormous for the role of tech in enabling this type of unethical and unlawful warfare going forward.”

The rise of AI

As U.S. tech titans ascend to prominent roles under President Donald Trump, the AP’s findings raise questions about Silicon Valley’s role in the future of automated warfare. Microsoft expects its partnership with the Israeli military to grow, and what happens with Israel may help determine the use of these emerging technologies around the world.

The Israeli military’s usage of Microsoft and OpenAI artificial intelligence spiked last March to nearly 200 times the level recorded in the week leading up to the Oct. 7 attack, the AP found in reviewing internal company information. The amount of data it stored on Microsoft servers doubled between that time and July 2024 to more than 13.6 petabytes — roughly 350 times the digital memory needed to store every book in the Library of Congress. Usage of Microsoft’s huge banks of computer servers by the military also rose by almost two-thirds in the first two months of the war alone.

...

...Officials insist that even when AI plays a role, there are always several layers of humans in the loop.

...

The Israeli military declined to answer detailed written questions from the AP about its use of commercial AI products from American tech companies.

Microsoft declined to comment for this story and did not respond to a detailed list of written questions about cloud and AI services provided to the Israeli military. In a statement on its website, the company says it is committed “to champion the positive role of technology across the globe.” In its 40-page Responsible AI Transparency Report for 2024, Microsoft pledges to manage the risks of AI throughout development “to reduce the risk of harm,” and does not mention its lucrative military contracts.

The Israeli military purchases advanced AI models from OpenAI, the maker of ChatGPT, through Microsoft’s Azure cloud platform, the documents and data show. Microsoft has been OpenAI’s largest investor. OpenAI said it does not have a partnership with Israel’s military, and its usage policies say its customers should not use its products to develop weapons, destroy property or harm people. About a year ago, however, OpenAI changed its terms of use from barring military use to allowing for “national security use cases that align with our mission.”

The human toll of AI

It’s extremely hard to identify when AI systems enable errors because they are used with so many other forms of intelligence, including human intelligence, sources said. But together they can lead to wrongful deaths.

...

An Israeli intelligence officer told the AP that AI has been used to help pinpoint all targets in the past three years. In this case, AI likely pinpointed a residence, and other intelligence gathering could have placed a person there. At some point, the car left the residence.

Humans in the target room would have decided to strike. The error could have happened at any point, he said: Previous faulty information could have flagged the wrong residence, or they could have hit the wrong vehicle.

The AP also saw a message from a second source with knowledge of that airstrike who confirmed it was a mistake, but didn’t elaborate.

A spokesperson for the Israeli military denied that AI systems were used during the airstrike itself, but refused to answer whether AI helped select the target or whether it was wrong. The military told the AP that officials examined the incident and expressed “sorrow for the outcome.”

How it works

Microsoft and the San Francisco-based startup OpenAI are among a legion of U.S. tech firms that have supported Israel’s wars in recent years.

Google and Amazon provide cloud computing and AI services to the Israeli military under “Project Nimbus,” a $1.2 billion contract signed in 2021 when Israel first tested out its in-house AI-powered targeting systems. The military has used Cisco and Dell server farms or data centers. Red Hat, an independent IBM subsidiary, also has provided cloud computing technologies to the Israeli military, and Palantir Technologies, a Microsoft partner in U.S. defense contracts, has a “strategic partnership” providing AI systems to help Israel’s war efforts.

Google said it is committed to responsibly developing and deploying AI “that protects people, promotes global growth, and supports national security.” Dell provided a statement saying the company commits to the highest standards in working with public and private organizations globally, including in Israel. Red Hat spokesperson Allison Showalter said the company is proud of its global customers, who comply with Red Hat’s terms to adhere to applicable laws and regulations.

Palantir, Cisco and Oracle did not respond to requests for comment. Amazon declined to comment.

...

Deep ties

Among U.S. tech firms, Microsoft has had an especially close relationship with the Israeli military spanning decades.

That relationship, alongside those with other tech companies, stepped up after the Hamas attack. Israel’s war response strained its own servers and increased its reliance on outside, third-party vendors, according to a presentation last year by the military’s top information technology officer. As she described how AI had provided Israel “very significant operational effectiveness” in Gaza, the logos of Microsoft Azure, Google Cloud and Amazon Web Services appeared on a large screen behind her.

“We’ve already reached a point where our systems really need it,” said Col. Racheli Dembinsky, commander of the Center of Computing and Information Systems, known by its Hebrew acronym, Mamram.

...

Pushback from workers

The relationship between tech companies and the Israeli military also has ramifications in the U.S., where some employees have raised ethical concerns.

In October, Microsoft fired two workers for helping organize an unauthorized lunchtime vigil for Palestinian refugees at its corporate campus in Redmond, Washington. Microsoft said at the time that it ended the employment of some people “in accordance with internal policy” but declined to give details.

Hossam Nasr, one of the employees fired by Microsoft who works with the advocacy group No Azure for Apartheid, said he and former colleagues are pushing for Microsoft to stop selling cloud and AI services to the Israeli military.

“Cloud and AI are the bombs and bullets of the 21st century,” Nasr said. “Microsoft is providing the Israeli military with digital weapons to kill, maim and displace Palestinians, in the gravest moral travesty of our time.”

In April, Google fired about 50 of its workers over a sit-in at the company’s California headquarters protesting the war in Gaza.

...

Google said the employees were fired because they disrupted work spaces and made colleagues feel unsafe. Google did not respond to specific questions about whether it was contracted to build a sovereign cloud for the Israeli military or whether it placed restrictions on the wartime use of its AI models.

Gaza is now in an uneasy ceasefire. But recently, the Israeli government announced it would expand its artificial intelligence developments across all its military branches.

Meanwhile, U.S. tech titans keep consolidating power in Washington.

...

In a new book set to be published Tuesday, Palantir CEO Alexander Karp calls for the U.S. military and its allies to work closely with Silicon Valley to design, build and acquire AI weaponry, including “the unmanned drone swarms and robots that will dominate the coming battlefield.”

“The fate of the United States, and its allies, depends on the ability of their defense and intelligence agencies to evolve, and briskly,” Karp wrote, according to an advance copy obtained by the AP.

After OpenAI changed its terms of use last year to allow for national security purposes, Google followed suit earlier this month with a similar change to its public ethics policy, removing language saying it wouldn’t use its AI for weapons and surveillance.

As tech companies jockey for contracts, those who lost relatives still search for answers.

...
