

Article

1 December 2023

Author:
Harry Davies, Bethan McKernan & Dan Sabbagh, The Guardian

OPT/Israel: Report exposes the role of AI in Israel's targeting of civilians & civilian infrastructure

"‘The Gospel’: how Israel uses AI to select bombing targets in Gaza", 1 December 2023

Israel’s military has made no secret of the intensity of its bombardment of the Gaza Strip.

There has, however, been relatively little attention paid to the methods used by the Israel Defense Forces (IDF) to select targets in Gaza, and to the role artificial intelligence has played in their bombing campaign.

The slowly emerging picture of how Israel’s military is harnessing AI comes against a backdrop of growing concerns about the risks posed to civilians as advanced militaries around the world expand the use of complex and opaque automated systems on the battlefield.

From 50 targets a year to 100 a day

In early November, the IDF said “more than 12,000” targets in Gaza had been identified by its target administration division.

A short statement on the IDF website said it was using an AI-based system called Habsora (the Gospel, in English) in the war against Hamas to “produce targets at a fast pace”.

Multiple sources familiar with the IDF’s targeting processes confirmed the existence of the Gospel to +972/Local Call, saying it had been used to produce automated recommendations for attacking targets, such as the private homes of individuals suspected of being Hamas or Islamic Jihad operatives.

Aviv Kochavi, who served as the head of the IDF until January, has said the target division is “powered by AI capabilities” and includes hundreds of officers and soldiers.

In an interview published before the war, he said it was “a machine that produces vast amounts of data more effectively than any human, and translates it into targets for attack”.

According to Kochavi, “once this machine was activated” in Israel’s 11-day war with Hamas in May 2021 it generated 100 targets a day. “To put that into perspective, in the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.”

Precisely what forms of data are ingested into the Gospel is not known. But experts said AI-based decision support systems for targeting would typically analyse large sets of information from a range of sources, such as drone footage, intercepted communications, surveillance data and information drawn from monitoring the movements and behaviour patterns of individuals and large groups.

One official, who worked on targeting decisions in previous Gaza operations, said the IDF had not previously targeted the homes of junior Hamas members for bombing. They said they believed that had changed for the present conflict, with the houses of suspected Hamas operatives now targeted regardless of rank.

“That is a lot of houses,” the official told +972/Local Call. “Hamas members who don’t really mean anything live in homes across Gaza. So they mark the home and bomb the house and kill everyone there.”

Targets given ‘score’ for likely civilian death toll

In the IDF’s brief statement about its target division, a senior official said the unit “produces precise attacks on infrastructure associated with Hamas while inflicting great damage to the enemy and minimal harm to non-combatants”.

The precision of strikes recommended by the “AI target bank” has been emphasised in multiple reports in Israeli media.

However, experts in AI and armed conflict who spoke to the Guardian said they were sceptical of assertions that AI-based systems reduced civilian harm by encouraging more accurate targeting.

A lawyer who advises governments on AI and compliance with humanitarian law said there was “little empirical evidence” to support such claims. Others pointed to the visible impact of the bombardment.

“Look at the physical landscape of Gaza,” said Richard Moyes, a researcher who heads Article 36, a group that campaigns to reduce harm from weapons.

Multiple sources told the Guardian and +972/Local Call that when a strike was authorised on the private home of an individual identified as a Hamas or Islamic Jihad operative, target researchers knew in advance the number of civilians expected to be killed.

One of the sources said there had been occasions when “there was doubt about a target” and “we killed what I thought was a disproportionate amount of civilians”.

‘Mass assassination factory’

Sources familiar with how AI-based systems have been integrated into the IDF’s operations said such tools had significantly sped up the target creation process.

“We prepare the targets automatically and work according to a checklist,” a source who previously worked in the target division told +972/Local Call. “It really is like a factory. We work quickly and there is no time to delve deep into the target. The view is that we are judged according to how many targets we manage to generate.”

A separate source told the publication the Gospel had allowed the IDF to run a “mass assassination factory” in which the “emphasis is on quantity and not on quality”. A human eye, they said, “will go over the targets before each attack, but it need not spend a lot of time on them”.

For some experts who research AI and international humanitarian law, an acceleration of this kind raises a number of concerns.

Dr Marta Bo, a researcher at the Stockholm International Peace Research Institute, said that even when “humans are in the loop” there is a risk they develop “automation bias” and “over-rely on systems which come to have too much influence over complex human decisions”.

Moyes, of Article 36, said that when relying on tools such as the Gospel, a commander “is handed a list of targets a computer has generated” and they “don’t necessarily know how the list has been created or have the ability to adequately interrogate and question the targeting recommendations”.