
Story

2 May 2023

The application of generative AI to warfare raises human rights concerns

Palantir YouTube Demo

Since the launch of ChatGPT in November 2022, generative artificial intelligence (AI) tools have been applied across a wide range of industries. The defense sector is no exception.

Defense companies are beginning to apply generative AI to their autonomous weapons systems without clearly explaining how salient human rights risks will be effectively mitigated. This could lead to situations where biased or inaccurate responses to generative AI queries are relied upon to make life-or-death decisions in times of conflict, with little clarity about accountability or access to remedy. And what happens when autonomous weapons systems malfunction, are hacked or fall into the wrong hands?

As explained by the UN Working Group on Business and Human Rights, heightened due diligence is required for businesses operating in conflict-affected areas, and technology companies must consider a range of salient human rights risks in this context. The articles below highlight concerns raised by civil society about the development and deployment of military and defense products powered by generative AI, including the need for greater transparency about how these AI models are trained, how mistakes are corrected and how human rights violations during times of conflict will be prevented.

For example, Palantir states that the use of "large language models (LLMs) and algorithms must be controlled in the highly regulated and sensitive context" of war to ensure that they are used in a "legal and ethical way", but does not explain further how the company will work to address salient human rights risks including the right to life, the right to privacy and the right to information (namely, mitigating errors based on misinformation). These salient risks apply to the soldiers who are fighting on the ground, civilians caught up in the conflict and vulnerable groups that are being displaced.

The president of the International Committee of the Red Cross (ICRC) stated the following in April 2023:

"We are witnessing the rapid development of autonomous weapon systems, including those controlled by artificial intelligence, together with military interest in loosening the constraints on where – or against what – those weapons will strike. These developments led the International Committee of the Red Cross to call on governments to establish new international constraints that are clear and binding."

Palantir Technologies responded to our request for comment, stating that "...[W]e outline considerations undergirding our belief that 'providers of technology involved in non-lethal and especially lethal use of force bear a responsibility to understand and confront the relevant ethical concerns and considerations surrounding the application of their products' and that '[t]his responsibility becomes all the more important the deeper technology becomes embedded in some of the most consequential decision-making processes...'" Click here to read the company's full response.

Company responses

Palantir Technologies: View response

Timeline
