Opinion

25 Jul 2023

Author:
Phil Bloomer, Executive Director, BHRRC

Taming the monster: Artificial intelligence & the duty of care

Last November, ChatGPT was released into our world with less safety regulation than a new model of toaster, and seemingly less concern about the harm it might create. Since then, a raft of generative artificial intelligence (AI) apps has been rushed to market.

There is little doubt generative AI can bring enormous benefits to our societies – ranging from new medicines to scientific research. But, like social media apps two decades ago, AI technology is currently released into a Wild West market with no effective regulation to direct its use to social and public benefit, nor to prevent its enormous potential for harm. Governments, companies, investors, unions and civil society are all raising the alarm. The dangers are real and wide-ranging: from extreme disinformation to mass surveillance, uncontrolled job losses, child sexual abuse imagery, gender violence and discrimination, and ballooning fraud.

Amid the fanfare and hyperbole around AI, and future valuations in the trillions of dollars, legislators are scrambling to design laws that show their electors that our democracies have the power to direct new technologies to the common good. But this comes amid dark warnings that the 'genie is out of the bottle' and regulation is futile, as it can never catch up with exponential technological development.

This is dangerous nonsense from vested interests and ideologues. We have in our hands powerful legal and regulatory tools that require companies to demonstrate a 'duty of care' in designing and producing their goods so they are safe for release and use. These laws demand companies assess the risks of their products and demonstrate clear efforts to mitigate them, both before and after the product is released onto the market (rather than the Sisyphean task of regulating for every eventuality after release). Toaster models are tested; house and skyscraper designs are assessed; new models of car and lorry must meet exacting safety standards. In democratic societies, this regulatory approach usually works well in the physical realm – including where design advances quickly. The same method is available for tech companies and their human rights impacts in the digital realm. And with fast-moving technology, this approach future-proofs our societies' regulations and the rights of people: it is the companies that launch and profit from these technologies that must assess the human rights risks of new digital designs and ensure they are safe, or face heavy penalties.

The European Union is currently in the final stages of approving perhaps the most powerful and relevant legislation: the Corporate Sustainability Due Diligence Directive (CSDDD). In essence, this does not seek to regulate for every harm companies might create for workers, communities, consumers or society. Rather, it demands companies assess the likely and severe human rights and environmental risks and impacts their business model generates across their full value chain. They must then take reasonable steps to prevent risks, or end and remedy the harm. If they fail in this duty of care, they face civil liability and costly administrative penalties. This approach now needs to be applied robustly to the digital realm.

Meta, Google, Microsoft, Apple, and the many smaller tech companies would have to immediately change the calculus of risk in their boardrooms around the development and release of generative AI, and their other technologies. Otherwise, those harmed by irresponsible release (and they will be many) can demand justice and remediation, and administrative authorities will take enforcement action. This should also be extended to criminal liability, given the scale of harm that can be created and the need to focus company directors' minds as much on the dangers as on the potential profits of rash early release. These duties should extend to investors, as they act as crucial gatekeepers in deciding what comes to market.

Responsible tech companies will welcome a duty of care approach. Upstream and continued investment in human rights and environmental due diligence will soon become far less costly than the price of liability for reckless product releases. And the law creates both a level playing field, preventing reckless firms from undercutting responsible companies' strategies, and legal certainty. Rising public concern about the power and irresponsibility of tech giants is driving active pursuit of legal accountability. Courts, regulators and politicians are answering this call in greater numbers. Just recently, US regulators announced a sweeping probe into the human harm ChatGPT may be generating.

The European Union, and the member states which will transpose the Directive into national law, must quickly strengthen the CSDDD's approach to the tech sector. This should include confirming the sector as high risk, which would bring medium-sized companies within scope alongside the giants. It should also include a duty of care covering companies' full value chain – including the impact of their products and services. We also urgently need a multilateral approach from the tech powerhouses of the EU, USA, China, Brazil and India to agree coherent legislation for the public good, based on the international human rights standards they are all party to.

Due diligence legislation is powerful, but it is not a silver bullet for safeguarding human rights in the face of fast-paced tech expansion. Other initiatives, such as the EU's proposed AI Act, have the potential to build on the powerful foundation of the CSDDD.

Exponential technological advances await us. Whoever controls these technologies will gain enormous power and wealth. Will that be a tiny elite of tech executives, or our democratic societies? Collectively, we still have the agency to tame the monster to deliver more caring, equitable and informed societies. Insisting now on tech’s duty of care is an immediate way to help create that future for us and for generations that follow.

By Phil Bloomer, Executive Director, Business & Human Rights Resource Centre