Why regulation is essential to tame tech's rush for AI
by Gayatri Khandhadai, Head of Technology and Human Rights, Business & Human Rights Resource Centre
In the span of just a few weeks, the AI landscape in the USA took a troubling turn. Grok AI, Elon Musk’s chatbot, veered wildly between Islamophobic and antisemitic rhetoric, exposing the deep flaws of unregulated generative systems. Days later, the US administration released its new AI Action Plan: a sweeping declaration that is unapologetically pro-business, at almost any cost.
Touted as a visionary push for US tech dominance, it dismantles safeguards for workers and users, fast-tracks infrastructure regardless of environmental risks and champions the export of US-made AI. Together, these developments signal an AI future that prioritises speed and market dominance at the expense of its emancipatory potential – posing dangers to human dignity, rights and sustainability.
It also embeds a dangerous – and wholly false – narrative: demonising regulation of multibillion-dollar enterprises as a barrier to progress and profitability.
Due diligence backed by regulation is the real innovation
We are often told that technology moves too fast for regulation to keep pace, making meaningful safeguards impossible. In one sense this is true – with each advance a new form of harm emerges, rendering existing legislation potentially obsolete. Reactive regulation of a sector this fast-moving is futile.
But the sector’s swift pace and relentless innovation can’t exempt it from oversight: it simply requires a different approach.
If refocused away from the symptoms, such as unfettered access to harmful content or discriminatory algorithms, and toward the cause – inadequate risk assessment in the development and deployment stages of AI – regulation can absolutely be an answer to the vast and unanticipated harms of these products and services, all while safeguarding the sector's ability to innovate and fulfil its emancipatory potential.
How? By mandating a duty of care from AI companies and investors before products reach the market.
Just as car manufacturers must assess the entirety of their operations during development – products as well as plants – to mitigate harm to people and the environment, AI companies should assess and mitigate the potential risks of their products and services: from data centres to apps, and from harm to the mental health of the young to the systematic scamming of the old.
This isn’t radical. It is the foundation of the Corporate Sustainability Due Diligence Directive (CSDDD) currently progressing in the EU, which mandates that companies evaluate human rights and environmental risks across their value chains and then prevent or remedy them. That countries around the world – from South Korea to Brazil – are looking seriously at implementing similar approaches to corporate accountability demonstrates the model's efficacy and appeal.
A due diligence approach harnesses the knowledge and expertise of those best placed to anticipate and address risks – the companies and investors designing, deploying and profiting from these technologies – instead of requiring governments to regulate constantly after the fact, playing an endless game of catch-up.
Innovation or safety? It can be both
The US plan sets up a false dichotomy between innovation and rights-based safeguards. This framing obscures the real issue. It isn’t innovation that causes harm, but a lack of accountability. When tech companies are left to innovate without oversight or responsibility, widespread harms occur.
Our research shows the extent of this harm to users and workers. Content moderators are traumatised by their work; workers deep in supply chains are exploited by coercive algorithms; social media platforms silence dissent and demean and endanger vulnerable groups – including women and LGBTQI+ people – while AI-powered job sites apply gender and racial discrimination.
These are not unfortunate side effects. They are baked into business models that prioritise growth over responsibility. The most obvious case in point is social media platforms, which reward addictive content – posts generating outrage, anger or fear – because it increases advertising revenue.
But smart regulation dismantles the idea that we have to choose between a profitable, cutting-edge tech sector and public safety. Instead, it promotes innovation of socially and environmentally useful technologies – those our societies need and want – rather than those that drive social media addiction, hate and fear.
Regulation designed to keep pace with exponential growth
The advantage of a due diligence framework is that it is anticipatory. Legislators cannot predict every misuse or future risk in such a fast-evolving sector. But laws mandating risk identification and mitigation at every stage offer a way to future-proof protections without stifling innovation. They also level the playing field by ensuring that all companies – not just the responsible ones – embed safeguards in their products.
The EU’s AI Act, for example, establishes just this risk-based model, setting out clear rules according to the level of risk posed to individuals and society. Other jurisdictions are following suit, even as the US shifts into deregulatory overdrive. Let’s hope these legislators can hold their nerve in the interest of the public good, rather than be emboldened or intimidated into replicating America’s recklessness.
Companies and investors – key to a safer, more just digital world
Companies and investors – the gatekeepers of the AI economy – decide what comes to market. They hold the power to ensure that workers and users are safe, and must be held responsible when harms occur.
The US plan’s celebration of speed at the expense of human welfare and safety is a warning. Following this model risks entrenching a race to the bottom – not only in labour and environmental standards, but in our right to dignity, privacy and freedom from harm.
A better future is still within reach, but only if we reject the false trade-off between innovation and rights.
The real test of leadership is not who moves fastest at any cost, but who moves most responsibly.