Silicon Shadows: Venture capital, human rights, and the lack of due diligence
Amnesty International & BHRRC
Venture capitalists (VCs) wield significant influence over the direction of technological innovation. Their early-stage investment decisions play a large role in determining which technologies make it to market, yet the prioritization of profit often means overlooking salient human rights risks. Given the extraordinary amount of funding being poured into Generative AI and the implications this technology can have for our economies, democracies and societies, VCs should be prioritizing responsible investment to avoid facilitating the development and deployment of biased or underdeveloped artificial intelligence (AI) systems.
To assess the extent to which leading VC firms conduct human rights due diligence on their investments in companies developing Generative AI, Amnesty International USA and the Business & Human Rights Resource Centre surveyed the 10 largest venture capital funds and the two largest start-up accelerators most actively investing in Generative AI.
We found that leading VC firms are failing in their responsibility to respect human rights, especially in relation to new Generative AI technologies.
Leading VC firms have not implemented basic human rights due diligence processes to ensure the companies and technologies they fund are rights-respecting, as mandated by the UN Guiding Principles on Business and Human Rights (UNGPs). This is particularly concerning given the potentially transformative impacts Generative AI technologies could have on our economies, politics and societies.
The VC firms and start-up accelerators surveyed, all based in the US, were Insight Partners, Tiger Global Management, Sequoia Capital, Andreessen Horowitz, Lightspeed Venture Partners, New Enterprise Associates, Bessemer Venture Partners, General Catalyst Partners, Founders Fund, Technology Crossover Ventures, Techstars and Y Combinator.
“Generative AI is poised to become a transformative technology that could potentially touch everything in our lives. While this emerging technology presents new opportunities, it also poses incredible risks, which, if left unchecked, could undermine our human rights. Venture capital is investing heavily in this field, and we need to ensure that this money is being deployed in a responsible, rights-respecting way.”
Michael Kleinman, Director of AIUSA’s Silicon Valley Initiative
With the launch of ChatGPT in late 2022, the appetite for Generative AI investment has been described as ‘insatiable’. In the first six months of 2023, funding for Generative AI-based tools and solutions leaped to more than five times its 2022 level, with venture capital firms playing a substantial role in financing it. With the global Generative AI market expected to reach US$200.73 billion by 2032, some are framing Generative AI as the fuel for the next technological “arms race”.
Salient human rights risks linked to generative AI
Generative AI can cause physical, psychological and reputational harm, social stigmatization, economic instability, and loss of autonomy or opportunities, and can further entrench systemic discrimination against individuals and communities. The report examines the most salient human rights risks, with examples of each, though the list is not exhaustive:
- Abuses of the right to privacy
- Perpetuation of algorithmic bias and stereotypes
- Amplification of misinformation and disinformation
- Jeopardizing physical safety, mental health and human dignity
- Undermining labor rights
Key findings
This analysis revealed that the majority of leading VC firms and start-up accelerators are ignoring their responsibility to respect human rights when investing in Generative AI start-ups:
- Only three of the 12 firms make a public commitment to considering responsible technology in their investments;
- Only one of the 12 firms makes an explicit commitment to human rights;
- Only one of the 12 firms states that it conducts due diligence on human rights-related issues when deciding whether to invest in companies; and
- Only one of the 12 firms currently supports its portfolio companies on responsible technology issues.
The report calls for VC firms to adhere to the UNGPs, which stipulate that both investors and investee companies must take proactive and ongoing steps to identify and respond to Generative AI’s potential or actual human rights impacts. This entails undertaking human rights due diligence to identify, prevent, mitigate and account for how they address their human rights impacts.
“It is, of course, possible to see the great potential of new technologies when they are designed using a human-centric approach. Unfortunately, the story of Generative AI thus far has largely been one of maximising profits at the expense of people, especially marginalised groups. But it isn’t too late for investors, companies, governments and rights-holders to take back control over how we want this technology to be designed, developed and deployed. There are certain decisions that we should not allow Generative AI to make for us.”
Meredith Veit, Tech & Human Rights Researcher, Business & Human Rights Resource Centre
Further reading
- Failing Grade: Amnesty International report exploring how university investment offices often fail to conduct human rights due diligence when investing in venture capital funds
- Technology Company Dashboard: Explore our data on 80+ tech companies
- Navigating the surveillance technology ecosystem: A human rights due diligence guide for investors