Opinion

14 Jan 2025

Authors:
Phil Bloomer,
Gayatri Khandhadai

Reckless gamble: Meta’s decision on disinformation endangers its 4 billion users

Shutterstock (licensed)

Meta, the US$1.66 trillion tech giant, could hardly have chosen a worse moment to announce, with pride, that it is ‘getting rid’ of its fact-checking programme, starting in the USA. With disinformation spreading like wildfire, now turbo-charged by AI, fact-checking by this social media giant has never been more necessary. This hasty announcement, and the decision to remove protections on topics such as gender and immigration, exposes close to 4 billion Meta users to heightened manipulation and harm, and has been denounced by Meta’s own employees and fact-checkers. In an atmosphere of worsening polarisation, conflict and authoritarianism, this move gives the ruthless and unscrupulous a free pass to lie and vilify with impunity.

‘Evil thrives when good people do nothing’, to adapt a quote – and ‘nothing’ is what Meta proposes to do. Instead, it will outsource this work to its users through ‘community notes’. The danger of this approach is that it effectively provides no protection against disinformation: it is instead primarily geared at driving user engagement. Concerned and community-minded users will go up against the well-resourced and sophisticated bots and algorithms of vested interests that will be programmed to silence their critics with venal tactics.

In the wake of Mark Zuckerberg’s bombshell, Google immediately reported an explosion in searches for how to delete or cancel Facebook, Instagram and Threads accounts. A move to leave Meta likely reflects users’ fear of what these platforms may become, and damage to their mental health. This dynamic is not without precedent. It mirrors the 70%+ drop in the valuation of X (Twitter) in the last two years as users, and advertisers, have fled the platform. Unsurprisingly, Meta’s own shares fell by 2% on the day of the latest announcement.

The tragedy is that Meta introduced fact-checking over the last decade in response to sustained evidence that the platform was exploited to proliferate hate speech and disinformation, causing real harm. For instance, after the 6 January attack on the US Capitol in 2021, Facebook’s own report warned that the narratives of its election-denying Groups may have had “substantial negative impacts including contributing materially to the Capitol riot and potentially reducing collective civic engagement and social cohesion in the years to come.” Equally, in 2018, UN investigators issued their report on the situation in Myanmar, where Facebook was found to have played a determining role in building hatred against the Rohingya. And just last year, Facebook was being used by traffickers to lure prospective migrant workers into scams that falsely promised jobs in Europe. As these examples show, disinformation has real-life consequences.

Facebook learnt from these mistakes, and Zuckerberg and his team have prided themselves on their ethical leadership before several congressional hearings in the USA and many other countries. His latest announcement reverses course. Zuckerberg’s sudden conversion to ‘freedom of expression without protections’ has been denounced as a ‘smokescreen’ for political expediency and commercial gain at the expense of users.

Perhaps the most chilling aspect of Zuckerberg’s announcement was his appeal to the incoming US administration to help dismantle the growing regulatory framework in Europe and around the world designed to protect users. In his subsequent appearance on Joe Rogan’s podcast, Zuckerberg insisted the fines Meta has been slapped with by the EU for violating laws and rights amount to ‘tariffs’ and an attack on American innovation. After years of enjoying a deregulated ‘Wild West’ for tech, and now facing the punitive costs of regulation (EU fines against Meta have included US$800 million for anti-trust breaches and US$1.2 billion for data breaches), it is unsurprising that social media giants want to dismantle these new protective frameworks in Europe and elsewhere. But that is all the more reason for Europe and other jurisdictions to stand firm.

Our movement has a major challenge: not only to hold onto the new regulation being introduced around the world to protect social media users – particularly children and other vulnerable groups, including human rights defenders – but also to advance it further. These technologies are being rolled out, at warp speed, by many companies unwilling to fulfil their responsibilities under international and national laws. Regulators cannot keep up. In these conditions, human rights and environmental due diligence is a highly effective approach. Companies must demonstrate that they have assessed the risks their new software or hardware may generate for people and planet before it is released on our world, and that they have taken all reasonable steps to mitigate those risks, or face exposure to civil liability. This changes the calculus of risk in social media boardrooms: care for the human rights of users must sit alongside extraordinary profits, on pain of large fines.

Let’s hope our legislators and courts have the resolve, and the vocal support of concerned publics and investors, to uphold our collective protection.

Phil Bloomer, Executive Director

Gayatri Khandhadai, Head of Technology and Human Rights
