Meta's response to ITUC allegations of corporate undermining of democracy
Meta was one of seven companies identified by the ITUC in 2024 as corporate underminers of democracy. More information on the ITUC's allegations against Meta can be read here.
Meta response to the Business & Human Rights Resource Centre, 6 November 2024:
"...
Thank you for your letter of October 25, asking for our response to the inclusion of Meta in the ITUC’s list of corporate underminers, as summarized in a September 23 article in The Guardian. We believe it’s important to communicate with rights holders about our human rights work in an open and transparent way.
We note the ITUC’s view that the root cause of the crisis facing democracy is “the prevailing neoliberal, corporate-dominated global economy.”
Meta is committed to respecting human rights as set out in the UN Guiding Principles on Business and Human Rights (UNGPs). Human rights also guide our decisions when developing responsible innovation practices, including when building, testing and deploying products and services enabled by Artificial Intelligence (AI).
...
For this reason, we prioritize our work based on our 2022 Comprehensive Human Rights Salient Risk Assessment, which analyzed impacts across all internationally recognized human rights. The analysis prioritized the eight most salient risks using UN Guiding Principles on Business and Human Rights criteria, summarized in Attachment A.
....
Our prioritization is based on the guidance of the UN Guiding Principles on Business and Human Rights and our Corporate Human Rights Policy, and enriched by stakeholder engagement organized around multiple dimensions of diversity. For example, prioritization is necessary to enable the use of our products and services by vulnerable categories of individuals — such as youth — within the broad and diverse overall user population.
...
With regard to the topics raised in the ITUC campaign materials, you may find the following resources useful:
On dangerous organizations:
Our Dangerous Organizations policies and definitions are part of our Community Standards and are public. We remove groups and accounts that violate our policies. This is an adversarial space, where actors constantly try to find new ways around our policies, which is why we keep investing heavily in people, technology, research and partnerships to keep our platforms safe.
On hate in ads:
Meta does not profit from hate. Billions of people use Facebook and Instagram because they have good experiences — they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it.
Despite our ongoing investments, we know that there will be examples of things we miss or we take down in error, as both machines and people make mistakes. That's why ads can be reviewed multiple times, including once they go live.
On algorithms:
Please see details of how our AI systems rank content here: How AI Influences What You See on Facebook and Instagram | Meta
On lobbying:
Our US lobbying disclosures are available at https://about.meta.com/facebook-political-engagement/, and our entry in the EU Disclosure Register can be found at: Organisation detail - European Union
On safeguarding elections:
We invest a huge amount of effort and resources to help protect elections online — not just during election periods but at all times.
We also understand that we have a responsibility to make our technologies safe and secure, so people can express their unique voice, be heard and exchange diverging ideas and information. Please see detailed information here:
https://www.facebook.com/business/m/election-integrity and in our human rights report.
On the Canadian Online News Act:
In October 2023 Meta announced that the only way to reasonably comply with this legislation was to end news availability for people in Canada. Please see further details here: Changes to News Availability on Our Platforms in Canada | Meta
On content moderator well-being:
We take the support of content reviewers seriously, which is why we have clear contracts with each of the companies that help review content on Facebook and Instagram that detail our expectations in a number of areas, including counseling, training and other support.
We provide technical solutions to limit exposure to graphic material as much as possible. Those who review content on Facebook and Instagram are able to customize the content review tool, so that graphic content appears totally blurred, in black and white, blurred for the first frame, played without sound, or opted out of auto-play.
We require all of the companies we work with to provide 24/7 on-site support with trained practitioners, an on-call service, and access to private healthcare from the first day of employment.
They are also contractually obliged to pay their employees who review content on Facebook and Instagram above the industry standard in the markets in which they operate.
On government censorship and user data:
On our mitigations related to overbroad government requests affecting privacy and freedom of expression: please see our transparency center on content restrictions based on local law and government requests for user data, as well as pages 45-48 of our most recent human rights report.
We’re also sharing the infographic “What’s the Life Cycle of a Government Request”, as Attachment B, in case it might be useful. It’s also reproduced on page 48 of our most recent human rights report.
....