

Article

August 29, 2024

Author:
Miranda Sissons, Meta

Meta replies to UN Working Group on business & human rights letter about its operations in OPT

August 29, 2024

Mr Chairman, Special Rapporteurs:

Thank you for your joint letter (Ref.: AL OTH 20/2024) of April 18, in which you raise a number of questions about Meta’s response to events in Israel and Palestine since October 7.

...

In the immediate aftermath of the October 7 terrorist attacks, Meta implemented crisis response measures, including a dedicated 24/7 cross-functional crisis response team.

In doing so, we were guided by core human rights principles, including respect for the right to life and security of person; protection of the dignity of victims; non-discrimination; and freedom of expression. We looked to the UN Guiding Principles on Business and Human Rights to prioritize and mitigate the most salient human rights risks. We also used international humanitarian law as an important reference. We publicly shared response details in our newsroom on October 13 (with updates on October 18 and December 5) in English, Arabic, and Hebrew.

During the ongoing conflict in Israel and Palestine, there has been a surge in related content on our platforms: this includes large volumes of non-violating content discussing and raising awareness of events, but also content that violates our policies on hate speech, violence and incitement, bullying and harassment, dangerous organizations and individuals, and violent and graphic content. While our platforms are designed to support voice, we also seek to mitigate risks that may impact the safety and well-being of our community, in line with Meta’s responsibility to prevent or mitigate its most salient human rights risks.

Balancing safety and voice is difficult in peaceful contexts, and even more so in conflict situations, especially those involving sanctioned entities such as Hamas.

Our response is guided by our prior crisis experience, as well as by recommendations made in the independent human rights due diligence on Israel and Palestine by Business for Social Responsibility (BSR), which we commissioned and disclosed in 2022 (and on whose implementation we shared an update in September 2023).

We also have used our Crisis Policy Protocol, first launched in 2022 after extensive consultation, to guide our actions.

Our Human Rights Team has been closely involved in Meta’s response and has conducted ongoing, integrated human rights due diligence throughout, in line with our Corporate Human Rights Policy and the UN Guiding Principles on Business and Human Rights. We plan to include information on this work, as well as on our continuing efforts to address the recommendations made by BSR, in our forthcoming annual human rights report.

A major priority for us is ensuring that we do not amplify harmful and inflammatory content, which is present in all relevant markets.

During critical moments with an elevated risk of violence or other severe human rights harms, we may adapt our standard approach, which balances keeping people safe with enabling them to express themselves, and initiate temporary measures that provide additional protection.

We closely monitor offline events and track platform trends: for example, how much violating content people are seeing on Facebook or Instagram and whether we’re starting to see new forms of abusive behavior that warrant changes in our response.
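By way of illustration, the following minimal sketch shows how one such trend metric, the prevalence of violating content among sampled content views, could be computed. The data model, labels, and function names are hypothetical assumptions for illustration and do not reflect our internal measurement tooling.

```python
# Hypothetical sketch: estimating the "prevalence" of violating content,
# i.e. the share of sampled content views whose content violates policy.
# The data model and function names are illustrative assumptions.

def violating_prevalence(sampled_views: list) -> float:
    """Fraction of sampled views labeled as violating."""
    if not sampled_views:
        return 0.0
    violating = sum(1 for view in sampled_views if view["label"] == "violating")
    return violating / len(sampled_views)

# Example: 3 violating views in a sample of 1,000 -> 0.3% prevalence.
sample = [{"label": "violating"}] * 3 + [{"label": "benign"}] * 997
print(f"Prevalence: {violating_prevalence(sample):.2%}")  # Prevalence: 0.30%
```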

As we’ve detailed in our blog post describing our response to the conflict, we have adopted a number of temporary product and policy measures to help keep people safe and mitigate salient human rights risks.

We don’t implement such temporary measures lightly: we know they can have unintended consequences, such as inadvertently limiting harmless or even helpful content, and many organizations have raised exactly these concerns with us.

That’s why we seek to ensure that the steps we take are time-limited and proportionate to the risks as we understand them. That’s also why our Human Rights Team is embedded within our crisis response process, carrying out integrated human rights due diligence that informs our approach.

Some examples of the safety measures we implemented include:

● Changes to how we recommend unconnected content...

● Adjustments to confidence thresholds for automatically actioning content (illustrated in the sketch after this list)...

● Blocking certain hashtags from search...

● Product changes to address unwanted and problematic comments...

● Launching the Lock Your Profile tool in the region...
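To make the confidence-threshold adjustment above concrete, here is a minimal, hypothetical sketch of how a crisis mode could lower the score at which content is automatically actioned. The threshold values, the crisis flag, and the function are illustrative assumptions, not our actual enforcement systems; the trade-off they encode, accepting more false positives in exchange for faster mitigation, is precisely why such changes are kept temporary.

```python
# Hypothetical sketch: lowering the confidence threshold at which content
# is automatically actioned during a crisis. Threshold values and the
# crisis_mode flag are illustrative assumptions only.

DEFAULT_THRESHOLD = 0.95  # normal operations: act only on high-confidence scores
CRISIS_THRESHOLD = 0.80   # crisis mode: accept more false positives to reduce harm

def should_auto_action(violation_score: float, crisis_mode: bool) -> bool:
    """Return True if a classifier score is high enough to auto-action."""
    threshold = CRISIS_THRESHOLD if crisis_mode else DEFAULT_THRESHOLD
    return violation_score >= threshold

# A score of 0.9 is actioned only while crisis measures are active.
print(should_auto_action(0.9, crisis_mode=False))  # False
print(should_auto_action(0.9, crisis_mode=True))   # True
```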

We also provided tools that made it easier for people to bulk delete comments on their posts, and we stopped showing the first one or two comments on a post automatically in Feed.

At the same time, we also made other changes specifically aimed at protecting voice. For example, in response to a large spike in usage of our products, we temporarily made some automated anti-spam rate limits more permissive, reducing the risk of restricting legitimate users.
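As a rough illustration, the sketch below shows a simple token-bucket rate limiter whose capacity can be raised temporarily during a usage spike. The mechanism and all parameters are hypothetical and greatly simplified relative to production anti-spam systems.

```python
# Hypothetical sketch: a token-bucket rate limiter that can be made more
# permissive at runtime. All parameters are illustrative assumptions.

import time

class TokenBucket:
    """Simple token-bucket limiter; capacity can be tuned at runtime."""

    def __init__(self, capacity: float, refill_per_sec: float):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Consume one token if available; otherwise reject the action."""
        now = time.monotonic()
        elapsed = now - self.last
        self.tokens = min(self.capacity, self.tokens + elapsed * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

# Normal limit: bursts of 5 actions. During a legitimate usage spike,
# temporarily raise the capacity so real users are not throttled.
limiter = TokenBucket(capacity=5, refill_per_sec=1.0)
limiter.capacity = 20  # temporary, more permissive limit
```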

For some policy areas, like the most graphic types of violent and graphic content, we’re removing violating content without applying strikes—the penalties for violations that result in escalating account restrictions as they accumulate—to ensure we’re not overly penalizing or restricting users who are trying to raise awareness of the conflict’s impacts.
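The following minimal sketch illustrates the idea of removing content without applying a strike for selected policy areas; the policy names, account model, and return values are illustrative assumptions rather than our actual enforcement logic.

```python
# Hypothetical sketch: remove violating content but skip the strike for
# selected policy areas, so accounts sharing graphic content to raise
# awareness do not accumulate escalating restrictions. Names are assumed.

NO_STRIKE_POLICIES = {"violent_and_graphic_content"}  # temporary carve-out

def enforce(account_strikes: int, policy: str) -> tuple:
    """Remove the content; add a strike only outside the carve-out."""
    if policy in NO_STRIKE_POLICIES:
        return ("removed_without_strike", account_strikes)
    return ("removed_with_strike", account_strikes + 1)

print(enforce(2, "violent_and_graphic_content"))  # ('removed_without_strike', 2)
print(enforce(2, "hate_speech"))                  # ('removed_with_strike', 3)
```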

Separately, we globally limit recommendations of unconnected content related to politics and social issues, including conflict, across Facebook and Instagram.

Our Community Standards prohibit a wide range of potentially harmful content, including violence and incitement, hate speech, dangerous organizations and individuals, and violent and graphic content. ...

Our policies are designed to address content that may amount to incitement to violence, hatred, or genocide. We have adapted the principles of the Rabat Plan of Action into actionable content policy tools, including escalation-based frameworks to evaluate speech attacking concepts (as opposed to people) and content involving state threats to use force.

In rare cases, we allow content that may violate our policies if it's newsworthy and if keeping it visible is in the public interest. We only do this after conducting a thorough review that weighs the public interest against the risk of harm. We look to international human rights standards, as reflected in our Corporate Human Rights Policy, to help make these judgments. For content we allow that may be sensitive or disturbing, we include a warning screen. In these cases, we can also limit the ability to view the content to adults aged 18 and older.
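The sketch below illustrates, in highly simplified form, how a newsworthiness decision could combine a public-interest test with a warning screen and an adult age gate. In practice this is a qualitative, case-by-case human review; the numeric scores and data structures are purely illustrative assumptions.

```python
# Hypothetical sketch of a newsworthiness allowance: violating but
# newsworthy content may stay up behind a warning screen, optionally
# restricted to adults. Inputs and the decision rule are assumptions;
# the real review is a qualitative, case-by-case human judgment.

from dataclasses import dataclass

@dataclass
class Decision:
    keep_up: bool
    warning_screen: bool = False
    adults_only: bool = False

def newsworthiness_review(public_interest: float, risk_of_harm: float,
                          sensitive: bool) -> Decision:
    """Keep content visible only when public interest outweighs harm."""
    if public_interest <= risk_of_harm:
        return Decision(keep_up=False)
    return Decision(keep_up=True, warning_screen=sensitive, adults_only=sensitive)

print(newsworthiness_review(public_interest=0.8, risk_of_harm=0.3, sensitive=True))
```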

We removed multiple pieces of content from the Israeli group "Tzav9" for organizing efforts to blockade humanitarian aid trucks, which violated our Coordinating Harm policy; we also disabled the group's Facebook and Instagram accounts. These actions were in line with international humanitarian law and the need to allow and facilitate the rapid and unimpeded passage of humanitarian relief for civilians in need.

Overall, our response has benefited significantly from our prior crisis experience, as well as from the human rights due diligence we conducted and disclosed in 2022, whose recommendations we are continuing to implement (see our September 2023 update here).

We leverage a combination of technology and human review teams to detect and enforce on content that violates our policies. The technologies we use include a wide range of both language-specific and language-agnostic classifiers, including classifiers that address policy-violating content in both Arabic and Hebrew.

Based on recommendations emerging from the independent Israel/Palestine human rights due diligence we conducted in 2022, we have taken a number of specific steps to improve our Arabic and Hebrew language classifiers. These include developing and launching a hostile speech classifier for Hebrew and expanding language identification for Arabic to recognize content in different Arabic dialects. We shared details on this work in our September 2023 Israel/Palestine Human Rights Due Diligence update.
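As a simplified illustration of how language-specific and language-agnostic classifiers can work together, the sketch below routes text to a dedicated model where one exists (for example, Hebrew) and otherwise falls back to a language-agnostic model. The models, scores, and interface are hypothetical stand-ins, not our production stack.

```python
# Hypothetical sketch: prefer a language-specific classifier when one is
# available, otherwise fall back to a language-agnostic model. Model names
# and the scoring interface are illustrative assumptions.

def score_hostile_speech(text: str, language: str,
                         specific_models: dict, agnostic_model) -> float:
    """Return a violation score from the best available classifier."""
    model = specific_models.get(language, agnostic_model)
    return model(text)

# Toy stand-ins for trained models, keyed by ISO language code.
models = {"he": lambda t: 0.9, "ar": lambda t: 0.7}
agnostic = lambda t: 0.5

print(score_hostile_speech("...", "he", models, agnostic))  # uses Hebrew model
print(score_hostile_speech("...", "fr", models, agnostic))  # falls back
```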

Throughout the crisis, we have been in touch with experts in human rights and international humanitarian law to ensure that we are taking account of their expertise.

You also ask about responses to government takedown requests. We want to be clear: we do not remove content simply because a government entity (or anyone else) requests it. When we receive a content takedown request from a government entity, we review it following a consistent global process.

...

When we do restrict content in specific jurisdictions on the basis of local law, we’re transparent about our actions: we directly notify the person who posted the content as well as anyone who tries to view it but is blocked from doing so, and we also publish data on the restriction in our biannual Content Restrictions Report. Our most recent report for the second half of 2023 is available here.
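The sketch below illustrates, under assumed and simplified data structures, how a jurisdiction-specific restriction could be applied, surfaced to the poster, and logged for a transparency report. None of the names or structures reflect our actual systems.

```python
# Hypothetical sketch: restrict content only in the jurisdiction whose
# local law requires it, notify affected users, and log the action for a
# biannual transparency report. All names are illustrative assumptions.

def notify_poster(content_id: str, jurisdiction: str) -> None:
    print(f"Notified poster of {content_id}: restricted in {jurisdiction}")

def apply_local_restriction(content_id: str, jurisdiction: str,
                            restrictions: dict, report_log: list) -> None:
    """Geo-block content in one jurisdiction and record it for reporting."""
    restrictions.setdefault(content_id, set()).add(jurisdiction)
    notify_poster(content_id, jurisdiction)
    report_log.append({"content": content_id, "country": jurisdiction})

def is_visible(content_id: str, viewer_country: str, restrictions: dict) -> bool:
    """Content remains visible everywhere except restricted jurisdictions."""
    return viewer_country not in restrictions.get(content_id, set())

restrictions, report_log = {}, []
apply_local_restriction("post_123", "XX", restrictions, report_log)
print(is_visible("post_123", "XX", restrictions))  # False (blocked locally)
print(is_visible("post_123", "YY", restrictions))  # True (visible elsewhere)
```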

As we shared in our September 2023 Israel/Palestine Human Rights Due Diligence update and our most recent Quarterly Update on the Oversight Board, we are still in the process of developing consistent and reliable systems for gathering metrics on the number of pieces of content removed under the Community Standards as a result of government requests. We continue to evaluate approaches to building the internal data logging infrastructure needed to report this information publicly across the diversity of request formats we receive, but we expect this to be a complex, long-term project.
