"Meta censors pro-Palestinian views on a global scale, report claims", 21 December 2023
Meta has engaged in a “systemic and global” censorship of pro-Palestinian content since the outbreak of the Israel-Gaza war on 7 October, according to a new report from Human Rights Watch (HRW).
In a scathing 51-page report, the organization documented and reviewed more than a thousand reported instances of Meta removing content and suspending or permanently banning accounts on Facebook and Instagram.
Examples it cites include content originating from more than 60 countries, mostly in English, and all in “peaceful support of Palestine, expressed in diverse ways”. Even HRW’s own posts seeking examples of online censorship were flagged as spam, the report said.
“Censorship of content related to Palestine on Instagram and Facebook is systemic and global [and] Meta’s inconsistent enforcement of its own policies led to the erroneous removal of content about Palestine,” the group said in the report, citing “erroneous implementation, overreliance on automated tools to moderate content, and undue government influence over content removals” as the roots of the problem.
In a statement to the Guardian, Meta acknowledged it makes errors that are “frustrating” for people, but said that “the implication that we deliberately and systemically suppress a particular voice is false. Claiming that 1,000 examples, out of the enormous amount of content posted about the conflict, are proof of ‘systemic censorship’ may make for a good headline, but that doesn’t make the claim any less misleading.”
Meta said it was the only company in the world to have publicly released human rights due diligence on issues related to Israel and Palestine.
Part of the following timelines
Tech companies criticized for their complicity & bias against Palestinians regarding the Gaza conflict
According to a recent Human Rights Watch (HRW) report, Meta has allegedly censored pro-Palestinian views systematically and globally. The report documents over a thousand instances of content removal from more than 60 countries, highlighting Meta's inconsistent policy enforcement and reliance on automated tools.
US Senator Elizabeth Warren has demanded answers from Mark Zuckerberg regarding allegations of Meta censoring pro-Palestinian content. Citing concerns from human rights groups and media reports, Warren seeks clarification on moderation practices and recent incidents, with a response deadline of January 5, 2024.
Prominent disinformation researcher Dr. Joan Donovan accuses Meta of influencing Harvard to terminate her research project on media manipulation. Donovan alleges Harvard bowed to pressure after a Chan Zuckerberg Initiative donation, impacting her work and academic freedom.
Meta's Oversight Board will address its first emergency cases involving posts on the Israel-Palestine ongoing conflict. The board will evaluate Meta's crisis response and the implementation of the board's more recent recommendations and the company's commitments. The decisions expected within 30 days may prompt Meta to reassess its approach to handling conflicts on its platform.
According to a Wired report, Palestinians are excluded from Google's online economy. The YouTube Partner Program is unavailable in Palestine, hindering economic opportunities and sparking allegations of discrimination.
7amleh, a civil society organization, reportedly tested Meta's content moderation standards on Facebook, revealing the alleged approval of paid ads containing hate speech and incitement of violence against Palestinians. The failure of Facebook's automated moderation process raises concerns about Meta's ability to curb harmful content targeting Palestinians.
A new report from the Center for Countering Digital Hate (CCDH) reveals that X is failing to adequately moderate antisemitic and Islamophobic hate speech amid the Israel-Palestine conflict. The study indicates that a significant portion of hate speech remains online, including posts inciting violence and promoting conspiracy theories.
According to a recent article by Mona Shtaya, Meta's platforms allegedly normalize anti-Palestinian racism, contributing to dehumanization and violence. She emphasizes that Meta's reluctance to safeguard its users perpetuates anti-Palestinian racism.
Adobe faces criticism for selling AI-generated images that may contribute to misinformation about the ongoing conflict in Israel and the Occupied Palestinian Territories. AI-generated images should be clearly labeled as such, but reporters have found that this is not always happening, further polluting an already murky online information environment.
According to a recent report by The Guardian, WhatsApp's AI sticker generator has produced racist images of Palestinians, including gun-wielding children, in response to searches for 'Palestinian' or 'Palestine', in contrast to the results returned for comparable searches. Australian Senator Mehreen Faruqi is calling for an investigation into the racist and Islamophobic imagery produced by Meta.
7amleh launched a report on the Palestinian digital rights situation since October 7, 2023. The report emphasizes the urgent need for action by tech companies and international bodies, highlights disturbing trends observed on online platforms due to the communications blackout in Gaza, and documents violations stemming from Israeli government actions taken on the basis of social media activity.
Following the anti-Semitic airport rampage in Russia's Dagestan region, Telegram announced a ban on channels that incite anti-Semitic riots in Dagestan.
Morning Dagestan, a Telegram channel, has allegedly been linked to inciting the anti-Semitic airport rampage in Russia. Crowds reportedly followed its instructions, resulting in injuries and a breach of security at the airport.
A recent Reuters report reveals that graphic pro-Israel ads have appeared in video games across Europe. According to the report, the ad videos, which carried footage of rocket attacks, a fiery explosion, and masked gunmen, have been shown to gamers, including several children.
As reported by The Guardian, Meta faces allegations of locking a prominent pro-Palestinian Instagram account over 'security concerns'. The digital rights group 7amleh claims to have documented 238 recent censorship cases across Meta's platforms and alleges a troubling pattern of silencing Palestinian voices while permitting hate speech and incitement to violence against Palestinians.
According to a study from NewsGuard, 74% of the most viral posts promoting misinformation about the Israel-Hamas conflict during its first week were spread by X Premium accounts.
According to a recent article on the Tahrir Institute for Middle East Policy (TIMEP) website, social media companies have responded unevenly in protecting users during the conflict in Israel and the Occupied Palestinian Territories, falling short of the measures taken for users in the Global North. TIMEP urges stakeholders to press the companies for more equitable digital protection measures and greater investment in safeguarding users.
At the request of the Israel Defense Forces, Apple has disabled live traffic data in Israel and Gaza, following similar actions by Google and Waze. The move eliminates traffic-tracking options in the region, raising concerns about the implications for grassroots movements.
Homeland security experts in the United States are increasingly concerned that extremist ideologies are spreading through hateful online rhetoric during the Israel-Hamas war. The rise in hate speech and incitement has put leaders of Arab and Jewish communities in the U.S. on high alert.
Digital and human rights organisations joined the open call for an immediate ceasefire to end the ongoing bloodshed in Gaza, to halt a humanitarian catastrophe, and to prevent further loss of innocent lives in Palestine, Israel, Lebanon, and beyond. The organisations further called on governments, international institutions, tech companies, and other international stakeholders to take responsibility for their actions.
Meta has apologised after inserting the word “terrorist” into the profile bios of some Palestinian Instagram users, in what the company says was a bug in auto-translation.
The Institute for Human Rights and Business (IHRB) released a commentary piece exploring the responsibilities that media companies, both traditional and social, bear during times of conflict. According to IHRB, media companies have human rights responsibilities that require heightened due diligence in this context to ensure information accuracy and protect vulnerable populations.
Users allege that posts and accounts supportive of Palestinians, including media accounts, are being suppressed, removed, or hidden on Meta-owned Facebook and Instagram. The company responded that some posts were affected by an accidental bug or technical difficulties, and that others are temporarily suppressed as it enacts measures to deal with a high volume of reports of graphic content.
Instagram and Facebook users allege censorship of an image of the Gaza hospital bombing, despite past policies supporting newsworthy content. As reported by The Intercept, users claim the content was suppressed or restricted on nudity or sexual-content grounds.
Following the escalation of the recent conflict, Israel intensified its online campaign, using graphic content in social media ads on X and YouTube to shape global opinion.
The Electronic Frontier Foundation, a civil society organisation, called on tech companies to better handle misinformation during conflict and set out specific recommendations in the context of the conflict in Gaza.
The Arab Center Washington DC warns of an alarming surge in disinformation and hate speech on social media platforms. According to the civil society organization, this content fuels war crimes in Gaza and the silencing of Palestinian voices.
According to reports, disinformation has proliferated on the social media platforms X and TikTok since the militant Islamist group Hamas initiated its attack on Israel. The civil society organization 7amleh argues that this phenomenon results in Palestinian narratives being censored or not heard and leads to further calls for violence.
Investigative human rights reports allege Meta censored Palestinian content amid current siege and bombardment of Gaza and previous escalations; incl. co. response
Meta’s content moderation policies and systems have increasingly silenced voices in support of Palestine on Instagram and Facebook in the wake of the hostilities between Israeli forces and Palestinian armed groups, Human Rights Watch said in a report.
Human Rights Watch found that Facebook removed content by Palestinians and their supporters, including about human rights abuses carried out in Israel and Palestine during the May 2021 hostilities, and that the company’s acknowledgment of errors and attempts to correct them are insufficient.