
Article

11 Oct 2023

Author:
Natasha Lomas, TechCrunch

EU warns Meta over illegal content & disinformation targeting the Israel-Hamas war

"EU also warns Meta over illegal content, disinfo targeting Israel-Hamas war", 11 October 2023

The European Union has expanded its warning about illegal content and disinformation targeting the Israel-Hamas war circulating on social media platforms to Meta, the parent company of Facebook and Instagram.

...the bloc’s internal market commissioner, Thierry Breton, published an urgent letter to Elon Musk, owner of X (formerly Twitter) — raising concerns the platform is being used to disseminate illegal content and spread potentially harmful disinformation in the wake of Saturday’s surprise attacks on Israel by Hamas terrorists based in the Gaza Strip.

Breton’s letter to Meta’s founder Mark Zuckerberg, which he’s also made public via a post on X, is a little less urgent in tone than yesterday’s missive to Musk. But the social media giant has also been given 24 hours to respond to the EU’s concerns about the same sorts of content risks.

Meta was contacted for a response to Breton’s warning, and to ask about the steps it’s taking to ensure it can respond effectively to content risks related to violent events in Israel and Gaza, but at the time of writing it had not responded.

Update: Meta has now emailed us this statement, attributed to a company spokesperson:

After the terrorist attacks by Hamas on Israel on Saturday, we quickly established a special operations center staffed with experts, including fluent Hebrew and Arabic speakers, to closely monitor and respond to this rapidly evolving situation. Our teams are working around the clock to keep our platforms safe, take action on content that violates our policies or local law, and coordinate with third-party fact checkers in the region to limit the spread of misinformation. We’ll continue this work as this conflict unfolds.

Update 2: Meta has also published a blog post with more details of measures it’s taking in response to risks arising out of the Israel-Hamas war, such as hashtag blocking and imposing restrictions on Facebook Live and Instagram Live for people who have previously violated certain policies.

Since... bloody attacks, there have been reports of graphic videos being uploaded to Meta platforms. In one report on Israeli television, which has been recirculating in a clip shared to social media, a woman recounted how she and her family had learned that her grandmother had been murdered by Hamas terrorists after they filmed her dead body with her phone and uploaded the footage to her Facebook account.

Eye on election disinformation

The bloc’s letter to Meta is not solely focused on risks arising from the Israel-Hamas war. It also reveals the Commission is worried Meta is not doing enough to deal with disinformation targeting European elections.

The DSA, a pan-EU content moderation-focused regulation, applies the deepest obligations and governance controls to larger platforms (so-called Very Large Online Platforms, or VLOPs) — 19 of which were designated by the Commission back in April, including Meta-owned Facebook and Instagram — with extra requirements to assess and mitigate systemic risks attached to the use of algorithms and AIs. This means VLOPs are expected to be proactive about identifying and mitigating systemic risks such as political disinformation, in addition to swiftly acting on reports of illegal content such as terrorism.

Penalties for a confirmed breach of the regime include fines of up to 6% of global annual turnover — which, in Meta’s case, could mean a fine of several billion dollars.

Political deepfakes have emerged as a particular area of concern for the Commission, as developments in generative AI have made it cheaper and easier to produce this type of disinformation.
