EU: Commission releases new recommendations for tech companies to combat extremist content online
Author: Samuel Gibbs, The Guardian (UK)
In the wake of the multiple sexual harassment and abuse scandals across the globe, Facebook has been suspending women for “hate speech” against men after posting variations of the phrase “men are scum”.
Despite Facebook’s chief operating officer Sheryl Sandberg warning of a potential backlash against women as scandals rock companies and political institutions, the social network continues to ban women speaking out against men as a group...
Facebook says that threats and hate speech directed towards a protected group violate its community standards and therefore are removed. The social network told the Daily Beast that “men are scum” was a threat and therefore should be removed...
A Facebook spokesperson told the Guardian: “We understand how important it is for victims of harassment to be able to share their stories and for people to express anger and opinions about harassment — we allow those discussions on Facebook. We draw the line when people attack others simply on the basis of their gender.”
Author: Sam Levin, The Guardian (UK)
Google is hiring thousands of new moderators after facing widespread criticism for allowing child abuse videos and other violent and offensive content to flourish on YouTube.
...The news from YouTube’s CEO, Susan Wojcicki, followed a steady stream of negative press surrounding the site’s role in spreading harassing videos, misinformation, hate speech and content that is harmful to children.
Wojcicki said that in addition to an increase in human moderators, YouTube is continuing to develop advanced machine-learning technology to automatically flag problematic content for removal. The company said its new efforts to protect children from dangerous and abusive content and block hate speech on the site were modeled after the company’s ongoing work to fight violent extremist content...
The statement also said YouTube was reforming its advertising policies, saying it would apply stricter criteria, conduct more manual curation and expand its team of ad reviewers.
Author: Samuel Gibbs, The Guardian (UK)
The European Commission has warned Facebook, Google, YouTube, Twitter and other internet technology companies that they must do more to stem the spread of extremist content or face legislation.
Growing pressure from European governments has pushed companies to make progress, significantly boosting the resources they dedicate to taking down extremist content as quickly as possible.
But... [i]f the EU is not satisfied with further progress on the removal of extremist content by technology companies, which are primarily based in the US, it said it will come forward with legislation next year to force the issue...
The Global Internet Forum – a group of technology companies, including Microsoft, Facebook, Twitter and YouTube, that pools resources to combat extremist content – said that progress had been made...
Author: Peter Teffer, EU Observer
The European Commission adopted a legal text on Thursday (1 March) which gives online platforms like Facebook and Twitter guidelines on when and how to take down illegal content like hate speech or terrorist propaganda...
The text, presented by no fewer than four EU commissioners, can be seen as something of a last chance for internet companies to make self-regulation work.
Security commissioner Julian King said the commission would monitor how the recommendation will play out in practice, and that the recommendation was about "sending a clear signal" to the internet companies...
The commissioners also stressed that the recommendation contained safeguards for protecting freedom of speech.
Users who have posted something on a platform that they believe is legal should be able to appeal the company's decision to take down their content.
Author: Thuy Ong, The Verge
The European Commission has sent out expansive guidelines aimed at Facebook, Google, and other tech companies on removing terrorist and other illegal content online. The commission outlined recommendations, which apply to all forms of illegal content, including terrorist media, child sexual abuse, counterfeit products, copyright infringement, and material that incites hatred and violence. The recommendations also specify clearer procedures, more efficient tools, and stronger safeguards including human oversight and verification, so something that’s incorrectly flagged can be restored...
The commission is suggesting these operational measures as a soft law before it decides whether or not to propose legislation. The recommendations are non-binding, but they can still be used as legal references in court...
Facebook previously said it wants to be a “hostile place” for terrorists and is using a mix of AI and human intervention to root out terrorist content. YouTube also announced new steps last year including automated systems and additional flaggers to fight extremism on its platform. In 2016, Facebook, Twitter, Microsoft, and YouTube signed an EU code of conduct on countering hate speech online.