Trust and Safety Foundation Project

Moderation

Cloudflare and The Daily Stormer: Content Moderation Meets the Stack

Summary: In 2017, Cloudflare, a web infrastructure and website security company, came under media scrutiny for providing services to a U.S.-based, self-proclaimed neo-Nazi website, The Daily Stormer. Despite numerous complaints to […]

Discord adds AI moderation to help fight abusive content (2021)

Summary: In the six years since Discord debuted its chat platform, it has seen explosive growth. Over that time, Discord's chat options have expanded to include GIFs, video, audio, […]

Facebook suspends account for showing topless aboriginal women (2016)

Summary: Facebook's challenges in moderating "nudity" have been covered many times, but part of the reason the discussion comes up so often is that there are so many scenarios […]

Snapchat disables GIPHY integration after racist “sticker” is discovered (2018)

Summary: Snapchat debuted to immediate success a decade ago, drawing in millions of users with its playful take on instant messaging that combined photos and short videos with a large […]

Facebook responds to a live-streamed mass shooting (March 2019)

Facebook grapples with removing live-streamed content in real time, while also preventing or reactively removing re-uploads of violating content. Summary: On March 15, 2019, the unimaginable happened. A Facebook user, utilizing […]

Twitter acts to remove accounts for violating the terms of service by buying/selling engagement (March 2018)

Users exploit Tweetdeck to artificially boost content virality, prompting an evaluation of policies and enforcement against spam. Summary: After an investigation by BuzzFeed uncovered several accounts trafficking in paid access to […]

Social media services respond when recordings of a shooting are uploaded by the person committing the crimes (August 2015)

Platforms respond to a shooting video and its rapid proliferation. Summary: The ability to instantly upload recordings and stream live video has made content moderation much more difficult. Uploads to […]

Moderation of racist content leads to removal of non-racist pages/posts (2020)

Facebook removes anti-racist content that references terms like "skinhead," due to a lack of contextual consideration. Summary: Social media platforms are constantly seeking to remove racist, bigoted, or hateful content. […]
