Facebook responds to a live-streamed mass shooting (March 2019)

Facebook grapples with removing live-streamed content in real time, while also preventing or reactively removing re-uploads of violating content.

Summary:

On March 15, 2019, the unimaginable happened. A Facebook user — utilizing the platform’s live-streaming option — filmed himself shooting mosque attendees in Christchurch, New Zealand.

By the end of the attacks, the shooter had killed 51 people and injured 49. Only the first shooting was live-streamed, but Facebook was unable to end the stream before it had been viewed by a few hundred users and shared by a few thousand more.

Facebook removed the stream almost an hour after it appeared, thanks to user reports. The moderation team immediately began working to find and delete re-uploads by other users. Violent content is generally a clear violation of Facebook's terms of service, but context matters: not every violent video merits removal. Facebook felt this one did.

The delay in response was partly due to limitations in Facebook's automated moderation efforts. As Facebook admitted roughly a month after the shooting, the first-person perspective of the shooter's head-mounted camera footage made it much harder for its AI to recognize the video as violating content.

Facebook's efforts to keep this footage off the platform continue to this day. The footage has migrated to other platforms and file-sharing sites, an inevitability in the digital age. Even with moderators knowing exactly what they're looking for, platform users are still finding ways to post the shooter's video to Facebook. Some of this is due to the sheer number of uploads moderators are dealing with: The Verge reported the video was re-uploaded 1.5 million times in the 24 hours following the shooting, with 1.2 million of those copies blocked automatically at upload.
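Automated blocking of re-uploads at this scale generally relies on some form of media fingerprinting or perceptual hashing: new uploads are matched against fingerprints of known violating content. The sketch below is a minimal illustration of that general idea and is not Facebook's actual pipeline; the function names (average_hash, is_known_violation), the 64-bit grid size, and the distance threshold are assumptions chosen for illustration. It also hints at why evasion remains possible: straight re-encodes hash nearly identically to the original, while cropped, filtered, or re-filmed copies can drift past the matching threshold.

```python
# Illustrative sketch of hash-based re-upload matching (not Facebook's system):
# a simple "average hash" over video keyframes, compared by Hamming distance
# against fingerprints of previously identified violating frames.
# Requires Pillow (pip install Pillow).

from PIL import Image

HASH_SIZE = 8  # 8x8 grid -> 64-bit fingerprint


def average_hash(image: Image.Image, hash_size: int = HASH_SIZE) -> int:
    """Downscale to a tiny grayscale grid and set one bit per pixel,
    depending on whether that pixel is brighter than the grid's mean."""
    small = image.convert("L").resize((hash_size, hash_size))
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


def is_known_violation(frame: Image.Image, blocklist: set[int], threshold: int = 5) -> bool:
    """Flag a keyframe whose fingerprint is within `threshold` bits of any
    fingerprint in the blocklist of known violating frames."""
    h = average_hash(frame)
    return any(hamming_distance(h, bad) <= threshold for bad in blocklist)


# Usage (hypothetical): hash keyframes of the original violating video into
# `blocklist`, then screen keyframes extracted from each new upload before
# the upload is published.
```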

Decisions to be made by Facebook:

  • Should the moderation of live-streamed content involve more humans if algorithms aren’t up to the task?
  • When live-streamed content is reported by users, are automated steps in place to reduce visibility or sharing until a determination can be made on deletion?
  • Will making AI moderation of livestreams more aggressive result in overblocking and unhappy users?
  • Do the risks of allowing content that can’t be moderated prior to posting outweigh the benefits Facebook gains from giving users this option?
  • Is it realistic to “draft” Facebook users into the moderation effort by giving certain users additional moderation powers to deploy against marginal content?

Questions and policy implications to consider:

  • Given the number of local laws Facebook attempts to abide by, is allowing questionable content to stay “live” still an option?
  • Does newsworthiness outweigh local legal demands (laws, takedown requests) when making judgment calls on deletion?
  • Does the identity of the perpetrator of violent acts change the moderation calculus (for instance, a police officer shooting a citizen, rather than a member of the public shooting other people)?
  • Can Facebook realistically speed up moderation efforts without sacrificing the ability to make nuanced calls on content?

Resolution:

Facebook reacted quickly to user reports and terminated the livestream and the user's account. It then began the never-ending work of taking down uploads of the recording by other users. It also changed its rules governing livestreams in hopes of deterring future incidents. The new guidelines provide for temporary and permanent bans of users who livestream content that violates Facebook's terms of service, and bar these accounts from buying ads. The company also continues to invest in improving its automated moderation efforts in hopes of preventing streams like this from appearing on users' timelines.


Written by The Copia Institute, July 2020
