The Challenges of Moderating Livestreamed Violent Extremist Content

Summary:

In 2022, an 18-year-old man committed a racist attack at a grocery store in Buffalo, New York, murdering ten people and injuring three others. The perpetrator livestreamed the shooting on Twitch and, before the attack, hosted an online space on Discord in which he shared the link to the stream and alerted users to the forthcoming violence. He also posted a screed describing his racist beliefs, his plan to commit the attack, and the weapons he would be using. Twitch took down the stream within two minutes. However, at least one viewer made a copy of the livestream and shared it on other online platforms, which allowed the content to circulate and amplified its reach.

Background

Over the years, online platforms have proactively investigated and implemented ways to stem terrorist and violent extremist content. However, the sheer volume of content that platforms handle today makes this an increasingly complex task. Additionally, the kind of platform used and the means by which terrorist and violent extremist content is shared can add another layer of complexity: for instance, there are significant differences in content moderation between audio, video, and text, as well as between livestreamed and recorded audio or video. The challenges, and the resources needed to tackle them, vary widely. Moreover, this kind of content presents unique and potentially fatal offline harms, so both online and offline components should be considered when addressing terrorist and violent extremist content.

Each platform has its own community guidelines or policies, which vary depending on the types of services it offers. The specific definition of what qualifies as violent extremist content, and the consequences of sharing it, also vary from platform to platform. Platforms usually take down video, photo, audio, or text of an attack itself, as well as manifestos or other content related to a terrorist or violent extremist attack. In general, platforms try to prevent terrorist or violent extremist content from spreading not only to protect users but also to avoid publicizing violent messages that could inspire others to commit similar attacks.

In 2017, several online platforms founded the Global Internet Forum to Counter Terrorism (GIFCT), a group designed to prevent terrorists and violent extremists from exploiting digital platforms. GIFCT was created to foster technical collaboration among member companies, advance relevant research, and share knowledge with smaller platforms; its membership has since expanded to more than 20 platforms. GIFCT has also evolved in conjunction with the Christchurch Call to Action. In 2019, the founding companies announced that GIFCT would become an independent organization with its own dedicated technology, counterterrorism, and operations teams.

GIFCT also created a hash-sharing database that enables members to quickly identify terrorist and violent extremist content and take appropriate steps to limit or prevent its circulation. In addition, GIFCT members developed the Content Incident Protocol (CIP), a centralized communications mechanism by which member companies quickly become aware of, assess, and address potential content circulating online resulting from an offline terrorist or violent extremist event. Concluding a CIP requires GIFCT members to confirm that both the volume of content and its potential impact have noticeably decreased. The CIP has been activated four times: 1) in 2019, in response to the attack in Halle, Germany; 2) in 2020, in response to the attack in Glendale, Arizona; 3) in May 2022, in response to the attack in Buffalo, New York; and 4) in September 2022, in response to a shooting in Memphis, Tennessee.
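To make the hash-sharing mechanism more concrete, the sketch below illustrates the general idea behind perceptual hashing: rather than an exact cryptographic checksum, each image (or video frame) is reduced to a compact fingerprint that stays similar under re-encoding, resizing, or minor edits, so copies of known content can be matched against hashes contributed by other members. This is a minimal illustration in Python; the difference-hash function, 64-bit hash size, and matching threshold are simplified assumptions for explanation, not the formats the GIFCT database actually uses.

```python
# Minimal sketch of perceptual hashing and hash matching, the general technique
# behind hash-sharing databases. The "difference hash" below and the matching
# threshold are illustrative assumptions, not GIFCT's actual hash formats.

from PIL import Image  # pip install Pillow


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Reduce an image to a 64-bit fingerprint by comparing adjacent pixel brightness."""
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits on which two fingerprints differ."""
    return bin(a ^ b).count("1")


def matches_shared_database(candidate: int, shared_hashes: set[int], threshold: int = 10) -> bool:
    """Flag an upload whose fingerprint is close to any hash shared by other members."""
    return any(hamming_distance(candidate, known) <= threshold for known in shared_hashes)
```

Because the fingerprint tolerates small changes, a re-encoded or lightly edited re-upload can still match a hash another member contributed, which is what lets platforms suppress copies of attack footage without exchanging the underlying video itself.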

Some members of civil society organizations and academia have raised concerns about GIFCT, including its lack of transparency and oversight mechanisms, the potential negative impact of the hash-sharing database on freedom of expression, and the database's potential use for censorship.

The Case

In May 2022, an 18-year-old man committed a racist attack at a grocery store in Buffalo, New York, murdering ten people and injuring three others. The day before the shooting, he hosted an online space on the chat service Discord. According to the company, the shooter created a private, invite-only server that he used as a “personal diary chat log.” He posted a link to his Twitch stream and a link to a 180-page screed detailing his racist beliefs, his plan to commit violence, and the weapons he would use. A bulletin published jointly by the Department of Homeland Security, the Federal Bureau of Investigation, and the National Counterterrorism Center stated that the shooter documented his pre-attack planning process and tactics in the 180-page screed posted online, and that he also kept a 672-page online diary “intended to serve as a manual for future attackers.”

The shooter had also posted on Discord “HAPPENING: THIS IS NOT A DRILL,” referring to the upcoming shooting. The broadcast on Twitch—which lasted about 25 minutes—showed him driving and talking to himself until he pulled up in front of the market, opened his car door, and killed a woman who was walking outside the store. The video then continued as he moved inside the grocery store and began shooting.

In the screed the perpetrator stated that “live-streaming this attack gives me some motivation in the way that I know that some people will be cheering for me” and indicated he wanted to livestream the video to help “increase coverage and spread my beliefs.” Additionally, he noted he had tested streaming to Twitch in March, saying he had hoped the stream would not be canceled “before I do anything interesting.” According to the screed, he decided to broadcast on Twitch because a 2019 shooting at Halle Synagogue had remained live on the platform for over 30 minutes before it was taken down; he opted not to use Facebook because users would need to be logged in to watch his livestream.

According to screenshots of the footage, the livestream had 22 simultaneous viewers at its peak, and Twitch took down the stream within two minutes of the shooting starting. However, at least one viewer made a copy of the livestream and shared it on other online platforms. Footage of the terrorist attack thus began circulating elsewhere, and videos of the attack were viewed by millions of people. A link to the footage posted on Streamable—one of many alternatives to mainstream platforms—was shared 46,000 times on Facebook and remained on the platform for more than 10 hours.

The Response

According to Twitch’s statement after the attack, the livestream was taken down two minutes after the shooting started. The statement noted they “identified and removed the stream less than two minutes after the violence began, and permanently banned the user from our service” and that they “are taking all possible action to stop the footage and related content from spreading on Twitch, including monitoring and removing accounts or content rebroadcasting footage of the incident.” The statement also explained that Twitch was working closely with several law enforcement agencies and collaborating with other tech companies through GIFCT to share relevant information and limit the spread of this footage online.

However, the footage was available and widely circulated on both Facebook and Twitter. Some Facebook users flagged the video and were notified that the content did not violate the platform’s rules. Facebook explained that this was a mistake and that teams were working around the clock to remove the videos—as well as links to the video hosted on other sites. Yet the video remained available for a few hours. On Twitter, multiple videos of the shooting were accessible for several days, with one garnering over 261,000 views.

In response, GIFCT activated its CIP and alerted its members, enabling them to share hashes of the perpetrator-produced footage of the attack along with content featuring the attacker’s manifesto. GIFCT also alerted the U.S. government and its Independent Advisory Committee. According to GIFCT, “between when GIFCT activated the CIP and its conclusion, members added approximately 870 visually distinct items to the GIFCT hash-sharing database.”

Insights

Livestreaming presents unique content moderation challenges because identifying violent extremist content and preventing users from seeing it is not an easy task, especially at scale. This specific case has several factors worth analyzing. First, although other violent extremist attacks have also been livestreamed online, this is one of the most recent and, according to the perpetrator, it was inspired by previous livestreamed attacks. Second, not only did the shooter livestream the attack, but he also used different platforms to broadcast it and to publish related documents ahead of time. Finally, as this case illustrates, quickly removing the original instance of a livestreamed event may reduce but does not eliminate harm, as recordings may still circulate on other platforms.

Company considerations:

  • What might have been a more effective and scalable alternative measure to prevent the circulation of the video on other platforms?
  • How can companies share lessons learned and better prepare for this type of incident at all stages of growth? Resources like Tech Against Terrorism’s Knowledge Database, for example, are aimed primarily at smaller tech companies. How might similar resources be helpful for larger companies?
  • How might companies be more transparent with their users and other stakeholders about their work to tackle violent extremist content without compromising privacy and safety on their platforms?
  • How might users report violent extremist content in a way that sends a strong signal to platforms?
  • How might user education be useful for combating violent extremist content? For instance, users could learn to navigate reporting systems well enough to escalate this type of content as soon as possible, to distinguish terrorist or violent extremist content from other types of harm, or to report live incidents they witness directly to law enforcement.

Issue considerations:

  • How might GIFCT collaborate with companies that are not GIFCT members to improve results?
  • What relevant clues should be considered and what preventive measures can be taken to identify the livestreaming of imminent attacks?
  • How might platforms collaborate in preventing violent extremist attacks from happening without taking down content that is legal and permitted? 
  • How could image- and video-recognition technologies help combat the spread of recorded violent extremist content?
  • How might platforms safeguard violent content that is of public interest without compromising user safety?

Considerations for policymakers:

  • How might government agencies collaborate more closely and effectively with companies to identify and prevent violent extremist content from circulating? How might they work together to prevent violent attacks from happening?
  • How do we account for the spread of violent extremist content internationally? Are there policy or collaborative opportunities across borders to tackle this type of issue and the spread of such content?

Written by Maia Levy Daniel, October 2023. Special thanks to Michael Swenson and James Alexander for feedback.