Facebook’s moderation of terrorist content results in the removal of journalists’ and activists’ accounts (June 2020)

[Image: Facebook thumbs-up logo surrounded by barbed wire]

Aggressive algorithmic moderation of terrorism-related content leads to the removal of newsworthy and investigative material, harming journalists, activists, and other non-terrorist sources of content.

Summary:

In almost every country in which it offers its service, Facebook has been asked — sometimes via direct regulation — to limit the spread of “terrorist” content.

But moderating this content has proven difficult. It appears the more aggressively Facebook approaches the problem, the more collateral damage it causes to journalists, activists, and others studying and reporting on terrorist activity.

Because documenting and reporting on terrorist activity necessitates posting content considered "extremist," journalists and activists are being swept up in Facebook's attempts to purge its platform of content that violates its terms of service, if not the law itself.

Over the last two months, Facebook has deleted at least 35 accounts belonging to Syrian journalists and activists, according to the Syrian Archive, a database of documentary evidence, sourced mostly from social media, of human rights violations and other crimes committed by all sides of the conflict in Syria.

The same thing happened in another country frequently targeted by terrorist attacks.

In the space of one day, more than 50 Palestinian journalists and activists had their profile pages deleted by Facebook, alongside a notification saying their pages had been deactivated for “not following our Community Standards.”

“We have already reviewed this decision and it can’t be reversed,” the message continued, prompting users to read more about Facebook’s Community Standards.

There appears to be no easy solution to Facebook's over-moderation of terrorist content. With algorithms doing most of the work, it is left to human moderators to judge the context of flagged posts and determine whether they glorify terrorism or simply provide information about terrorist activities.

Decisions to be made by Facebook:

  • How do you define “terrorist” or “extremist” content?
  • Does allowing terrorist content to stay up in the context of journalism or activism increase the risk it will be shared by those sympathetic to or supportive of terrorists?
  • Should moderated accounts be allowed to challenge takedowns of terrorist content or the deactivation of their accounts?
  • Does aggressive moderation of terrorist content result in additional unintended harms, like the removal of war crime evidence?

Questions and policy implications to consider:

  • Would providing more avenues for removal challenges and/or additional transparency about moderation decisions result in increased government scrutiny of moderation decisions?
  • Can this collateral damage be leveraged to push back against government demands for harsher moderation policies by demonstrating the real world harms of over-moderation?
  • Does this aggressive moderation allow the terrorists to “win” by silencing the journalists and activists who are exposing their atrocities?
  • Could Facebook face sanctions/fines for harming journalists and activists and their efforts to report on acts of terror?

Resolution: 

Facebook continues to struggle to eliminate terrorist-linked content from its platform. It appears to have no plan in place to reduce the collateral damage caused by its less-than-nuanced approach to a problem that appears — at least at this point — unsolvable. In fact, its own algorithms have produced extremist content by auto-generating "year in review" videos from "terrorist" content uploaded by users but apparently never removed by Facebook.

Facebook's ongoing efforts with the Global Internet Forum to Counter Terrorism (GIFCT) probably aren't going to limit the collateral damage to activists and journalists. Hashes of content designated "extremist" are uploaded to GIFCT's shared database, making it easier for algorithmic moderation to detect and remove unwanted content. But hashes and automated moderation won't solve the underlying problem facing Facebook and others: distinguishing extremist content uploaded by extremists from nearly identical content uploaded by users reporting on extremist activity. The company continues to address the issue, but the collateral damage seems likely to continue until more nuanced moderation options are created and put in place.
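
To make that limitation concrete, here is a minimal sketch of how a shared hash database might be consulted at upload time. It is not GIFCT's or Facebook's actual implementation; the function names and the use of a SHA-256 digest are illustrative assumptions (production systems rely on perceptual hashes such as PDQ so that re-encoded or resized copies still match). The structural point is that the lookup sees only the file's fingerprint, never the uploader or the surrounding context.

```python
import hashlib


def fingerprint(file_bytes: bytes) -> str:
    """Return a fingerprint of an uploaded file.

    A real system would use a perceptual hash (e.g., PDQ) so that
    re-encoded copies still match; SHA-256 is used here only to keep
    the sketch self-contained and runnable.
    """
    return hashlib.sha256(file_bytes).hexdigest()


# Hypothetical stand-in for a shared hash database such as GIFCT's:
# fingerprints of content that has already been designated "extremist."
KNOWN_EXTREMIST_HASHES = {
    fingerprint(b"<bytes of a video previously designated extremist>"),
}


def should_remove(file_bytes: bytes) -> bool:
    """Flag an upload whose fingerprint matches the shared database.

    Note what the lookup cannot see: who uploaded the file, the text of
    the accompanying post, or whether the account belongs to a journalist
    or archivist. Propaganda and documentation produce the same match.
    """
    return fingerprint(file_bytes) in KNOWN_EXTREMIST_HASHES


# The same bytes yield the same decision regardless of the uploader's intent.
evidence_clip = b"<bytes of a video previously designated extremist>"
print(should_remove(evidence_clip))  # True, even if posted as war-crime evidence
```

Any context-aware exception, such as leaving a matched video up because it was posted by a human rights archive, would have to happen in a separate review layer the hash lookup knows nothing about, which is exactly where the human-moderation bottleneck described above comes in.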


Written by The Copia Institute, June 2020