Asking the Hard Questions of Facebook

In a June 15 statement, Facebook described plans to reduce and remove terrorist content on its platforms, including by developing artificial intelligence capabilities, expanding its community operations team, and partnering with private- and public-sector groups. Unfortunately, the announcement is bereft of specific details and raises additional questions concerning transparency and accountability.

Facebook’s announcement confirms what security and policy experts have known for some time—that terrorist and extremist content is easily available and pervasive online. It appears that as extremist and terrorist content proliferated online, Facebook’s content review and moderation teams remained persistently understaffed, and the company has only just begun experimenting with existing and new technologies in this space. Facebook should explain why, until now, it has (1) delayed building appropriate staffing capacity and (2) delayed incorporating new and existing technologies to assist in detecting terrorist and extremist content for removal. Frustrated policymakers and the public deserve a clear answer.

Per Facebook’s request in its statement to “hear feedback so we can do better,” CEP has posed a series of questions and requests for additional information to better understand the tech company’s progress in detecting and removing terrorist and extremist content. We hope Facebook will follow through on its stated desire “to answer those questions head on.”

Artificial Intelligence

“Already, the majority of accounts we remove for terrorism we find ourselves.”

  • How does Facebook define terrorism?
  • How many total accounts has Facebook removed for terrorism in 2016 and 2017? How many were found by Facebook? How many were found by other parties?
  • How many total takedown notices concerning terrorism did Facebook receive in 2016 and 2017? Of these notices, how many resulted in removal of content or accounts for terrorism?
  • How many of these did Facebook report to government or law enforcement?

“Image matching: When someone tries to upload a terrorist photo or video, our systems look for whether the image matches a known terrorism photo or video. This means that if we previously removed a propaganda video from ISIS, we can work to prevent other accounts from uploading the same video to our site. In many cases, this means that terrorist content intended for upload to Facebook simply never reaches the platform.”

  • The problem of online radicalization has festered and grown over many years. When was Facebook’s “image matching” technology deployed for the specific purpose of flagging extremist and terrorist content?
  • How large is Facebook’s database of images?
  • Is Facebook comparing all uploads on its platforms against a database of known terrorist images?
  • How much extremist and terrorist content has been flagged using image matching? What percentage was removed and how quickly? What percentage of flagged content was not removed and why?
  • How does Facebook define, catalogue, and categorize extremist and terrorist content that it has found and removed via image matching? Is this information shared with Facebook’s family of apps and other Internet and social media platforms?
  • Does Facebook use image matching for purposes other than detecting extremist and terrorist content, such as copyright enforcement?
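
Facebook does not say how its image matching works, but the approach it describes, comparing a fingerprint of each upload against a database of fingerprints of previously removed content, is well understood. The sketch below is a minimal illustration under that assumption; the function names and the stored entries are hypothetical, and production systems typically rely on perceptual hashes (in the vein of PhotoDNA) that survive re-encoding and cropping, rather than the exact cryptographic hash used here.

```python
import hashlib

# Hypothetical store mapping fingerprints of previously removed terrorist
# images and videos to a short note on why each item was removed.
KNOWN_TERROR_HASHES: dict[str, str] = {}

def fingerprint(upload_bytes: bytes) -> str:
    """Return a fingerprint of the uploaded file's exact byte content."""
    return hashlib.sha256(upload_bytes).hexdigest()

def matches_known_content(upload_bytes: bytes) -> bool:
    """True if this upload matches content already removed for terrorism,
    in which case it would be blocked before it ever reaches the platform."""
    return fingerprint(upload_bytes) in KNOWN_TERROR_HASHES
```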

“Language understanding: We have also recently started to experiment with using AI to understand text that might be advocating for terrorism. We’re currently experimenting with analyzing text that we’ve already removed for praising or supporting terrorist organizations such as ISIS and Al Qaeda so we can develop text-based signals that such content may be terrorist propaganda. That analysis goes into an algorithm that is in the early stages of learning how to detect similar posts. The machine learning algorithms work on a feedback loop and get better over time.”

  • When did Facebook decide to start experimenting with AI for the purposes of addressing online extremist and terrorist content?
  • Does Facebook use AI to identify and remove other forms of text, speech, and content?
  • How much text and speech “that might be advocating for terrorism” has Facebook removed in 2017 using AI? 2016?
  • What percentage of extremist and terrorist content is being detected using AI versus human detection?
  • What is the current error rate of AI detection?
  • Machine learning algorithms “get better over time.” Can Facebook specifically quantify “better” and “over time”?
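
The “language understanding” workflow quoted above, training on posts already removed for praising or supporting terrorist organizations and then scoring new posts, is a standard supervised text-classification loop. The sketch below is a minimal, hypothetical version using scikit-learn and invented placeholder data; Facebook’s actual features, models, and training volumes are exactly the details it has not disclosed.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder data standing in for posts "already removed for praising or
# supporting terrorist organizations" (label 1) and ordinary posts (label 0).
posts = [
    "placeholder text of a removed propaganda post",
    "placeholder text of an ordinary, benign post",
]
labels = [1, 0]

# Text-based signals (TF-IDF features) feed a classifier; reviewer decisions
# on newly flagged posts would be folded back in, closing the feedback loop.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; a high score would route it to human review or removal.
score = model.predict_proba(["new post to screen"])[0][1]
print(f"estimated probability of terrorist propaganda: {score:.2f}")
```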

“Removing terrorist clusters: We know from studies of terrorists that they tend to radicalize and operate in clusters. This offline trend is reflected online as well. So when we identify Pages, groups, posts or profiles as supporting terrorism, we also use algorithms to ‘fan out’ to try to identify related material that may also support terrorism. We use signals like whether an account is friends with a high number of accounts that have been disabled for terrorism, or whether an account shares the same attributes as a disabled account.”

  • How many “Pages, groups, posts or profiles” has Facebook identified as supporting terrorism in 2017? 2016?
  • For how long has Facebook been using algorithms to identify potentially terrorism-related material and “signals” to disable extremist and terrorist accounts? Is this method the same as or in addition to image matching and language understanding?
  • How does Facebook define “material that may also support terrorism”? Is this different from how it defines extremist and terrorist content, or “text that might be advocating for terrorism”?
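
The “fan out” Facebook describes is a graph traversal from accounts already disabled for terrorism, scored with signals such as the share of a user’s friends who have themselves been disabled. The sketch below illustrates that one signal; the friend graph, account names, and threshold are invented for illustration and do not reflect Facebook’s actual signals or weighting.

```python
# Hypothetical friend graph and set of accounts already disabled for terrorism.
friends = {
    "account_a": {"account_b", "account_c", "account_d"},
    "account_b": {"account_a", "account_c"},
    "account_c": {"account_a", "account_b", "account_d"},
    "account_d": {"account_a", "account_c"},
}
disabled_for_terrorism = {"account_a", "account_c"}

def fan_out(seed, friends, disabled, threshold=0.5):
    """From a disabled seed account, flag friends whose own friend lists
    contain a high share of accounts already disabled for terrorism."""
    flagged = set()
    for friend in friends.get(seed, set()):
        if friend in disabled:
            continue  # already actioned
        contacts = friends.get(friend, set())
        if not contacts:
            continue
        if len(contacts & disabled) / len(contacts) >= threshold:
            flagged.add(friend)
    return flagged

print(fan_out("account_a", friends, disabled_for_terrorism))
```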

“Recidivism: We’ve also gotten much faster at detecting new fake accounts created by repeat offenders. Through this work, we’ve been able to dramatically reduce the time period that terrorist recidivist accounts are on Facebook. This work is never finished because it is adversarial, and the terrorists are continuously evolving their methods too. We’re constantly identifying new ways that terrorist actors try to circumvent our systems — and we update our tactics accordingly.”

  • Facebook appears to deliberately use vague, suggestive language—“started to experiment,” “begun work,” and “gotten much faster”—to describe its progress on detecting and removing terrorist and extremist content. However, these statements lack specific data or metrics to describe the scope of the problem or Facebook’s success in confronting it. Facebook and other major tech companies freely leverage an abundance of data to customize the user experience and sell ads, which suggests that they could just as readily provide the public and policymakers with specific data related to online extremism. Why is Facebook being so opaque about the extent of the problem and its supposed achievements in combating it?
  • What does Facebook mean when it says that “it has gotten much faster at detecting new fake accounts” and can “dramatically reduce the time period that terrorist recidivist accounts are on Facebook”? What was the average time period previously and what is it now? What methods were used to contribute to this purported improvement?

“Cross-platform collaboration: Because we don’t want terrorists to have a place anywhere in the family of Facebook apps, we have begun work on systems to enable us to take action against terrorist accounts across all our platforms, including WhatsApp and Instagram. Given the limited data some of our apps collect as part of their service, the ability to share data across the whole family is indispensable to our efforts to keep all our platforms safe."

  • When exactly did Facebook begin work on systems to take action across all its platforms? When will such systems be completed and deployed, respectively?
  • Do Facebook and its family of apps define extremist and terrorist content in the same manner?

Human Expertise

“Reports and reviews: Our community — that’s the people on Facebook — helps us by reporting accounts or content that may violate our policies — including the small fraction that may be related to terrorism. Our Community Operations teams around the world — which we are growing by 3,000 people over the next year — work 24 hours a day and in dozens of languages to review these reports and determine the context. This can be incredibly difficult work, and we support these reviewers with onsite counseling and resiliency training.”

  • What percentage of Facebook’s overall operating budget is dedicated to Facebook’s Community Operations team and Terror and Safety Specialists?
  • What is the average salary/wage of a Community Operations analyst?
  • How many Community Operations reviewers are active presently?

“Real-world threats: We increasingly use AI to identify and remove terrorist content, but computers are not very good at identifying what constitutes a credible threat that merits escalation to law enforcement. We also have a global team that responds within minutes to emergency requests from law enforcement.”

  • What percentage of terrorist content is identified by AI? How has the percentage changed over 2015, 2016, and 2017?
  • Does Facebook have any ability to identify “credible threats that merit escalation to law enforcement”? Or does Facebook take a passive approach and wait for emergency requests from law enforcement?
  • If applicable, how often does Facebook refer threats to law enforcement authorities? Of those referred, how many result in arrest? Does Facebook respond faster to takedown notices submitted by law enforcement and other so-called “trusted flaggers” than the general public?
  • If Facebook’s global team “responds within minutes to emergency requests from law enforcement,” why is law enforcement—particularly in Europe—so critical of Facebook’s efforts in combating terrorism?

Partnering with Others

“Industry cooperation: In order to more quickly identify and slow the spread of terrorist content online, we joined with Microsoft, Twitter and YouTube six months ago to announce a shared industry database of “hashes” — unique digital fingerprints for photos and videos — for content produced by or in support of terrorist organizations. This collaboration has already proved fruitful, and we hope to add more partners in the future. We are grateful to our partner companies for helping keep Facebook a safe place.”

  • What does Facebook mean exactly when it says that industry collaboration “has already proved fruitful”? How exactly has industry collaboration been “fruitful” in the effort to remove extremist and terrorist content? How much extremist and terrorist content has the industry identified and removed from its platforms in 2015, 2016, and 2017?
  • What is the current size of the database of hashes? How many pieces of content has each—Facebook, Microsoft, Twitter, and YouTube—contributed to the database?
  • How is content selected for inclusion in the database? What are the criteria?
  • Who has access to this database? The public? Media? Law enforcement?
  • Are the companies comparing all uploads on their platforms against the database? Is there agreement that all matches will be automatically removed?  
  • If so, how much content have the companies removed as a result of the database?
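
At its core, the shared database Facebook describes is a set of content fingerprints that each participating company contributes to and checks its own uploads against. The sketch below is a minimal illustration under that assumption; the digests, contributors, and matching policy are placeholders, and nothing here reflects how the consortium database is actually implemented or governed, which is precisely what the questions above ask.

```python
import hashlib

# Hypothetical shared store: each partner contributes fingerprints ("hashes")
# of terrorist content it has removed from its own platform.
shared_hashes: dict[str, str] = {}

def contribute(company: str, content: bytes) -> None:
    """Record which partner contributed a removed item's fingerprint."""
    shared_hashes[hashlib.sha256(content).hexdigest()] = company

def check_upload(content: bytes) -> bool:
    """Any partner can check a new upload against every contributor's entries."""
    return hashlib.sha256(content).hexdigest() in shared_hashes

contribute("Facebook", b"placeholder bytes of removed propaganda")
print(check_upload(b"placeholder bytes of removed propaganda"))  # True
```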

“Governments: Governments and inter-governmental agencies also have a key role to play in convening and providing expertise that is impossible for companies to develop independently. We have learned much through briefings from agencies in different countries about ISIS and Al Qaeda propaganda mechanisms. We have also participated in and benefited from efforts to support industry collaboration by organizations such as the EU Internet Forum, the Global Coalition Against Daesh, and the UK Home Office.”

  • Why have governments and inter-governmental agencies been so critical of Facebook and other tech companies?

“Encryption. We know that terrorists sometimes use encrypted messaging to communicate. Encryption technology has many legitimate uses – from protecting our online banking to keeping our photos safe. It’s also essential for journalists, NGO workers, human rights campaigners and others who need to know their messages will remain secure. Because of the way end-to-end encryption works, we can’t read the contents of individual encrypted messages — but we do provide the information we can in response to valid law enforcement requests, consistent with applicable law and our policies.”

  • How often does Facebook refuse to comply with U.S. and European law enforcement requests that it views as inconsistent with its policies?
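
Facebook’s point that “we can’t read the contents of individual encrypted messages” follows from where the keys live: with end-to-end encryption, only the endpoints hold the key, and the platform relays ciphertext it cannot decrypt. The toy sketch below uses symmetric Fernet encryption from the cryptography package purely to illustrate that point; real messaging apps such as WhatsApp negotiate keys per conversation on users’ devices via the Signal protocol rather than pre-sharing one as shown here.

```python
from cryptography.fernet import Fernet

# The key exists only on the two endpoints (hypothetically pre-shared here;
# in practice it is negotiated on-device, never held by the platform).
key = Fernet.generate_key()
sender, recipient = Fernet(key), Fernet(key)

ciphertext = sender.encrypt(b"meet at noon")

# The platform relays only the ciphertext. Without the key it cannot recover
# the plaintext, which limits what it can hand over to law enforcement.
what_the_server_sees = ciphertext

print(recipient.decrypt(what_the_server_sees))  # b'meet at noon'
```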

“Counterspeech training: We also believe challenging extremist narratives online is a valuable part of the response to real world extremism. Counterspeech comes in many forms, but at its core these are efforts to prevent people from pursuing a hate-filled, violent life or convincing them to abandon such a life. But counterspeech is only effective if it comes from credible speakers. So we’ve partnered with NGOs and community groups to empower the voices that matter most.

“Partner programs: We support several major counterspeech programs.”

  • If Facebook is involved in the creation of original content for promotion and dissemination on its platforms, can it credibly be characterized as a neutral third-party Internet actor?
  • How many individuals has Facebook successfully turned away from extremism/terrorism using counterspeech content?
  • What resources has Facebook invested in identifying “credible speakers” and in supporting counterspeech-type efforts to combat gang recruitment, bullying and harassment, “doxing,” revenge porn, racism, fake news, and other areas not related to extremism and terrorism on its platforms?

To read Facebook’s June 15 statement in full, please click here.
