Under Fire From EU Regulators and Advertisers, Google Discusses Fighting Terrorism Online

The Counter Extremism Project (CEP) issued the following statement today in response to Google’s announcement of its plans to fight terrorism online:

Google pledged in a June 18 blog post to increase the use of technology to remove content that violates its policies; introduce warning labels on certain inflammatory content in lieu of removing it; and expand its counter-radicalization efforts through public- and private-sector partnerships and targeted advertising tools. The blog post, which also appeared as a Financial Times op-ed, follows months of continuing pressure from EU regulators as well as advertisers, who pulled business from Google-owned YouTube after discovering that ads were appearing alongside terror-inciting and hate-filled videos.

Google’s announcement was expected, given the continuing loss of advertising revenue and the prospect of increased EU regulation. Nevertheless, the blog post lacks specifics and does little to allay concerns about the presence of extremist and terrorist content on YouTube, or about the credibility and efficacy of Google’s ongoing and newly proposed initiatives.

  • Google fails to specify what action, if any, it has taken to remove known propagandists from YouTube. For example, Khuram Shazad Butt, the leader of the London Bridge attackers, was reportedly radicalized and driven to violence in part by online lectures from extremist cleric Ahmad Musa Jibril. A search for “Ahmad Musa Jibril” yielded 14,700 results, and his 130 YouTube videos had amassed more than 1.5 million views. A search for the deceased American-born radical cleric and al-Qaeda operative Anwar al-Awlaki, who radicalized dozens of U.S. and EU terrorists who later carried out attacks, yielded 80,300 results as of June 5, 2017. Google, first and foremost, owes a concerned public and U.S. and European policymakers a clear, comprehensive explanation of its action or inaction on Awlaki and Jibril videos.
  • Google proposes affixing warning labels to videos with “inflammatory religious or supremacist content,” but does not directly address how this new policy will affect non-explicitly violent terrorist and extremist videos with proven connections to violent actors and incidents (e.g., lectures by Jibril and Awlaki and bomb-making instructional videos). It is also unclear how Google will differentiate between content that promotes terrorism, which should be removed, and so-called “inflammatory” content, which will remain on its platform with a warning label. Google should also clarify whether this approach of labeling rather than removing certain inflammatory content has a precedent with other forms of content and speech on its platforms. If not, Google should explain why it is going to such lengths to keep terrorist and extremist content on its platforms while removing other forms of non-violent content, such as nudity and copyrighted material, outright.
  • In its statement, Google reiterates its desire to strike “the right balance” between free expression on one hand and removing terrorist and extremist content on the other. However, Google already removes many types of content, including content that constitutes protected speech, most notably pornography and copyrighted material. Given this fact, Google should fully explain its process for removing those other forms of content and speech so that the public can clearly understand its content detection and removal capabilities. Such an explanation would also provide a basis for comparison against which to measure and evaluate Google’s announced detection and removal efforts with respect to extremist content. For example, what percentage of reported pornography and copyright violations is removed from Google platforms? How much of this content is identified by human reviewers, artificial intelligence, and image-matching technology, respectively? How much flagged pornography and copyrighted material is removed within 24 hours?
  • According to its statement, Google is funding more than a hundred NGOs to serve as Trusted Flaggers of terrorist content. If Google, a company with a market cap of more than $600 billion, is going to continue to outsource its efforts to curtail extremism on its platforms, then it should be transparent about its relationships with civil society actors and NGOs. Civil society cannot effectively hold private-sector actors to account if it is beholden to, or restrained by, those same actors. Google should therefore disclose the total grant amounts provided and pledged to NGOs in its Trusted Flagger program and explain the terms of any grant agreements, to assure the public that it is not restraining, dictating, or otherwise unduly distorting the mission or essential objectivity of those organizations.

To read CEP’s complete analysis of Google’s June 18 blog post, click here.

To view the report, Anwar al-Awlaki’s Ties to Extremists, click here.

To view the report, Anwar al-Awlaki on YouTube, click here.

To view the report, Anwar al-Awlaki: Tracking Google’s Counter-Narrative Program, click here.

 
