In a June 18 blog post, Google discussed its plans to tackle terrorist videos on its platforms—in particular YouTube. Google—which has a market cap of more than $600 billion—pledged to scale up its technological and human capacity to identify content that violates company policies; issue warnings before inflammatory content in lieu of removing it; and expand its counter-radicalization efforts through public- and private-sector partnerships and targeted advertising tools.
The announcement from Google comes at a time when tech companies are under increasing scrutiny from governments concerned about the weaponization of the Internet by terrorists. YouTube, in particular, is also dealing with a revolt by advertisers whose ads were played before extremist and other objectionable material. While action taken to degrade the ability of terrorists to misuse popular Internet platforms such as YouTube should be commended, Google’s blog post lacks specific details and raises additional concerns about (1) the actual scope of extremist and terrorist content on its platforms and (2) the efficacy of its proposed solutions. Google owes the public and U.S. and European policymakers clear, comprehensive explanations with measurable solutions.
CEP has posed a series of questions and requests for additional information to better understand Google’s progress in detecting and removing terrorist and extremist content:
“First, we are increasing our use of technology to help identify extremist and terrorism-related videos... We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months.”
- How does Google/YouTube define terrorism?
- How many “terrorism-related” pieces of content has Google/YouTube removed in 2015, 2016, and 2017? Of these, how many were found by Google/YouTube? How many were found by other parties?
- How many total takedown notices concerning terrorism did Google/YouTube receive in 2015, 2016, and 2017? Of these notices, how many resulted in removal of content or accounts for terrorism?
- Of these, how many did Google/YouTube report to government or law enforcement authorities?
- What specific action has Google/YouTube taken regarding the online lectures of Anwar al-Awlaki, the well-known propagandist and operative for al-Qaeda in the Arabian Peninsula? Dozens of U.S. and European terrorists have been radicalized to violence by Awlaki, and his online lectures in particular have continued to inspire Westerners to terror.
“Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal… We will now devote more engineering resources to apply our most advanced machine learning research to train new ‘content classifiers’ to help us more quickly identify and remove extremist and terrorism-related content.”
- What engineering resources are currently dedicated to removing extremist and terrorism-related content, and what additional resources, specifically, will now be devoted to this purpose? What percentage of extremist and terrorist content is detected by machine learning, image-matching technology, and human detection, respectively?
- What is the error rate for each technology being deployed?
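To illustrate why these questions matter, the sketch below shows, in simplified Python (standard library only), the difference between the two matching approaches Google alludes to: an exact cryptographic hash catches only byte-identical re-uploads, while a perceptual “difference hash” tolerates re-encoding at the cost of a tunable error rate. All names, values, and thresholds here are hypothetical; Google has not disclosed how its actual systems work.

```python
import hashlib

def exact_fingerprint(data: bytes) -> str:
    """Cryptographic hash: flags only byte-identical re-uploads."""
    return hashlib.sha256(data).hexdigest()

def dhash(gray_rows):
    """Toy 'difference hash' over an 8x9 grid of grayscale values.

    Each bit records whether a pixel is brighter than its right-hand
    neighbour, so the hash survives re-encoding, resizing, and mild
    filtering -- the kinds of edits that defeat exact hashing.
    """
    bits = 0
    for row in gray_rows:                      # 8 rows
        for left, right in zip(row, row[1:]):  # 8 comparisons per row
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_near_duplicate(candidate: int, known_hashes, threshold: int = 6) -> bool:
    """Match against a database of known-bad hashes within a bit-distance
    threshold rather than exactly. Raising the threshold catches more
    altered re-uploads but also mislabels more benign videos -- which is
    why the error rate of each deployed technology matters."""
    return any(hamming(candidate, h) <= threshold for h in known_hashes)

# Illustration: a lightly altered copy flips one brightness comparison,
# so the exact hash changes completely but the perceptual hash stays close.
original = [[(r * 9 + c) % 256 for c in range(9)] for r in range(8)]
altered = [row[:] for row in original]
altered[0][0] += 40  # e.g. a compression artefact

known = {dhash(original)}
print(is_near_duplicate(dhash(altered), known))  # True: caught as a re-upload
```

The same near-match principle underlies, at least in broad outline, the shared industry hash database announced in December 2016 and discussed below.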
“We will expand this programme [YouTube’s Trusted Flagger programme] by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants… We will also expand our work with counter-extremist groups to help identify content that may be being used to radicalise and recruit extremists.”
- What criteria did Google/YouTube use to vet organizations and add them to its Trusted Flagger program? Does Google/YouTube provide operational grants to all of the 50 expert NGOs and 63 organizations that are part of the program? Please provide a complete list of all participants in Google/YouTube’s Trusted Flagger program.
- If Google/YouTube is going to continue to outsource efforts to curtail and remove extremist and terrorist content from its platforms, it should be transparent about its relationships with NGOs and other civil society actors. Civil society cannot effectively hold private-sector actors to account if it is beholden to, or restrained by, those same actors. What is the total amount of operational grants provided by Google to the NGOs and civil society organizations participating in YouTube’s Trusted Flagger program? What are the terms of the grant agreement YouTube signs with Trusted Flagger program participants? Are the funds earmarked for specific activities, such as hiring staff and terrorism and extremism experts who can report content on YouTube?
- Other than the Trusted Flagger program, how much of Google/YouTube’s overall budget is dedicated to content moderation and review staff? How many such staff are currently active, and what is their average wage or salary?
“Third, we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content; in future these will appear behind a warning and will not be monetised, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find.”
- How will Google differentiate between content that promotes terrorism (and should be removed) and so-called “inflammatory” content, which will remain on its platform behind warning labels? Google/YouTube should clarify whether this approach of affixing warning labels to certain inflammatory content has a precedent with other types of content on its platform, such as nudity or copyrighted material. If not, Google should explain why it is going to such great lengths to keep terrorist and extremist content on its platforms while removing other forms of non-violent content, such as nudity and copyrighted material, outright.
- What are the criteria and the process by which Google/YouTube selects videos to place behind a warning? How will Google/YouTube keep the public informed about its determinations not to remove such content?
- How does this new policy affect non-explicitly violent terrorist and extremist videos that have proven connections to violent actors and incidents (e.g. the radicalizing sermons of Anwar al-Awlaki and Ahmad Musa Jibril as well as bomb-making instructional videos)? Has Google/YouTube decided if videos of Awlaki and Jibril will be removed or simply feature a warning label?
“Building on our successful Creators for Change programme, promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the ‘redirect method’ more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining.”
- If Google/YouTube is involved in the creation of original content for promotion and dissemination on its platforms, can it credibly be characterized as a neutral third-party Internet actor?
- Can Google/YouTube elaborate on the purported success of these activities? How many individuals has Google/YouTube successfully turned away from extremism/terrorism through the Creators for Change program and “redirect method”?
- How exactly has Google/YouTube considered countering the radicalizing messages of Anwar al-Awlaki? Is the “redirect method” deployed against Awlaki content? What specific “anti-terrorist videos” are deployed in numbers sufficient to counter Awlaki’s messages?
“We have also recently committed to working with industry colleagues — including Facebook, Microsoft, and Twitter — to establish an international forum to share and develop technology and support smaller companies, and accelerate our joint efforts to tackle terrorism online.”
- What specific technologies has Google/YouTube developed and shared with the international forum of industry colleagues? Is there an agreement among industry about how new technologies will be deployed?
- Is Google/YouTube sharing its database of known extremist and terrorist content with others in the forum? If so, how many pieces of content has Google/YouTube shared?
- Moreover, how much content has Google/YouTube contributed to the so-called “hashing coalition” announced in December 2016? Is there agreement that all content in the database will be removed across industry platforms? How much content has Google/YouTube removed as a result of the database?
“We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.”
- It is well known that Google/YouTube removes many types of content, including content that constitutes protected speech, and in particular pornography and copyrighted material, from its platforms at any time and without public discussion. Given this fact, Google should fully explain its process for removing these other forms of content and speech (e.g. pornography and copyrighted material) so that the public can clearly understand Google’s content detection and removal capabilities. Such an explanation would also provide a basis of comparison for measuring and evaluating Google’s announced detection and removal efforts with respect to extremism.
- What percentage of reported pornography and copyright violations is removed from Google/YouTube? How much is identified using human detection, artificial intelligence, and image-matching technology, respectively? How much of flagged copyrighted material and pornography is removed within 24 hours?
- How does Google define “free expression”?
To read Google’s June 18 blog post in full, click here.