During the summer of 2017, YouTube launched several initiatives to address terrorist content on its platform, including the Global Internet Forum to Counter Terrorism (GIFCT)––a partnership with other tech companies aimed at combating extremist content online. The company also announced its Redirect Method––a program aiming to redirect users searching for violent extremist content to counter-narrative videos. In addition, the Google-owned video-sharing platform pledged to improve its machine-learning technology to better detect terrorist content, increase its number of human content reviewers, and take a “tougher stance” on videos that did not clearly violate its policies. Unfortunately, all of these well-intentioned initiatives came at least one month after suicide bomber Salman Abedi killed 22 people in Manchester, England, with an explosive device that he had reportedly assembled using ISIS bomb-making tutorial videos on YouTube. (YouTube Official Blog, YouTube Official Blog, Google Blog, Times)
For over a decade, Google has faced criticism for the misuse of its platforms, especially YouTube, on issues ranging from the publication of inappropriate content to copyright infringement. Rather than taking preventative measures, Google has repeatedly made policy changes only after considerable damage has already been done. The Counter Extremism Project (CEP) has documented instances in which Google/YouTube made express policy changes following public accusations, a scandal, or pressure from lawmakers. While one would hope that Google is continuously working to improve security on YouTube and its other platforms, there is no excuse for so many of its policy changes being reactive, and the pattern raises the question of what other scandals are in the making because of still-undiscovered lapses in Google’s current policy.
February 2006: NBC asks YouTube to remove a Saturday Night Live skit uploaded to its website, citing concerns about copyright infringement. (CNET)
Subsequent Policy Change(s):
- March 2006: YouTube imposes a 10-minute length limit on video uploads in an attempt to prevent the upload of copyrighted videos to the site, such as TV shows and movies, which are likely to run longer than 10 minutes. (YouTube Official Blog)
- April 2006: YouTube establishes a “Help Center” providing answers to questions about copyright policy, among other topics. (YouTube Official Blog)
October 2006: YouTube receives criticism from creators who have had their videos removed after they were incorrectly flagged. (YouTube Official Blog)
Subsequent Policy Change(s):
- October 2006: YouTube publishes new Community Guidelines that clarify what content is allowed in videos. (YouTube Official Blog)
- November 2007: YouTube refines the existing options in its video-flagging system and introduces new ones, in an attempt to clarify what content is permitted on the platform and what should be flagged. (YouTube Official Blog)
March 2007: Viacom sues YouTube for “massive intentional copyright infringement.” (Reuters)
Subsequent Policy Change(s):
- October 2007: YouTube launches a feature called “Content ID,” which matches uploaded videos against reference files supplied by rights holders and allows copyright owners to flag and manage instances of copyright infringement. (YouTube Official Blog)
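Google has never published Content ID’s matching internals. The sketch below is only a rough illustration of the general idea, assuming a plain file hash as a stand-in for real audio/video fingerprinting and a hypothetical reference catalog and rights-holder name.

```python
# Illustrative sketch only: a SHA-256 digest stands in for Content ID's
# proprietary audio/video fingerprinting; the catalog below is hypothetical.
import hashlib
from typing import Optional

# Hypothetical catalog mapping reference fingerprints to rights holders.
reference_catalog: dict = {
    "placeholder-fingerprint-1": "Example Studios",
}

def fingerprint(path: str) -> str:
    """Stand-in 'fingerprint': hash the raw bytes of an uploaded file."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def find_claimant(upload_path: str) -> Optional[str]:
    """Return the rights holder whose reference material matches the upload, if any."""
    return reference_catalog.get(fingerprint(upload_path))
```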
2008: YouTube receives intense criticism in the U.K. and U.S. over violent content on its platform––especially violence perpetrated by youth. In February, four New York high school students were charged after posting a video to YouTube that showed them attacking another student. In July, the U.K. House of Commons Culture, Media and Sport Committee released a report about harmful content on the site, including a video of a gang rape. (Guardian, Guardian, New York Times, CRC Health)
Subsequent Policy Change(s):
- September 2008: YouTube updates its Community Guidelines to clarify its policies on violence and hate speech. (YouTube Official Blog)
- October 2008: YouTube changes its flagging menu to include the category “youth violence” instead of “minors fighting” in order to account for a greater variety of violent youth content on its site. (YouTube Official Blog)
- December 2008: YouTube introduces a new Abuse and Safety Center which provides safety tips and resources from experts. (YouTube Official Blog)
2012: YouTube faces criticism over its Content ID feature, which critics argue favors the interests of rights holders over those of users. (The Verge, The Verge, Wired)
Subsequent Policy Change(s):
- October 2012: YouTube improves the algorithms for its Content ID feature and introduces a new appeals process. (YouTube Creators Blog)
2015–2016: YouTube Kids, an app intended to let children use YouTube free of inappropriate or adult content, comes under fire after consumer groups find inappropriate content on it, such as videos that reference sex, drug use, and pedophilia. Consumer groups also criticize the app’s practice of targeting advertisements at children. (TechCrunch, TechCrunch, CNN)
Subsequent Policy Change(s):
2016: Throughout the year, U.K. and European lawmakers express concern that social media platforms have become a “vehicle of choice” for extremists to recruit and radicalize. Several governments threaten legislative action. (Telegraph, Reuters, Wired)
Subsequent Policy Change(s):
- December 2016: YouTube, Facebook, Microsoft, and Twitter launch a shared industry database of “hashes”––digital “fingerprints” of extremist imagery––in an effort to curb the spread of terrorist content online. (Google Blog)
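The companies did not disclose the technical details of the shared database; systems of this kind generally rely on perceptual hashes that survive re-encoding rather than exact file hashes. The sketch below is a minimal illustration of the underlying idea only, assuming a plain cryptographic hash as a stand-in and a hypothetical in-memory hash set.

```python
# Illustrative sketch only: a SHA-256 digest stands in for the perceptual
# "fingerprints" a real shared database would use; the in-memory set is
# hypothetical.
import hashlib

shared_hashes: set = set()  # hashes contributed by partner platforms

def hash_media(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def contribute_removed_media(data: bytes) -> None:
    """A partner platform shares the hash of extremist imagery it has removed."""
    shared_hashes.add(hash_media(data))

def matches_known_extremist_media(data: bytes) -> bool:
    """Check a new upload against the shared database before publishing it."""
    return hash_media(data) in shared_hashes
```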
February 2016: YouTube receives further criticism from its users about its Content ID feature, which critics continue to argue unfairly favors rights holders over content creators. (The Verge)
Subsequent Policy Change(s):
- April 2016: YouTube introduces a new policy for its Content ID feature that allows videos to continue earning revenue while a claim is being disputed. (YouTube Creators Blog)
March 2017: YouTube faces criticism from LGBT creators, who claim that its Restricted Mode feature unfairly hides their videos. (Guardian)
Subsequent Policy Change(s):
- March 2017: YouTube says it has “better” trained its automated system so that Restricted Mode no longer hides the LGBT videos that had been incorrectly restricted. (YouTube Creators Blog)
- April 2017: YouTube publishes guidelines on which videos will be hidden in Restricted Mode. (YouTube Creators Blog)
March 2017: A Times of London investigation finds advertisements for reputable brands appearing alongside hateful and extremist videos. (Times)
Subsequent Policy Change(s):
- March 2017: YouTube announces that it will take a tougher stance on hate speech and strengthen advertiser controls. (YouTube Creators Blog)
- June 2017: YouTube announces new guidelines about content eligible for ads. (YouTube Creators Blog)
May–June 2017: In May, The Times of London finds several bomb-making videos on Facebook and YouTube days after Salman Abedi detonates a suicide bomb––which he reportedly built by watching instructional videos online––in Manchester, England. U.K. and European lawmakers also continue to increase pressure on tech companies, calling for new laws to punish companies that continue to host extremist material on their platforms. The U.K. Home Affairs Committee publishes a report saying that tech companies are “shamefully far” from taking action to tackle hateful content, and U.K. Prime Minister Theresa May calls on fellow G7 members to pressure tech companies to do more to remove extremist material. (London Evening Standard, Times, Times, CNBC, U.K. Home Affairs Committee, Guardian)
Subsequent Policy Change(s):
- June 2017: YouTube “increases its use of technology” to identify extremist videos, increases the number of people in its Trusted Flagger program, and takes a “tougher stance” on videos that do not clearly violate its policies. (Google Blog)
- June 2017: YouTube, Facebook, Microsoft, and Twitter launch the Global Internet Forum to Counter Terrorism (GIFCT), a partnership aimed at combating extremist content online. (YouTube Official Blog)
- July 2017: YouTube launches its Redirect Method program, which aims to redirect users searching for violent extremist content to counter-narrative videos. (YouTube Official Blog)
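YouTube has not published how the Redirect Method is implemented; public descriptions indicate that searches associated with violent extremism surface curated counter-narrative content instead of, or alongside, ordinary results. The sketch below is only a rough illustration of that idea, with a hypothetical keyword list and playlist ID.

```python
# Illustrative sketch only: the query terms and playlist ID are hypothetical,
# and a real system would use far more sophisticated query classification.
EXTREMIST_QUERY_TERMS = {"example extremist slogan", "example recruiter name"}
COUNTER_NARRATIVE_PLAYLIST = "PL-counter-narratives-0000"  # hypothetical ID

def run_normal_search(query: str) -> list:
    """Placeholder for the platform's ordinary search pipeline."""
    return []

def handle_search(query: str) -> dict:
    """Serve counter-narrative content for flagged queries, normal results otherwise."""
    normalized = query.strip().lower()
    if any(term in normalized for term in EXTREMIST_QUERY_TERMS):
        return {"redirected": True, "playlist": COUNTER_NARRATIVE_PLAYLIST}
    return {"redirected": False, "results": run_normal_search(normalized)}
```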
October 2017: YouTube receives criticism for allowing the Russian propaganda media outlet RT to expand its presence on the site. (New York Times, Wall Street Journal)
Subsequent Policy Change(s):
- February 2018: YouTube adds notices below videos uploaded by news broadcasters that receive government funding in an attempt to improve transparency. (YouTube Official Blog)
November 2017: BuzzFeed News finds exploitative YouTube videos of children in compromising, predatory, or creepy situations. YouTube is also found to be auto-completing searches with pedophiliac terms. (BuzzFeed, The Verge)
Subsequent Policy Change(s):
- November 2017: YouTube announces that it has expanded its enforcement guidelines around the removal of content featuring minors. (YouTube Official Blog)
- December 2017: YouTube pledges to add additional human content reviewers, deploy machine-learning technology in areas including child safety and hate speech, release a regular report on flagging and video removal data, and implement stricter controls on advertising. (YouTube Official Blog)
2017–2018: YouTube Kids receives additional criticism for recommending unsuitable and sometimes disturbing videos to children, including conspiracy theory videos and clips of cartoon characters in inappropriate situations. (New York Times, Business Insider)
Subsequent Policy Change(s):
January–February 2018: YouTube creator Logan Paul draws backlash over content uploaded to YouTube, including footage of a suicide victim and of animal abuse. (Vox, The Verge)
Subsequent Policy Change(s):
- February 2018: YouTube suspends ads on Logan Paul’s videos and proposes additional policies concerning ads and video recommendations. (The Verge, YouTube Creators Blog)
February 2019: Nestle SA, Walt Disney Co., Epic Games, McDonald’s, and other major companies pull advertising spending from YouTube after video blogger Matt Watson posts a clip detailing how YouTube comments were used to “facilitate pedophiles’ ability to connect with each other.” Users had posted predatory comments on videos of young girls posing in front of a mirror, doing gymnastics, or engaging in other everyday activities. Watson also finds that if users click one of these videos, YouTube’s algorithms recommend similar content. (CNN, Bloomberg, AdWeek)
Subsequent Policy Change(s):
- February 2019: YouTube holds a conference call with representatives from major ad agencies as well as several other advertisers, and distributes a memo on “Expanding YouTube’s Efforts on Child Safety” to participants. YouTube claims to have removed more than 400 channels because of the comments they left, and to have disabled comments on tens of millions of videos featuring minors. (AdWeek)
April 2020: The EU Parliament drafts a report calling on the European Commission to introduce a “Know Your Business Customer” principle into the Digital Services Act, proposed legislation that would set rules and regulations on how tech companies like Google and Facebook police illegal content online. In the draft report, published in mid-April, Members of the European Parliament argue that “Services providers should verify the identity of their business partners, including their company registration number or any equivalent means of identification.” The proposed measure is meant to combat the proliferation of ads containing disinformation. (Politico Europe, CNBC, Google)
Subsequent Policy Change(s):
- April 2020: A week after the publication of the EU Parliament’s draft report, Google announces that it will require all advertisers to verify their identity. The company says users will start seeing the resulting disclosures when they click “Why this ad?” beginning in summer 2020, and that it will start verifying advertisers in the United States before expanding globally over several years. (Politico Europe, CNBC, Google)
August 2021: On August 15, the Taliban takes over the Afghan government and ramps up its presence on social media platforms. The events reportedly cause some confusion among tech companies about how to moderate Taliban content. The next day, YouTube declines to comment on the matter to Reuters. (New York Times, Recode, Reuters)
Subsequent Policy Change(s):
- August 2021: By August 17, YouTube claims that it removes Taliban content in accordance with U.S. sanctions law. A company spokesperson states in an email to Recode, “[I]f we find an account believed to be owned and operated by the Afghan Taliban, we terminate it.” A New York Times reporter also sends YouTube and Facebook links to accounts belonging to a Taliban spokesman, asking the companies to comment. YouTube fails to respond, but the accounts––which were created in September 2020––are removed. (Recode, Twitter, New York Times)