On Monday, April 23, 2018, a day before G7 security ministers pressed tech companies to do more to combat the spread of extremism online, Facebook and YouTube both released updates on their progress in removing extremist content from their platforms. This long-overdue move follows the Counter Extremism Project's (CEP) longstanding calls for increased transparency and metrics reporting in tech’s counter-extremism efforts. Progress reports are a step in the right direction, but gaps remain in Facebook’s and YouTube’s approaches to combating extremism.
Gaps in content removal: In its statement, Facebook claims to have removed 1.9 million pieces of ISIS and al-Qaeda content from its platform in the first quarter of 2018, using technology that specifically focuses on “ISIS, al-Qaeda, and their affiliates.” Facebook CEO Mark Zuckerberg also recently told the U.S. Congress that the company removes 99 percent of ISIS and al-Qaeda content from its platform. If that figure is accurate, and the 1.9 million pieces removed in the first quarter of 2018 represent 99 percent of the ISIS and al-Qaeda content uploaded in that period, then the remaining 1 percent amounts to approximately 19,000 pieces of ISIS and al-Qaeda content from that timeframe left on the platform. That figure is far too high when we know that such content is constantly downloaded, shared, and re-uploaded. When violence and terrorism are among the possible results, only a 100 percent removal rate is acceptable. Furthermore, Facebook makes no mention of its progress in removing extremist and terrorist content produced by other groups. ISIS and al-Qaeda may be, as Facebook argues, “the groups that currently pose the broadest global threat,” but they are only two of the many groups that use online platforms to promote hateful ideologies. For example, CEP found multiple Hezbollah pages and profiles live on Facebook on April 9, 2018, and Facebook itself acknowledges the misuse of the Internet by white supremacists.
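To show how that estimate follows from Facebook’s own numbers, here is a minimal back-of-envelope sketch in Python. It assumes, as the statements imply, that the 1.9 million removed pieces represent 99 percent of the ISIS and al-Qaeda content uploaded in the quarter; the variable names are illustrative, not Facebook data fields.

```python
# Back-of-envelope estimate using only the figures cited in the article.
# Assumption: the 1.9 million removed pieces are 99% of all such content in Q1 2018.
removed_pieces = 1_900_000    # Facebook's reported Q1 2018 removals
claimed_removal_rate = 0.99   # Zuckerberg's stated removal rate

estimated_total = removed_pieces / claimed_removal_rate
estimated_remaining = estimated_total - removed_pieces

print(f"Estimated total ISIS/al-Qaeda content: {estimated_total:,.0f}")
print(f"Estimated content left on the platform: {estimated_remaining:,.0f}")  # roughly 19,000
```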
YouTube claims to have removed 490,667 pieces of content that promoted terrorism between October and December 2017, as well as additional content that was violent, hateful, or depicted harmful and dangerous acts. YouTube also claims that more than half of the videos removed for violent extremism had fewer than 10 views. But these figures should not distract from the fact that such videos remain on the platform. During April 2018, CEP still found official content from ISIS, al-Qaeda, and other Islamic extremist groups live on the platform, including several pieces of content with more than 10 views. Moreover, YouTube is not the only Google-owned platform used to propagate extremist content. Extremists also use platforms such as Google Drive and Google Photos to upload propaganda videos, and then distribute the links on other communication platforms such as Telegram. For example, on April 24, 2018, CEP identified two pieces of ISIS propaganda uploaded to Google Drive and four ISIS videos uploaded to Google Photos. Google not only needs to improve its removal rate for extremist content on YouTube, but also needs to apply the same policies and practices across all of its platforms.
Gaps in technology: Both Facebook and YouTube also stressed the important role that artificial intelligence (AI) and machine-learning technology have played in detecting extremist content on their platforms. Facebook claimed that 99 percent of the ISIS and al-Qaeda content it found in the first quarter of 2018 was “not user reported,” and in “most cases” was found thanks to technological advances. YouTube claimed that 6.7 million of the 8 million videos removed between October and December 2017 were flagged for review by machines rather than by humans. While technology is undoubtedly beneficial, Dr. Hany Farid, the world’s leading digital forensics expert and CEP's senior advisor, has warned against over-reliance on it, calling Zuckerberg’s view of AI “overly optimistic.” According to Dr. Farid, not only are advances in AI unlikely to continue at the same pace in the future, but AI systems are also “being used by adversaries to circumvent detection.” The constant discovery and resurfacing of extremist content on online platforms casts further doubt on AI’s effectiveness. Tech firms appear to be spinning the public and policymakers on the promise of AI without seriously discussing the technology’s drawbacks. AI may be an asset, but it is not a panacea for online extremism.
Gaps in transparency and accountability: Facebook and YouTube have made positive changes, but given tech companies’ track record of apologies and reactive policy changes, it is difficult to be fully confident that they are doing everything in their power to combat extremism and terrorism on their platforms. Furthermore, it is clear that tech giants act only when legislators, regulators, and paying advertisers begin voicing their concerns. Rather than working to ensure that the best possible preventative measures are in place, tech firms seem to make only the bare minimum of improvements when confronted with the threat of government regulation or the loss of advertising revenue. In the meantime, individuals can continue to access extremist content on platforms such as Facebook and YouTube, and some may be inspired to carry out violent terrorist attacks as a result.
Releasing progress reports is a good step, but tech companies have yet to submit the details of their counter-extremism efforts for third-party review. Facebook’s statement promises that “we work every day to get better.” But without a mechanism to truly evaluate that progress, the public can only hope that these companies are taking the problem as seriously as they claim before the next terrorist attack occurs.