Tech & Terrorism: Tech Companies That Algorithmically Amplify Terrorist Content Should Not Receive Section 230 Immunity

(New York, N.Y.) – As calls to reform Section 230 of the Communications Decency Act continue, lawmakers in Congress have begun shifting their approach from pursuing sweeping reform to tailoring changes to specific types of content. For instance, one proposal would amend Section 230 to remove a platform’s liability immunity if its algorithms amplify, recommend, or promote hateful and terrorist content.

Last June, Counter Extremism Project (CEP) Senior Advisor Dr. Hany Farid testified before Congress on the issue of content amplification. He explained that the tech industry’s business model is based on increasing engagement among its user base and that divisive content, whether conspiratorial or terrorist in nature, does just that. “The point is not about truth or falsehood, but about algorithmic amplification. The point is that social media decides every day what is relevant by recommending it to their billions of users. The point is that social media has learned that outrageous, divisive, and conspiratorial content increases engagement … The vast majority of delivered content is actively promoted by content providers based on their algorithms that are designed in large part to maximize engagement and revenue.”

The role algorithmic amplification plays in content consumption is an issue that must be confronted. Last March, Dr. Farid co-authored a report analyzing YouTube’s policies and its efforts to curb its algorithm’s tendency to spread conspiracy theories. After reviewing eight million recommendations over 15 months, researchers determined that the progress YouTube claimed (a 50 percent reduction, announced in June 2019, in the amount of time users spent watching recommended conspiracy videos, followed by a 70 percent reduction announced in December 2019) did not make the “problem of radicalization on YouTube obsolete nor fictional.” The study, A Longitudinal Analysis of YouTube’s Promotion of Conspiracy Videos, found that a more complete analysis of YouTube’s algorithmic recommendations showed that conspiratorial recommendations are “now only 40 percent less common than when YouTube’s measures were first announced.”

Absent action from Congress, tech companies must do a better job at moderating their platforms. In a recent op-ed for Morning Consult, CEP Executive Director David Ibsen provided a realistic and well-defined step tech companies can take to more effectively remove extremist content and individuals from their platforms. “CEP has long argued for tech industry removal policies that are transparent and based on established standards and laws. For example, we have called for social media platforms to ban participation from U.S.-designated Foreign Terrorist Organizations (FTO) and Specially Designated Nationals (SDN). Such a commonsense approach will help ensure that the tech industry can focus on a clear and defined set of targets and be held accountable when companies fail to take effective and permanent action against actual extremists with a history of advocating for violence or carrying out terrorist attacks.”
