Big Tech Companies Cut Ethics And Safety Staff Following Section 230 Ruling

June 06, 2023 CEP Staff

In the wake of twin U.S. Supreme Court rulings that largely maintained the liability shield in Section 230 of the Communications Decency Act, major tech companies are sharply reducing the workforces responsible for content moderation. Citing a commitment to “do more with less,” Twitter executives announced that they had laid off 15 percent of the company’s trust and safety staff, while Meta eliminated 200 content moderation positions and cut at least 100 roles on Instagram’s integrity and responsibility team. Meanwhile, Google parent company Alphabet cut one-third of the staff dedicated to identifying misinformation and radicalization on its platforms.

The notable downsizing of trust and safety personnel at leading tech companies demonstrates the industry’s unwillingness to effectively curb extremist and terrorist content, which continues to proliferate.

Recently, the Counter Extremism Project (CEP) identified numerous accounts propagating extremist content across these platforms. In May, researchers located three Twitter accounts disseminating pro-ISIS propaganda. On Meta-owned Instagram, two pages, collectively reaching thousands of people, promoted a “European Fight Night” for an extreme-right German group. YouTube hosted a video interview in which Australian neo-Nazi Thomas Sewell made antisemitic statements and advocated for white supremacy.

Unfortunately, the tech industry is actively deprioritizing online safety given the Supreme Court’s latest rulings. These companies have a business incentive to increase engagement on their platforms—including by pushing terrorist content—and U.S. lawmakers must act to encourage better behavior from the tech industry.

In 2021, CEP and CEP senior advisor Dr. Hany Farid supported the introduction of the Protecting Americans from Dangerous Algorithms Act (PADAA), which would narrowly amend Section 230, lifting the liability shield when an online platform knowingly or recklessly deploys recommendation algorithms to amplify content relevant to, among other things, cases involving international terrorism.

Last month, Dr. Farid observed that “momentum is building for legislative reform,” as evidenced by the bipartisan support for legislation addressing children’s safety online, including one bill that would specifically remove “blanket immunity for violations of laws related to online child sexual abuse material.”

Reform, however, is far from a certainty. Alphabet (the parent company of Google and YouTube), Meta, and Twitter have collectively spent nearly $100 million on lobbying Congress since 2020, including on efforts to defeat legislation that would reduce Section 230 immunities for online platforms. Clearly, there is an aggressive and concerted effort to maintain the status quo. Legislators should resist tech’s efforts to obscure the ongoing spread of extremist and terrorist content to the detriment of public safety and security.
