Facebook’s “Redirect Initiative” Puts Onus On Users To Identify Extremism

July 13, 2021 CEP Staff

In early July, Facebook began testing a pilot program on its platform asking some users if they had been “exposed to harmful extremist content” or were “concerned that someone you know is becoming an extremist.” Called the “Redirect Initiative,” the program displays pop-up alerts that take users to a support page if they choose to proceed. The program marks Facebook’s latest effort to fend off more than a decade of criticism over the misuse of its platform, particularly after the perpetrator of the Christchurch mosque shootings livestreamed his attack on Facebook Live and after far-right groups were found to have used Facebook groups and pages to promote violence during the 2020 U.S. presidential election.

“The Redirect Initiative is Facebook’s latest half measure to tackle extremism on its platform, in which users are asked to do the policing instead of the companies themselves,” said Counter Extremism Project (CEP) Executive Director David Ibsen. “By putting the onus on users, Facebook is deflecting from its responsibility to be more proactive about removing offending content. Moreover, Facebook’s initiative ignores a crucial root cause of the spread of extremist content—proprietary algorithms that have a perverse incentive to amplify divisive and controversial content to keep users on their sites and generate more revenue for the company.”

CEP Senior Advisor Alexander Ritzmann in June published a policy brief on the European Union’s Digital Services Act (DSA), titled “Notice And (NO) Action”: Lessons (Not) Learned From Testing The Content Moderation Systems Of Very Large Social Media Platforms, which found that “notice and action” systems do not work as intended. Under “notice and action,” users must first notify the platform of illegal content, and the platform then determines whether it should be removed. Based on six independent monitoring reports, Ritzmann found that the overall average takedown rate of illegal content reported through user notices by large social media platforms was a mere 42 percent. Despite these alarming findings, the draft DSA (Article 14) favors the “notice and action” mechanism as the main content moderation system. This system unrealistically expects the 400 million Internet users in the EU first to be exposed to illegal and possibly harmful content and then to notify the platforms about it.

In a recent webinar, CEP Senior Advisor Dr. Hany Farid highlighted the consequences of major tech platforms’ algorithmic amplification of misinformation and divisive content on the Internet. Regarding the role algorithmic amplification plays in the proliferation of harmful content, Dr. Farid stated, “Algorithmic amplification is the root cause of the unprecedented dissemination of hate speech, misinformation, conspiracy theories, and harmful content online. Platforms have learned that divisive content attracts the highest number of users and as such, the real power lies with these recommendation algorithms.”

To read CEP’s report, “Notice And (NO) Action”: Lessons (Not) Learned From Testing The Content Moderation Systems Of Very Large Social Media Platforms, please click here.

To watch a recording of CEP’s webinar, How Algorithmic Amplification Pushes Users Toward Divisive Content, please click here.
