Tech Companies Fail to Curb Online Abuses

October 03, 2019 CEP Staff
Dr. Hany Farid: More Must Be Done to Stop Child Exploitation, Deepfakes

Child exploitation content on the Internet has dramatically increased, and current efforts by tech companies and law enforcement to prevent and remove it are not enough. In a recent New York Times article, Counter Extremism Project (CEP) Senior Advisor Dr. Hany Farid, an expert in digital forensics, criticized tech companies for their reluctance to “dig too deeply” into ongoing abuses on their platforms. Farid stated, “The companies knew the house was full of roaches, and they were scared to turn the lights on … And then when they did turn the lights on, it was worse than they thought.”

Farid, a professor in the Department of Electrical Engineering and Computer Science and in the School of Information at the University of California, Berkeley, has been a persistent critic of the lackluster commitment from technology companies to combat the spread of child exploitation images and videos and online extremism. He is also a leading expert on efforts to detect and prevent the proliferation of deepfakes and other forms of online disinformation.

Farid was part of the team that developed PhotoDNA, a free technology that uses robust hashing algorithms to detect known images of child exploitation. Using similar robust hashing techniques, Farid and CEP developed and announced eGLYPH in 2016, which can detect not only the worst-of-the-worst extremist and terrorist images but also video and audio files. In May, Farid discussed the evolution of PhotoDNA into eGLYPH and its potential for moderating extremist content and other online abuses. He noted that the technology was developed and made freely available to counter the contention from tech companies that nothing could be done to prevent the proliferation of extremist content. Farid urged everyone from academics to legislators to insist on accountability from tech companies.
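PhotoDNA and eGLYPH themselves are proprietary, but the general robust-hashing idea behind them can be illustrated with a much simpler perceptual hash. The Python sketch below uses a basic difference hash and a Hamming-distance comparison against a set of hashes of known content; the function names, the 64-bit hash size, and the match threshold are illustrative assumptions, not the algorithms Farid's teams built.

```python
# Illustrative sketch of robust (perceptual) hashing, NOT the actual
# PhotoDNA or eGLYPH algorithms, which are proprietary.
# Idea: downscale the image, compare neighboring pixels to form a compact
# fingerprint, then flag content whose fingerprint is within a small
# Hamming distance of a known-bad hash.

from PIL import Image  # assumes the Pillow imaging library is installed


def dhash(path: str, hash_size: int = 8) -> int:
    """Compute a 64-bit difference hash of the image at `path`."""
    # Grayscale, then resize to (hash_size + 1) x hash_size pixels.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")


def matches_known(candidate: int, known_hashes: set, threshold: int = 10) -> bool:
    """True if the candidate hash is close to any hash of known material."""
    return any(hamming(candidate, h) <= threshold for h in known_hashes)
```

A perceptual hash is used here rather than a cryptographic one because re-encoding, resizing, or lightly editing a file changes every bit of a cryptographic hash but leaves a perceptual hash close to the original, which is what makes matching uploads against a database of known material practical.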

Farid has also stated that disinformation and so-called “deepfakes” are part of the next frontier of digital warfare. Last week, he testified before the U.S. House Committee on Science, Space and Technology, discussing deepfakes, disinformation, and the role technology companies could play in combating them. He stated, “Despite efforts by digital forensic researchers, no current technology exists that can contend with the vast array of different types of deepfakes at a speed and accuracy that can be deployed at internet-scale.” Farid also discussed what Congress can do to hold large tech companies accountable for deterring deepfakes and disinformation posted on their platforms.

Large technology companies such as Google and Facebook continue to avoid accountability for false content and misuse of their platforms. In the final episode of a video series developed with CEP, Farid identified deepfakes and disinformation as the forefront of current and future crises in online extremism. Tech companies and media platforms must be held responsible for the extremism disaster they helped create, and the general public needs to demand that they do more to fight deepfakes, disinformation, child exploitation, and extremist content online.
