Report Comes as Facebook Is Also Facing Regulatory Action in Germany & U.S.
A U.K. parliamentary committee last week released a report highlighting Facebook’s ongoing inability to prevent bad actors from misusing the popular platform, concluding that self-regulation by the company, in the face of scandal after scandal, has failed catastrophically. Finding that Facebook only acts “when serious breaches become public,” the report recommends the company be made more liable for harmful content on its platforms. Facebook has rejected the report’s allegations, instead vaguely claiming that it has already made “substantial changes” to its policies and has worked to remove “bad content” from the platform, in part through the use of artificial intelligence.
“The House of Commons’ Digital, Culture, Media and Sport Committee should be commended for its commitment to uncovering the extent to which Facebook’s behavior and business model depend on exploiting its users’ privacy and sense of decency,” said Counter Extremism Project (CEP) Executive Director David Ibsen. “This is another in a long line of investigations that very clearly proves that self-regulation has failed. Action from public officials is very much needed to rein in these companies and protect citizens from the tech industry’s harmful effects. PR strategies centered on reactionary policy changes and meaningless claims about artificial intelligence are not going to cut it anymore.”
Last year, CEP compiled a comprehensive timeline and tracker of Facebook’s reactionary policy changes. The tracker shows Facebook repeating a familiar pattern: the company first rejects any claim of wrongdoing outright, then, in many cases, scrambles to save face by superficially changing some policies soon after. For example:
- In 2018, Channel 4 Dispatches revealed that Facebook was allowing right-wing content to remain on the website in violation of its rules because it generated “a lot of revenue for Facebook.” Facebook claimed it had already been removing content that violated its rules “no matter who posts it,” but still had to review and update its training for content moderators under mounting public pressure.
- In 2016, U.K. and European lawmakers flagged that Facebook had become a “recruiting platform for terrorism.” Facebook claimed it had already been dealing “swiftly and robustly with reports of terrorism-related content,” but still had to adopt a shared industry database with Microsoft, Twitter, and YouTube to slow the spread of terrorist content online.
- In 2011, the Federal Trade Commission charged Facebook with telling users they could keep their information private and then turning around and allowing that information to be shared and made public, including with advertisers. Facebook tried to downplay the findings by claiming it had already fixed many of the issues, but ultimately had to settle and agree to a number of FTC-imposed best practices going forward.
In addition to the U.K., Facebook faces pending regulatory action or oversight in Germany and the United States. In Germany, the country’s antitrust office ruled that Facebook must obtain its users’ explicit consent before combining data gathered from Facebook and non-Facebook sources into a single user profile. Under its NetzDG law, Germany also allows fines against online platforms that repeatedly fail to delete prohibited content within 24 hours. On February 14, it was reported that Facebook was negotiating with the U.S. Federal Trade Commission over a “record, multibillion-dollar fine for the company’s privacy lapses.”
To access CEP’s timeline and tracker of Facebook’s reactionary policy changes, please click here.