Meta Ends Fact Checking Programs


By Skye Milburn

     On Tuesday, January 7th, 2025, Meta CEO Mark Zuckerberg, the third richest man in the world according to Forbes, revealed significant changes to the company's moderation practices and policies.

He framed the move as a response to an evolving political and social environment and a commitment to upholding free speech. Zuckerberg announced that Meta will discontinue its fact-checking programs with reliable partners and replace them with a community-based system modeled on X's Community Notes.

     The company is also revising its content moderation rules on political issues and reversing earlier changes that reduced the volume of political content in users' feeds. These changes will heavily affect Instagram and Facebook, which are among the largest, if not the largest, social media platforms in the world, and could cost them millions or even billions of users worldwide.

     Mark Zuckerberg released a video announcing the changes: “We’re going to get back to our roots and focus on reducing mistakes, simplifying our policies and restoring free expression on our platforms,” he stated. “First, we’re going to get rid of fact checkers and replace them with community notes similar to X, starting in the U.S.”

He also said that the recent election was a major influence on the decision, and criticized governments and legacy media for what he characterized as increasingly pushing for censorship.

     He also stated that the systems the company had built to supervise and moderate its platforms and algorithms were making too many mistakes, and that the new approach would continue to aggressively moderate content related to drugs, terrorism, and other serious matters.

     Alongside ending the fact-checking programs, the company has also decided to drop some policies around immigration, gender, and other issues, refocusing its automated moderation systems on what Zuckerberg called “high severity violations.” He also plans to move Facebook’s trust and safety and content moderation teams out of California to Texas.

     Meta’s original fact-checking program launched in 2016 and worked by routing content to third-party fact-checkers certified by the International Fact-Checking Network. The program grew to include more than 90 organizations fact-checking in over 60 languages. In the U.S., these organizations include PolitiFact and FactCheck.org.

     Meta said it could identify posts on its platforms that might be spreading misinformation based on how people responded to certain content, as well as how quickly those posts spread.

Independent fact-checkers would also be looking for misinformation on their own, further extending the program's reach. Posts flagged as possible misinformation were ranked lower in users' feeds while they were reviewed.

     The aftermath of removing these fact-checking programs could be brutal for Meta. Zuckerberg argues that ending what he calls censorship will benefit his platforms, and that fact-checking led to more forceful censorship, but in reality the two have less to do with each other than he suggests.

The decision to penalize or remove posts rests entirely with Meta, not with the fact-checkers. Fact-checkers don't have the ability to censor posts or decide what shows up on users' feeds; all they do is provide accurate information to help ensure quality content for users.

Any censorship came from Meta itself, so getting rid of fact-checking programs in the name of reducing censorship doesn't make much sense.
