Meta announced the termination of its global fact-checking program and a transition to a community-driven “community notes” system similar to the one used by X (formerly Twitter), NBC News reports.
The shift marks a major overhaul of content moderation policies across Facebook, Instagram, and Threads, emphasizing free speech and simplifying rules. The changes will begin in the U.S., with global expansion expected in the coming months.
Under the existing fact-checking program, Meta partnered with over 90 independent organizations worldwide, including AFP and Soch Fact Check in Pakistan, to review flagged content in more than 60 languages.
Fact-checkers applied ratings such as “False,” “Altered,” “Partly False,” and “True” to curb the spread of misinformation. These ratings affected content visibility, demoting flagged posts in user feeds. Decisions to remove content, however, rested solely with Meta.
Meta CEO Mark Zuckerberg criticized the system, stating, “Fact-checkers have destroyed more trust than they’ve created, especially in the U.S.”
He argued that the program had become overly politicized, alienating users and undermining its credibility. Instead, the “community notes” system will enable users to collaboratively add context to posts, fostering a more open and decentralized approach.
Meta’s fact-checking partnerships, including those in Pakistan, were integral to combating misinformation about public health, elections, and global crises.
However, the company faced criticism for perceived biases, particularly in politically charged regions. The new model aims to reduce dependence on centralized entities and empower users to self-moderate.
In addition to ending fact-checking, Meta announced the relocation of its trust and safety teams from California to Texas. Zuckerberg framed the move as an effort to rebuild trust in regions with differing political and cultural dynamics.
He also confirmed a return of more political content to feeds, reversing policies implemented after the 2020 U.S. election that reduced such visibility.
While Meta will continue aggressive moderation of high-risk content, such as terrorism, drugs, and child exploitation, it plans to scale back automated content removal for less severe violations.
Zuckerberg acknowledged this trade-off, stating, “We’re going to catch less bad stuff, but we’ll also reduce the number of innocent posts wrongly taken down.”
The changes follow broader industry shifts, with platforms like X also favoring user-driven moderation. Meta’s move has sparked debate over the efficacy and fairness of decentralized systems, particularly in regions where misinformation and propaganda thrive.
Despite these concerns, Zuckerberg reiterated the company’s commitment to promoting free expression and reducing errors in content moderation.