Is the Sky Falling? Evaluating Meta’s Move on Fact-Checking
By Will Conner, AEP Postdoctoral Fellow
On January 7, Mark Zuckerberg, CEO of Meta, announced major changes to the company’s content moderation policies. In a video titled “More Speech and Fewer Mistakes,” Zuckerberg stated that Meta will no longer employ third-party fact-checkers to flag misinformation on its platforms. Instead, Meta will shift to a crowdsourced “community notes” model of content moderation, similar to that of X (formerly Twitter).
The fact-checking change is one of several that Zuckerberg pitches as a return to Meta’s “roots”—namely, “giving people a voice” and supporting “free expression.” He stated that the company “tried in good faith” to address concerns about misinformation since 2016 by instituting and strengthening its fact-checking program. But, he argues, “the fact-checkers have just been too politically biased and have destroyed more trust than they’ve created—especially in the US,” adding that “we’ve reached a point where it’s just too many mistakes and too much censorship.” (Perhaps unsurprisingly, fact-checkers themselves disagree with Zuckerberg’s assessment.) Hence, the need to return to the company’s roots.
The ensuing discourse has been heated. Critics have painted a dire picture, expressing concern that these changes will accelerate the spread of harmful misinformation on Meta’s platforms, further erode trust in institutions, worsen polarization, and contribute to what some regard as our society’s already catastrophic epistemic crisis. Others are supportive. In their view, the change marks the end of a troubling censorship regime and the return of an important degree of autonomy to Meta’s users. Some on the right have even heralded the company’s decision as part of a broader post-election “vibe shift” away from what they regard as the excesses of the left. Of course, many have offered more measured takes, too, noting the risks of the decision while arguing that the shortcomings of third-party fact-checking and heavy-handed content moderation must be acknowledged, and stressing that the decision’s impact will depend on how well the crowdsourced community notes model works in practice. (On that front, here’s some promising news, but caution is advised.)
So… what should we think about the shift away from fact-checking? Some have argued that it’s part of a transparent attempt to curry favor with the incoming Trump administration. Be that as it may, the change deserves a hearing on its own terms in light of the reasons Zuckerberg provided for it. I want to think through this here. To that end, I’ll focus solely on Zuckerberg’s stated rationale for abandoning third-party fact-checking and avoid speculation about what other factors might have motivated him to make this decision.
I have two primary questions: (1) What should we make of Zuckerberg’s charge that fact-checking on Meta’s platforms has been too politically biased and undermined trust? (2) How impactful is Meta’s move away from third-party fact-checking likely to be? Will it be epistemically detrimental, or are these fears misplaced?
I’ll think through these questions in light of some recent work on misinformation and fact-checking by epistemologist Dan Williams, political scientist Joseph Uscinski, and psychologist Sacha Altay, who have raised critiques focusing on the definition and application of the term “misinformation.”
Let’s start with question (1). Critics have posed several challenges for misinformation research, many of which carry over to fact-checking. The first challenge asks: What is misinformation anyway? The definition of misinformation is hotly contested. While some define it narrowly as unambiguously false information, others include false information generally. Still others broaden the term to encompass misleading information, too—including technically true claims allegedly framed or contextualized in ways that mislead.
The failure to reach consensus on what constitutes misinformation creates serious practical problems. For one, it introduces inconsistency in what gets labeled as misinformation, which could undermine public trust in fact-checking. Moreover, broad definitions expand the scope of “misinformation” so far that misinformation researchers and fact-checkers who rely on them risk being seen as ideological gatekeepers rather than neutral arbiters, particularly in politically polarized contexts where it may be up for debate whether a particular claim is misleading. (For example, consider the controversy surrounding fact-checking organization PolitiFact’s 2011 “Lie of the Year” story.)
The second challenge that critics raise concerns bias. There are at least two reasons to take seriously the thought that ideological bias is prevalent within fact-checking programs. First, the political leanings of those conducting or informing fact-checking efforts may influence their practices. A 2023 survey of 150 misinformation researchers published in the Harvard Kennedy School Misinformation Review highlights this issue. Of the 148 participants who responded to a question soliciting their own political views, 43 identified as “slightly left-of-center,” 62 as “fairly left-wing,” and 21 as “very left-wing,” for a total of 126 somewhere on the political left—85% of respondents (Appendix A, p. 15). While these figures concern misinformation researchers rather than fact-checkers, they suggest that many who shape the discourse around the relevant issues lean left. In addition, since fact-checkers often rely on research by misinformation scholars, it seems reasonable to worry that ideological biases would influence their fact-checking practices.
Second, given the sheer volume of information circulating on social media, not all of it can be checked. Selection is necessary, but some argue that methodologies for selection are shoddy or opaque, and that fact-checkers’ own values, preferences, and beliefs—including their politics—play a significant role in determining what content to review. This could lead to false and/or misleading content promoting right-leaning viewpoints being checked and flagged more often than similar content with a left-leaning valence. (Whether this is happening is contested. Yuwei Chuai and his collaborators think this needs further investigation.)
A third challenge has to do with complexity. Fact-checkers often rely on a naive epistemology, treating claims about complex matters as if their truth or falsity were straightforward and easily discerned. Joseph Uscinski and Ryden Butler present evidence that fact-checkers sometimes label causal claims and predictions about complex policies “misinformation” too readily, despite the uncertainties inherent in such claims. Moreover, determinations of “the facts” about these issues may depend on the questions one asks, how a discussion is framed, and how the content being evaluated is contextualized, which are likewise matters of value-sensitive judgment. And even when striving for neutrality, fact-checkers may assess claims that have values subtly or non-obviously encoded within them (for example, claims about whether a policy is “fair” or “effective”). In assessing these claims, fact-checkers may inadvertently endorse or reject those underlying values, which falls outside their purview as arbiters of accuracy. Overlooking these issues out of epistemological naivety can lead to mistakes, questionable judgment calls, and overconfidence on the part of fact-checkers, undermining their credibility—especially when fact-checkers disagree with one another.
Given all this, Zuckerberg’s charge that fact-checkers have been biased and undermined trust might well be right. However, some caveats are in order. First, it’s unclear to what extent the general concerns I’ve raised apply to Meta’s fact-checking program specifically. We don’t have the evidence needed to determine that. That said, it’s at least plausible that such issues arose on Meta given trends in misinformation research and fact-checking generally.
Second, none of this entails that misinformation isn’t real or harmful. As Alex noted in the last AEP blog post, misinformation is still a thing. Although finding a satisfactory definition that can be operationalized for research purposes and reliably applied by fact-checkers in an ideologically neutral way has proven difficult, it remains true that many individuals believe egregious falsehoods, share these beliefs with others online, and act in accordance with them. (Although we must be careful about drawing causal inferences here—more on this below.)
Third, a full discussion of this issue must note that bias is itself a vexed and contested concept, the application of which is often ideologically motivated. It can be—and certainly has been—weaponized for nefarious ends, as charges of bias can themselves be used to shut down legitimate discussion and deflect warranted criticism.
Finally, none of this entails that fact-checking is entirely without merit, nor that Meta was right to suspend its fact-checking program. Whether these further claims are true depends on answers to a host of thorny questions—in particular, whether fact-checking can be reformed to lower the likelihood of bias, and whether conceptual issues regarding misinformation research can be addressed successfully—as well as on facts about the severity of the problems that fact-checking was intended to address, the efficacy of fact-checking in addressing those problems, and the efficacy and feasibility of alternatives like the community notes approach.
In sum, my overall view on question (1) is mixed but leans in favor of Zuckerberg’s critique. The charge that fact-checking risks political bias, undermines trust, and might shut down legitimate discussion is valid in important respects. Let’s now turn to question (2): how impactful is Meta’s shift away from fact-checking likely to be?
Suppose that the above critique is correct: Fact-checking is often (and perhaps unavoidably) biased, undermines trust, and compromises free expression. Still, you might think: “Although fact-checking is far from perfect, we need it. After all, we’re in the middle of an epistemic crisis defined by widespread belief in misinformation, which has had disastrous consequences. Getting rid of fact-checking is just an admission of defeat in the misinformation war.”
This attitude is widespread, especially on the political left, where the narrative of an epistemic crisis has gained significant traction. However, it increasingly seems to rest on mistaken beliefs about the prevalence, power, and pull of misinformation and the efficacy of fact-checking.
Claims about the prevalence of misinformation online have likely been exaggerated. The same goes for claims about its power to persuade, change behaviors, and influence major events, a power that has not been clearly established. This has led some to rethink why misinformation has the pull that it does, focusing less on its supposed persuasiveness and more on how it aligns with individuals’ preexisting beliefs and motivations. Contrary to the idea that misinformation often hoodwinks otherwise epistemically innocent—if gullible—victims, Williams argues that many who believe and share misinformation are buyers in a “marketplace of rationalizations,” seeking out post-hoc rationalizations for their mistaken preexisting beliefs. On this model, misinformation gets its pull by satisfying market demand.
Turning to the efficacy of fact-checking, meta-analyses paint a mixed picture. Chan and Albarracín (2023) found that “attempts to debunk science-relevant misinformation were, on average, not successful” (p. 1514), although corrections fared somewhat better when the relevant issue was not politically polarized. Walter et al. (2020) reported a moderate effect size for corrections of misinformation regarding politics. However, they note: “the effects of fact-checking on beliefs are quite weak and gradually become negligible the more the study design resembles a real-world scenario of exposure to fact-checking. For instance, though fact-checking can be used to strengthen preexisting convictions, its credentials as a method to correct misinformation (i.e., counterattitudinal fact-checking) are significantly limited” (p. 367). Fact-checking health misinformation fares better, according to Walter et al. (2021)—but not when the original purveyor of misinformation was one’s peer. Finally, even when fact-checking succeeds, at least initially, in lowering a believer’s confidence in misinformation, its salutary effects may not last very long.
These limitations are compounded by the fact that fact-checking fails to address the root causes driving belief in misinformation—namely, the psychological and social forces generating demand for it in the first place. (And there’s some reason, although far from conclusive, to think that if fact-checking were to successfully restrict the supply of misinformation on a given platform, demand would just shift elsewhere.)
Taken together, the evidence of fact-checking’s limited efficacy and its failure to address the root causes of belief in misinformation suggest that Meta’s decision to abandon fact-checking is unlikely to increase belief in misinformation or amplify its spread. The decision may also help restore some trust in Meta’s platforms by reducing perceptions of bias and censorship.
Taking a step back, the broader narrative of an epistemic crisis seems increasingly untenable. Falsehood, lies, and true-but-misleading assertions have always been a feature of politics and public discourse, and they don’t seem to be increasing in prevalence or power. What might be new is increased demand for misinformation due to a breakdown of trust in institutions that took place prior to the purported crisis, but fact-checking does not obviously address this problem.
So, to sum up, what should we make of the changes at Meta? Here’s my take.
First, we should acknowledge that fact-checking suffers from some deep conceptual difficulties and is likely to be biased, but whether this was true to a problematic extent in Meta’s particular case is unclear.
Second, if the prevalence and power of misinformation really have been exaggerated, and fact-checking really is of doubtful efficacy, then:
- If Meta’s fact-checkers really had “loaded the cannons for claims of bias and censorship” in ways that undermined users’ trust, the company’s shift to a less heavy-handed approach to content moderation is reasonable.
- The shift from third-party fact-checking to a community notes model won’t have much discernible impact, good or bad, on the uptake and spread of misinformation on Meta’s platforms. My verdict: The sky probably isn’t falling.