It is my pleasure to introduce this guest column by two of UCSD’s finest: Matthew Allen, a medical student, and John Ayers, a professor with expertise in digital health, AI, and communication. Dr. Ayers’ team has led important work in the last year on ChatGPT, among other topics. Here they discuss Community Notes, a new feature on X (formerly Twitter) that is not censorship and not fact-checking, but a form of crowdsourced clarification.
Vinay Prasad MD MPH
"Misinformation" has been at the forefront of public debate for more than a decade. The debate has now entered medicine and public health where complaints about misinformation are plentiful but empirical strategies to solve the problem are rarer than unicorns.
Consider this: 881 articles across the JAMA Network mention “misinformation,” yet not one provides empirical evidence on how to counter it. This glaring gap encouraged us to lead the first independent evaluation of X’s (formerly Twitter’s) Community Notes with our colleagues at UC San Diego and Johns Hopkins, which we published in JAMA last year.
Community Notes is a crowdsourced feature that empowers lay users to propose appending context to potentially misleading X posts. When enough contributors from diverse perspectives agree that a note is helpful, it is shown in full alongside the original post, irrespective of who authored the post.
Our research, which analyzed all notes related to COVID-19 vaccines during the program's first year, found that Community Notes appended accurate information to misleading posts that garnered hundreds of millions of views. Unlike approaches that merely apply generic warning labels, notes counter misinformation precisely when and where it matters most.
There’s new cause to revisit our findings and their implications. Following other media companies that have implemented X’s approach, Meta CEO Mark Zuckerberg announced that Facebook, Instagram, and Threads will adopt a community-driven system similar to Community Notes, replacing their longstanding fact-checking program.
That fact-checking program had come under scrutiny during the pandemic through work by Alyson Haslam and Vinay Prasad of Sensible Medicine. They found the program relied disproportionately on fact-checkers who were active on Twitter. In one case, a fact-checker who had already tweeted their opinion of an article was then asked to fact-check it. This raised the question of whether the fact-checkers were performing an objective service or whether the program was merely recruiting people who had already decided they disliked a certain view.
Unlike traditional media outlets, which have criticized Facebook’s shift away from fact-checking, we welcome this move: it signifies progress in the fight against misinformation and aligns with practices supported by real-world evidence. However, progress comes with responsibility. For Meta’s new approach to succeed, it must adopt key features of Community Notes that we observed in our evaluation.
Transparency is Non-Negotiable
One of the most compelling aspects of X’s Community Notes program is its radical commitment to transparency. Unlike traditional content moderation systems that operate behind closed doors, Community Notes is fully open-sourced. This means the public can access the algorithms, data, and processes that underpin the system. This transparency is not just a technical feature; it is a deliberate strategy to build trust, accountability, and collaboration. Meta must follow suit and go further by making post-level data available for noted content: X makes all of its underlying post data available, while Meta presently shares no data outside of public Facebook groups.
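To make that openness concrete, here is a minimal sketch of the kind of independent analysis it permits, assuming a local copy of X’s publicly released notes file. The file name and column names below are illustrative assumptions and should be checked against the actual download.

```python
# Minimal sketch: exploring X's public Community Notes data release.
# X publishes notes, ratings, and note status history as flat files;
# the file name and columns here are illustrative assumptions.
import pandas as pd

notes = pd.read_csv("notes-00000.tsv", sep="\t")

# Restrict to notes whose free-text summary mentions vaccines,
# mirroring the kind of topic filter used in our evaluation.
vaccine_notes = notes[
    notes["summary"].str.contains("vaccin", case=False, na=False)
]

print(f"{len(vaccine_notes)} vaccine-related notes found")
print(vaccine_notes[["noteId", "createdAtMillis"]].head())
```

No comparable exercise is currently possible on Meta’s platforms, which is exactly the gap we are asking Meta to close.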
Strictly Define Misinformation
The 97.5% agreement between X’s Community Notes and medical experts that we observed stemmed in part from Community Notes’ narrow, objective focus on misinformation. Community Notes targeted clear factual inaccuracies, such as false claims about unsubstantiated adverse vaccine effects or conspiracy theories like the claim that vaccines contain 5G chips to control your mind.
In contrast, Meta’s historical model has suppressed contentious but reasonable debates that should remain a part of public discourse. The "less is more" approach of Community Notes not only preserved trust but also scaled effectively: the noted posts we studied were potentially viewed hundreds of millions of times, even though we examined only a narrow topic during the program’s first year.
Trust the Public
Community Notes is entirely controlled by the public. Even Elon Musk cannot alter or remove a Community Note, a fact underscored by instances where he himself was subject to correction. That its accuracy rivals that of medical experts is not accidental; it reflects how wisdom can emerge from diverse, independent crowds when they are appropriately empowered. In contrast, Meta has historically relied on conflicted expert opinions to decide how to identify and address misinformation. Because X’s Community Notes relies on a large group of lay users, the best evidence rises to the top.
Control for Ideological Biases
One of the most critical features of X’s Community Notes is its deliberate design to address ideological bias, ensuring that the system remains balanced and fair across the diverse spectrum of users. This approach matters because misinformation, and its correction, often intersects with deeply polarizing topics like politics. Specifically, notes are required to receive positive ratings from contributors with competing perspectives. For example, notes addressing posts about abortion will likely be written, edited, and/or evaluated by both pro-life and pro-choice X users. This mitigation of ideology means notes are more likely to be derived from evidence, not opinion.
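For the technically curious, below is a toy sketch of how such a bridging requirement can work, loosely patterned on the structure described in X’s open-source documentation. The data, names, and hyperparameters are illustrative assumptions, not the production algorithm: the idea is simply that each rating is split into a baseline, a rater leaning, a note score, and an ideology-alignment term, so approval that is explained by shared ideology does not count toward the note’s score.

```python
# Toy sketch of a bridging-style matrix factorization. Each rating is
# modeled as:
#   rating ~ mu + user_intercept + note_intercept + user_factor * note_factor
# Agreement explained by shared ideology is absorbed by the factor
# term, so only ideology-independent approval raises note_intercept,
# the note's helpfulness score. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_notes = 50, 10

# Toy ratings: 1 = helpful, 0 = not helpful, NaN = unrated.
rated = rng.random((n_users, n_notes)) < 0.5
R = np.where(rated,
             rng.integers(0, 2, (n_users, n_notes)).astype(float),
             np.nan)

mask = ~np.isnan(R)
cnt_u = np.maximum(mask.sum(axis=1), 1)  # ratings given per user
cnt_n = np.maximum(mask.sum(axis=0), 1)  # ratings received per note

mu = 0.0
u_int, n_int = np.zeros(n_users), np.zeros(n_notes)
u_fac = rng.normal(0, 0.1, n_users)
n_fac = rng.normal(0, 0.1, n_notes)
lr, reg = 0.1, 0.05

for _ in range(1000):  # plain gradient descent on masked squared error
    pred = mu + u_int[:, None] + n_int[None, :] + np.outer(u_fac, n_fac)
    err = np.where(mask, R - pred, 0.0)
    mu += lr * err.sum() / mask.sum()
    u_int += lr * (err.sum(axis=1) / cnt_u - reg * u_int)
    n_int += lr * (err.sum(axis=0) / cnt_n - reg * n_int)
    u_fac += lr * ((err @ n_fac) / cnt_u - reg * u_fac)
    n_fac += lr * ((err.T @ u_fac) / cnt_n - reg * n_fac)

# Notes whose support spans the factor spectrum keep a high intercept;
# a production system would show notes scoring above some threshold.
print("note helpfulness scores:", np.round(n_int, 2))
```

The design choice of scoring the intercept, rather than raw vote counts, is what lets a note succeed only when its support bridges the ideological spectrum.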
Teach, Don’t Censor
Censorship often backfires, as one of us wrote in the American Journal of Public Health, creating a boomerang effect where suppressed content becomes more desirable, and those censored are further isolated from public discourse, limiting their exposure to contradictory evidence. Community Notes avoids this by treating misinformation as a teachable moment, allowing users to see flagged posts alongside accurate notes. This transparency fosters learning rather than suppression, potentially helping users to recognize patterns in misinformation and become more resilient against being misled. By focusing on teaching instead of censoring, Community Notes empowers the public to critically evaluate information, promoting an enduring and collaborative fight against falsehoods.
Embrace the Scientific Method
The beauty of the scientific method lies in its impartiality: it sets aside assumptions and opinions, instead relying on data, experimentation, and reproducibility to uncover the truth. For example, despite a WHO executive’s claims of an infodemic of misinformation, objective evaluation suggests the COVID-19 pandemic coincided with an increase in reliable information. For systems like Community Notes to truly serve the public, they must be rigorously studied and refined. Meta, and other platforms following X’s lead, must open their systems to scientific scrutiny. This challenge extends to public health and medical scientists: complaining about misinformation is easy; creating and evaluating evidence-based interventions is the hard work we must collectively engage in. For example, we found a paucity of notes on X in non-English languages and could not ascertain how many views of misleading posts occurred before a note appeared. Others worry that requiring agreement between ideologically diverse contributors may leave misleading posts on controversial topics without notes. Iterative improvement guided by rigorous scientific evaluation could shore up the weak points of crowdsourced strategies.
A Path Forward
Misinformation is a complex and evolving challenge, but mitigating strategies are within reach. This is not about optics; it is about protecting public health. X’s Community Notes has shown what’s possible when transparency and innovation intersect. Meta’s embrace of a similar system is encouraging, but its success hinges on adopting the core principles that make Community Notes work and on a willingness to go further by embracing the features that empower X’s strategy. Let’s ensure this effort isn’t just another headline but a true leap forward in the fight against misinformation.
Matthew R. Allen is a medical student at UC San Diego with interests in health policy and informatics. John W. Ayers is a computational epidemiologist focused on getting the public back into public health and harnessing technologies for healthcare. He is on the faculty at UC San Diego.
“Consider this: 881 articles across the JAMA Network mention “misinformation,” yet not one provides empirical evidence on how to counter it.”
That is the problem: how do you determine what is misinformation? During the recent COVID pandemic, most of the misinformation came from Big Pharma and their facilitators, the CDC and FDA. Any opinion, even from independent experts, was labeled as misinformation. As for “false claims about unsubstantiated adverse vaccine effects”: it turns out most of the “conspiracy theories” were correct. These were not false claims; they were egregiously hidden from the public by the government with the support of social media, crushing all honest, experienced clinicians.
And they say “trust the science, because it is impartial”? Nothing could be further from the truth. I am a well-recognized research scientist with more than 56 publications. Yet most medical research is controlled by Big Pharma and is biased using one of four methods: suppression of all negative data; study designs that only show benefits; use of relative benefits (meaningless) instead of absolute benefits; and use of association studies to prove cause and effect.
“Trust the public”? Often 90% of the public is wrong, based on a society where our “truth” is really indoctrination into a political agenda, such as climate change. The editor-in-chief of the NEJM recently resigned because she could no longer in good conscience publish all the mutilated and biased research coming from the so-called medical experts. Community Notes appears to be just another, perhaps more sophisticated, approach to censoring the ultimate truth.
Now I just need to learn how to use "X." Any tutorials for old guys like me?