By Rin Hrahsel
Social media platforms have become fertile ground for the rapid spread of misinformation. Among these platforms, Twitter stands out for its extensive user base and real-time information sharing. The proliferation of misinformation on Twitter poses a significant challenge, as false or misleading claims can quickly gain traction and shape public opinion. The problem was especially pronounced during the COVID-19 pandemic, when a flood of erroneous claims about treatments, vaccines, and transmission circulated on Twitter, sowing confusion and potentially compromising public health (CTVNews, 2022).
In recent years, the surge in misinformation on social media has prompted various stakeholders to explore potential solutions. A widely cited study from the Massachusetts Institute of Technology (MIT) found that false news on Twitter spreads farther and faster than true stories (MIT News Office, 2018), underscoring how quickly unchecked falsehoods can outpace corrections. Fact-checking plays a pivotal role in curbing the amplification of falsehoods and equips users with the tools needed to make well-informed decisions.
Addressing misinformation on social media requires innovative approaches. One such approach involves harnessing "crowd fact-checkers": ordinary users entrusted with verifying the accuracy of information (Drolsbach & Pröllochs, 2023). Several studies have shown that crowd fact-checkers can accurately assess the veracity of social media content. Recent findings indicate that, in their community-based fact-checking role, they focus more on posts from popular accounts with large followings, whereas expert fact-checkers generally target posts from less popular users. The findings also suggest distinct dissemination patterns for different types of misinformation, including factual errors and manipulated media.
In a recent study, researchers sought to address this gap by investigating the dissemination patterns of misleading and non-misleading posts subject to community-based fact-checking (Drolsbach & Pröllochs, 2023). For their analysis, they leveraged a dataset from Twitter's Birdwatch program, a community-driven fact-checking initiative. This approach marked a departure from earlier studies, which primarily focused on misinformation catalogued on fact-checking websites such as Snopes. Notably, the study found that community fact-checked misleading posts were shared less than non-misleading ones: misleading posts garnered 37% fewer retweets than their non-misleading counterparts (Drolsbach & Pröllochs, 2023).
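To make that kind of comparison concrete, the short Python sketch below shows one way a retweet gap between misleading and non-misleading fact-checked posts might be computed from a Birdwatch-style dataset. The file name and column names (note_rating, retweet_count) are illustrative assumptions, not the actual Birdwatch data schema or the authors' code.

```python
# Minimal sketch: average retweet counts for misleading vs. non-misleading
# fact-checked posts. File and column names are hypothetical placeholders,
# not the real Birdwatch/Community Notes schema.
import pandas as pd

# Assumed columns: "tweet_id", "note_rating" ("misleading" / "not_misleading"),
# and "retweet_count" for each fact-checked tweet.
df = pd.read_csv("birdwatch_fact_checked_tweets.csv")

mean_retweets = df.groupby("note_rating")["retweet_count"].mean()
misleading = mean_retweets["misleading"]
not_misleading = mean_retweets["not_misleading"]

# Relative difference, analogous in form to the "37% fewer retweets" finding.
pct_fewer = (not_misleading - misleading) / not_misleading * 100
print(f"Misleading posts received {pct_fewer:.0f}% fewer retweets on average.")
```

A published analysis would of course control for confounders such as account follower counts and tweet age; this sketch only illustrates the shape of the comparison.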

To validate the credibility of community-generated fact-checks, the researchers also conducted a user study, which found that users generally agreed with the fact-checks produced by the community (Drolsbach & Pröllochs, 2023). These findings deepen our understanding of how misleading and non-misleading content spreads on social media and underscore the importance of careful sample selection when studying misinformation in these contexts.
When users encounter misleading content, they can report it for fact-checking, fostering a collaborative, decentralized approach to combating misinformation. A notable example is Twitter's Birdwatch initiative, which lets users append contextual notes to tweets and rate their accuracy. Related research from Stanford (Stanford University, 2019) found that high school students are largely unequipped to spot "fake news" online, underscoring why platform-level fact-checking interventions matter. At the same time, scaling up fact-checking efforts, moderating them effectively to prevent misuse and mitigate bias, and managing potential partisan disagreements in the fact-checking process all warrant further exploration.
Together, these studies contribute valuable insights into the propagation dynamics of misleading and non-misleading posts on social media. They underscore the significance of judicious sample selection when investigating misinformation and highlight the need to study how fact-checked content diffuses within online communities. These insights equip researchers and practitioners to devise more effective strategies for countering misinformation and promoting the spread of accurate information.
References and Further Reading
CTVNews. (2022, November 29). Twitter ends enforcement of Covid misinformation policy. https://www.ctvnews.ca/health/coronavirus/twitter-ends-enforcement-of-covid-misinformation-policy-1.6173672
Drolsbach, C., & Pröllochs, N. (2023, March 24). Diffusion of community fact-checked misinformation on Twitter. arXiv.org. https://arxiv.org/abs/2205.13673
MIT News Office. (2018, March 8). Study: On Twitter, false news travels faster than true stories. Massachusetts Institute of Technology. https://news.mit.edu/2018/study-twitter-false-news-travels-faster-true-stories-0308
Stanford University. (2019, November 18). High school students are unequipped to spot “fake news.” Stanford News. https://news.stanford.edu/2019/11/18/high-school-students-unequipped-spot-fake-news/
Rin Hrahsel is an aspiring content strategist and research analyst skilled in marketing, advertising, and qualitative and quantitative research methods, and a graduate of Humber College's Research Analyst Program. His digital marketing skills are an asset, and with hands-on retail experience at Starbucks, he works well in fast-paced environments while providing the utmost quality to customers.