Enhancing the Credibility of Generative AI through Partnerships with Media

By Jorge Lavalle de Zamacona

In just a few years, generative AI, and large language models (LLMs) in particular, has revolutionized content creation across industries including marketing, education, entertainment, and journalism. However, because these systems depend entirely on their training data, they can produce inaccurate or low-quality outputs that may be used, either inadvertently or intentionally, to spread misinformation (OECD, n.d.). To address this underlying issue, robust partnerships between AI developers, trusted media outlets, and fact-checking experts are essential. If AI is to become the norm for content creation, its outputs must be rooted in verified facts and transparency.

Defining the Problem: The Dangers of Misinformation

According to NewsGuard’s Misinformation Monitor, leading AI tools reproduced false claims in about 27% of cases tested (Maitland & Sadeghi, 2024). For example, Apple suspended its AI-generated news summaries after they produced fake or inaccurate headlines, including a false claim that U.S. Secretary of Defense nominee Pete Hegseth had been fired (Barrabi, 2025). The incident affected major news organizations whose coverage the service summarized, such as the BBC and The Washington Post.

There is also the element of malice, with third parties deliberately using AI to create false news. For example, Freedom House reported that the Venezuelan government used AI-generated videos for propaganda purposes, including a fabricated video of former U.S. President Joe Biden making homophobic remarks (Satariano, 2023). Similarly, an AI-generated website based in Pakistan sparked a hoax about a Halloween-themed parade in Dublin in October 2024, leading thousands to take to the streets for an event that never existed (Davis, 2024).

Proposing a Solution: AI-News-Fact-Checkers Partnership

A strategic partnership between generative AI developers, major media outlets, and disinformation experts would be key to addressing these challenges. The parties involved could include, but are not limited to:

  • AI developers: OpenAI, Google, Microsoft, Apple, Meta, Anthropic
  • News organizations: Reuters, Associated Press, BBC, The New York Times, CBC
  • Fact-checkers and disinformation experts: NewsGuard, Freedom House

How the Solution Would Work

Real-Time News Feeds and Licensing Agreements

Just as AI companies license their technology through APIs, media outlets could do the same for developers: news organizations could provide real-time, verified news through a licensed API. This approach would ensure that their intellectual property is respected and that they are fairly compensated, while granting AI systems access to credible journalism to ground their models’ outputs.
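As a rough illustration of how an AI developer might consume such a licensed feed, the sketch below filters incoming articles down to those that are both editorially verified and covered by a licensing agreement before they are used for grounding. All field names and feed entries are hypothetical; no real outlet exposes such an API today.

```python
from dataclasses import dataclass

# Sketch of filtering a licensed, verified news feed before grounding model
# outputs. All fields and feed entries are hypothetical.
@dataclass
class Article:
    outlet: str
    headline: str
    verified: bool   # passed the outlet's editorial verification
    licensed: bool   # covered by a licensing agreement with the AI developer

def usable_for_grounding(feed):
    """Keep only articles that are both editorially verified and licensed."""
    return [a for a in feed if a.verified and a.licensed]

feed = [
    Article("Reuters", "Markets close higher", verified=True, licensed=True),
    Article("unknown-blog.example", "Shocking claim!", verified=False, licensed=False),
]
print([a.headline for a in usable_for_grounding(feed)])  # → ['Markets close higher']
```

In practice, the verification and licensing flags would be set by the news organization’s side of the API, keeping editorial judgment with the publisher rather than the model developer.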

This is particularly relevant because approximately 67% of top news sites restrict access to AI bots over concerns about misuse of their content (Brewster, Fishman & Glick, 2024). As a result, AI systems often turn to questionable sources for training data instead of information verified by reputable organizations, contributing to what is known as “AI slop”: the flooding of the internet with low-quality, machine-generated content (Adami, 2024).

Training AI on Systemic Misinformation Patterns

Disinformation experts and media organizations could collaborate to create reliable training datasets for generative AI that highlight common real-world misinformation patterns, enabling models to flag suspicious sources (NewsGuard, 2025). For example, NewsGuard’s monthly reports could help identify heavily politically charged claims, and AI could be trained to recognize websites with few or no credible citations or references.
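A toy version of such source flagging might look like the heuristic below. This is not NewsGuard’s actual methodology; the domain list and threshold are invented purely to illustrate the idea of combining a fact-checker feed with a citation-count signal.

```python
# Toy heuristic (not NewsGuard's actual methodology) for flagging suspicious
# sources: known-unreliable domains, or pages with very few citations.
UNRELIABLE_DOMAINS = {"fake-news.example"}  # hypothetical fact-checker feed

def flag_source(domain: str, citation_count: int, min_citations: int = 2) -> bool:
    """Return True when a source should be treated as suspicious."""
    return domain in UNRELIABLE_DOMAINS or citation_count < min_citations

print(flag_source("fake-news.example", citation_count=5))  # True: listed as unreliable
print(flag_source("reuters.com", citation_count=0))        # True: almost no citations
print(flag_source("reuters.com", citation_count=8))        # False
```

A production system would replace both signals with curated, regularly updated data from the fact-checking partners themselves.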

Transparent Source Attribution

AI companies need to incorporate clear citations for the information in their outputs, especially for news stories and public-interest content. OpenAI has already implemented citations for its real-time web search (OpenAI, 2024), and disclaimers about the potential unreliability of AI-generated outputs are now common practice. This could be improved further by providing direct access to fact-checked training data. Additionally, a best-practices board overseeing AI model training, comprising members of reputable news organizations and fact-checking experts, could help ensure greater accuracy and transparency.
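The attribution step itself can be sketched very simply: pair a generated answer with the sources that grounded it, number them, and append a standard disclaimer. The function name and URL below are illustrative only, not any vendor’s actual API.

```python
# Minimal sketch of transparent source attribution: appending numbered
# citations and a disclaimer to a model answer. Names are illustrative only.
def attribute(answer: str, sources: list[str]) -> str:
    cites = "\n".join(f"[{i}] {url}" for i, url in enumerate(sources, start=1))
    disclaimer = "AI-generated summary; please verify with the cited sources."
    return f"{answer}\n\nSources:\n{cites}\n{disclaimer}"

out = attribute("The parade announcement was a hoax.", ["https://example.com/report"])
print(out)
```

The harder engineering problem, which this sketch deliberately omits, is tracking which sources actually informed each claim; that is where access to fact-checked, licensed data would matter most.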

Moving Forward: Making AI-Sourced Information Reliable

If generative AI and LLM developers want to take the technology to the next level and ensure even wider adoption, they need to demonstrate its reliability. Partnering with trusted information sources and organizations committed to factual accuracy is an excellent way to achieve this. Such partnerships could eventually extend to organizations in public health and education, ensuring the reliability of outputs created by large language models and other generative AI systems.

Most groundbreaking technologies experience a period of resistance, after which norms are established and they become widely accepted tools (Bauer, 1995). This happened with Wikipedia, Google, the internet itself, and even word-processing software. Now it is generative AI’s turn. By building fair partnerships with news outlets and fact-checkers, AI can gain the credibility boost it needs to become the new paradigm for content generation.

References

Adami, M. (2024, November 26). AI-generated slop is quietly conquering the internet. Is it a threat to journalism or a problem that will fix itself? Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/news/ai-generated-slop-quietly-conquering-internet-it-threat-journalism-or-problem-will-fix-itself

Barrabi, T. (2025, January 16). Apple blasted after AI-generated news summary falsely claims Pete Hegseth was ‘fired’: ‘Wildly irresponsible’. New York Post. https://nypost.com/2025/01/16/business/apple-blasted-after-ai-generated-news-summary-falsely-claims-pete-hegseth-was-fired-wildly-irresponsible/

Bauer, M. (Ed.). (1995). Resistance to New Technology: Nuclear Power, Information Technology and Biotechnology. Cambridge: Cambridge University Press.

Brewster, J., Fishman, Z., & Glick, I. (2024, September 16). AI Chatbots Are Blocked by 67% of Top News Sites, Relying Instead on Low-Quality Sources. NewsGuard. https://www.newsguardtech.com/special-reports/67-percent-of-top-news-sites-block-ai-chatbots/

Brewster, J., Wang, M., & Palmer, C. (2023, August 24). Plagiarism-Bot? How Low-Quality Websites Are Using AI to Deceptively Rewrite Content from Mainstream News Outlets. NewsGuard. https://www.newsguardtech.com/misinformation-monitor/august-2023/

Davis, B. (2024, November 1). Dublin: Chaos as thousands turn up for AI ‘hoax’ Halloween parade that didn’t exist. The Independent. https://www.independent.co.uk/news/world/europe/dublin-fake-halloween-parade-ireland-ai-advert-b2639505.html

Leingang, R. (2024, September 12). X’s AI chatbot spread voter misinformation – and election officials fought back. The Guardian. https://www.theguardian.com/us-news/2024/sep/12/twitter-ai-bot-grok-election-misinformation

Maitland, E., & Sadeghi, M. (2024, November). NewsGuard monthly AI misinformation monitor of leading AI chatbots. NewsGuard. Retrieved from https://www.newsguardtech.com/special-reports/ai-tracking-center/

NewsGuard. (2025, January 13). Tracking AI-enabled Misinformation: Over 1100 ‘Unreliable AI-Generated News’ Websites (and Counting), Plus the Top False Narratives Generated by Artificial Intelligence Tools. https://www.newsguardtech.com/special-reports/ai-tracking-center/

OECD. (n.d.). Generative AI: Risks and unknowns. Retrieved January 22, 2025, from https://oecd.ai/en/genai/issues/risks-and-unknowns

OpenAI. (2024, October 31). Introducing ChatGPT search. https://openai.com/index/introducing-chatgpt-search

Panditharatne, M., & Hasan, S. (2024, October 21). How to rein in Russia’s evolving disinformation machine. TIME. https://time.com/7095506/russia-disinformation-us-election-essay/

Satariano, A. (2023, October 4). How generative AI is boosting the spread of disinformation and propaganda. MIT Technology Review. https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/

Virginia Tech News. (2024, February 22). AI and the spread of fake news sites: Experts explain how to counteract them. https://news.vt.edu/articles/2024/02/AI-generated-fake-news-experts.html

Jorge is a multiplatform writer and storyteller with over a decade of experience as an editor, copywriter, and screenwriter across North America. He is passionate about researching the integration of generative AI in the creative media industry and its long-term impact. He holds a Bachelor’s degree in Journalism from the University of Navarra in Pamplona, Spain, and a postgraduate certificate in Film and Multiplatform Storytelling from Humber Polytechnic in Toronto, Canada. Currently, he is pursuing a second postgraduate certificate as a Research Analyst at Humber. This article grew out of an assignment on social-structure reflections in the Research in Society: Enterprise and Governments class, which he revised and resubmitted for publication here.

Read Jorge’s award-winning blog here.
