When thinking about generative AI and its disruptive impact, text generation is often cited as the most representative example of this new chapter in technological advancement. Large language models (LLMs) are rapidly transforming sectors whose core work involves text generation tasks such as writing, drafting and summarisation, and the online news industry has been grappling with how to adapt to these new tools since GPT (generative pre-trained transformer) models reached the mass public in late 2022.
Broadly speaking, publishing industries have traditionally been early adopters of new technologies, with a long tradition of innovation that traces back to the invention of writing and evolved through the introduction of paper, printing presses, automated processes and, more recently, the digital revolution and online publishing. This has increasingly placed the news industry in a tension between the need for innovation and the need for trustworthy, reliable and responsible information delivery: an opening to new possibilities, but also a source of serious challenges. For these reasons, journalism has become a relevant case study for understanding the uses, implementations and limitations of LLMs in the workplace (Tseng et al., 2025). This blog post provides an overview of how LLMs are being integrated into newsroom workflows, covering opportunities, risks, why explainability and fairness are fundamental, and other critical considerations for ensuring these powerful tools align with society’s collective values.
LLMs in Online News: What for?
With the advancement of technology, the online news industry has repeatedly faced the challenge of adapting its work routines to new tools and methods. This need for innovation has met varied receptions among journalists, who react differently to such changes depending on factors such as newsroom size, ownership structure and the drivers behind the innovation, such as user needs or competition with other publishers. As Tseng et al. (2025) note, “uptake of automation has historically been slow among journalists”, due to the potential disruption to workflows and deadlines. Today, newsrooms struggle with limited resources while public demand for content continues to rise, often trapping journalists in a web of tedious tasks aimed at user engagement. It is for these reasons that the uptake of LLM-based technologies in newsrooms is, as of now, mostly driven by their capacity to enhance efficiency and offer new, useful functionalities to the workforce.
As a result, the adoption of AI in newsrooms is increasing: over 80% of newsrooms in North America currently leverage AI, a substantial increase from just 37% in 2019 (Tseng et al., 2025). LLMs are helping more and more systematically with technical tasks, such as automating the creation of “template-ready” articles for sport or weather reports. Beyond basic automation, AI tools are increasingly being used to dig through large volumes of public records, transform raw data into interactive visualisations and streamline tedious tasks such as data cleaning and preparation. Newsrooms are also experimenting with AI to improve discoverability, optimise headlines for audience engagement and flag potentially misleading texts for fact-checking. Furthermore, AI systems are being used to mine newsroom archives, discovering patterns in large datasets.
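To make the idea of “template-ready” automation concrete, here is a minimal sketch that fills a fixed weather-report template from structured forecast data. The template, field names and wording are illustrative assumptions rather than any newsroom’s actual pipeline; in practice an LLM would typically be layered on top to rephrase or enrich the filled draft.

```python
# Minimal sketch of "template-ready" article automation: structured data in,
# publishable boilerplate text out. Field names and phrasing are illustrative.

WEATHER_TEMPLATE = (
    "{city} can expect {condition} on {day}, with temperatures between "
    "{low}°C and {high}°C. {extra}"
)

def render_weather_report(record: dict) -> str:
    """Fill the weather template from a structured forecast record."""
    extra = "Residents are advised to carry an umbrella." if record["rain"] else ""
    return WEATHER_TEMPLATE.format(
        city=record["city"],
        condition=record["condition"],
        day=record["day"],
        low=record["low"],
        high=record["high"],
        extra=extra,
    ).strip()

if __name__ == "__main__":
    forecast = {
        "city": "Edinburgh", "day": "Saturday", "condition": "light rain",
        "low": 9, "high": 14, "rain": True,
    }
    print(render_weather_report(forecast))
```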
Lastly, LLMs have changed not only how news is produced, but also how users consume it. Audiences can now rely on third-party news aggregators and retrievers to generate tailor-made reports and dive into a topic in a personalised and in-depth way. Similarly, a growing number of online newspapers are starting to implement their own LLM-based assistants. One of the first to do so was The Washington Post, which deployed a RAG (retrieval-augmented generation) chatbot to help users navigate its knowledge base and pool of articles (The Washington Post, n.d.).
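To illustrate what a RAG assistant does under the hood, the sketch below shows the basic retrieve-then-generate loop: select the most relevant archive articles, then ask a model to answer using only those. The keyword-overlap retrieval and the `call_llm` placeholder are simplifying assumptions for illustration only; they do not reflect The Washington Post’s actual implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG) over a news archive.
# Retrieval here is a naive keyword-overlap score; a real system would use
# dense embeddings and a vector index. `call_llm` is a placeholder, not a real API.

ARCHIVE = [
    {"title": "City council approves new transit plan", "text": "The council voted ..."},
    {"title": "Local elections: what you need to know", "text": "Polls open at ..."},
]

def retrieve(query: str, docs: list[dict], k: int = 2) -> list[dict]:
    """Rank archive articles by how many query words they share (toy retrieval)."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d["text"].lower().split())),
        reverse=True,
    )
    return scored[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for an actual model call (e.g. a hosted LLM API)."""
    raise NotImplementedError("Plug in the newsroom's model of choice here.")

def answer(query: str) -> str:
    """Build a grounded prompt from retrieved articles and ask the model."""
    context = "\n\n".join(f"{d['title']}\n{d['text']}" for d in retrieve(query, ARCHIVE))
    prompt = (
        "Answer the reader's question using only the articles below, "
        "and cite their titles.\n\n"
        f"{context}\n\nQuestion: {query}"
    )
    return call_llm(prompt)
```

The design choice that matters here is that the model is asked to answer only from the retrieved articles, which is what lets the assistant point readers back to the newsroom’s own reporting rather than to unverifiable model memory.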
Why Explainability and Fairness are Non-negotiable
Journalists often respond to these changes with a combination of curiosity and concern, frequently raising issues of fairness and explainability as central points of discussion.
Tensions are particularly high around the use of journalistic data to train AI models, with many news organisations raising concerns about tech companies profiting from content scraped from their platforms without any meaningful revenue-sharing mechanisms. These misaligned incentives lead to opposing responses, ranging from collaborations with large AI companies to lawsuits over copyright infringement.
Other concerns from the creators’ side relate to the value of journalism itself. When the Italian newspaper Il Foglio announced a new supplement named Il Foglio AI, “the world’s first newspaper made entirely with artificial intelligence” (Il Foglio, 2025), reactions from the international press were mixed, with Politico’s response being a particularly worried one, ironically pointing out: “It’s one thing to try to get rid of journalists who have actual reporting skills, who produce investigations for the public good on a daily basis and who know what their readers need to know. But irony? That can never be replaced. What’s next, AI humour columns?” (Poloni, 2025). Indeed, Il Foglio AI aims to provide a parallel voice to that of human journalists, an “AI version” of all kinds of articles, from political commentary to witty and humorous satire. It is an interesting experiment that sheds light on the need for meaningful disclosure of the technologies used and of the alignment strategies adopted by the editorial board in implementing such a tool.
Transparency is a core element here: users interacting with AI-generated content increasingly expect accurate and transparent information, especially when asking practical or socially relevant questions. Inaccurate or misleading responses erode trust and can have serious consequences, particularly in sensitive domains like politics and social issues.
It is for these reasons that trust must be built upon more than factual correctness. For AI-generated content to be integrated organically, users need to be able to understand how and why an LLM produces a specific piece of text. Explainability works in this direction, aiming to integrate features that offer insight (such as source transparency, score indicators and textual or graphic explanations) and help make sense of the “black box” nature of foundation models. These tools can allow users to evaluate the validity of responses and avoid blindly accepting answers from a system that can “provide incorrect responses with high confidence like a confident intern” (Leiser et al., 2023).
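As a hypothetical illustration of what such explainability features could look like in practice, the sketch below attaches simple metadata to a generated answer: the sources it drew on and a rough “support score” a reader-facing interface could display. The `ExplainedAnswer` structure and the word-overlap scoring are toy assumptions meant only to show the idea, not a production-grade attribution method.

```python
# Toy sketch of explainability metadata for a news chatbot answer: which
# sources were used and a rough support score. The scoring heuristic is a
# simple word-overlap measure chosen for illustration, not a real technique.

from dataclasses import dataclass

@dataclass
class ExplainedAnswer:
    text: str
    sources: list[str]      # titles/URLs of the articles the answer drew on
    support_score: float    # 0..1, how much of the answer is covered by sources

def compute_support(answer: str, source_texts: list[str]) -> float:
    """Fraction of answer words that appear in at least one source (toy metric)."""
    answer_words = set(answer.lower().split())
    source_words = set(" ".join(source_texts).lower().split())
    return len(answer_words & source_words) / max(len(answer_words), 1)

def explain(answer: str, sources: dict[str, str]) -> ExplainedAnswer:
    """Bundle the generated text with its sources and a support score for the UI."""
    return ExplainedAnswer(
        text=answer,
        sources=list(sources.keys()),
        support_score=round(compute_support(answer, list(sources.values())), 2),
    )
```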
Equally relevant are concerns around algorithmic bias. LLMs, trained on massive datasets, tend to reflect the biases present in their sources (Cremaschi et al., 2023; Jones et al., 2025). This can result in stereotyping, underrepresentation of marginalised identities and minorities, or reinforcement of existing social hierarchies and power dynamics. Because these foundation models are designed for general-purpose use (Suresh et al., 2024), they often lack the contextual awareness that is necessary to navigate the social complexities and different identities that are fundamental to the social and political discourse at the core of the news industry.
Participation, Beyond Training
Addressing these issues requires more than technical fixes. It demands a critical rethinking of the assumptions embedded in the data itself (Weinberg, 2022): how data is gathered, how models are trained, and how those assumptions shape what the models generate.
One alternative in this direction comes from participatory design: public preferences and values must be integrated into the development process (Huang et al., 2024), taking into consideration the needs of all actors involved at the different levels of the technical pipeline and the news ecosystem, from local and independent newsrooms to readers from marginalised and under-represented communities.
Research around possible participatory frameworks in the context of news is increasing, with some interesting proposals such as the Newsroom Tooling Alliance, a cooperative structure where members of news organisations collectively control an LLM specifically for journalistic tasks (Tseng et al., 2025), or the Collective Constitutional AI, a method for incorporating public input to establish ethical principles for LLMs (Huang et al., 2024).
Looking at the next steps for responsible LLM implementation in newsrooms, participatory design can help with the crucial challenge of enhancing journalists’ skills and public awareness, while reducing the risks of systematic replacement and biased representation, prioritising human values and AI alignment.
References:
Cremaschi, M., Menendez-Blanco, M., & De Angeli, A. (2023). Demo: ISOTTA – A Slow Exploration of Power Relations in Writing with Language Models.
Huang, S., Siddarth, D., Lovitt, L., & Ganguli, D. (2024). Collective Constitutional AI: Aligning a Language Model with Public Input.
Il Foglio. (2025, March 17). Il Foglio lancia un nuovo giornale: Il Foglio AI, un altro Foglio fatto con intelligenza [Il Foglio launches a new newspaper: Il Foglio AI, another Foglio made with intelligence]. Il Foglio. https://www.ilfoglio.it/gli-speciali-del-foglio/2025/03/17/news/un-altro-foglio-fatto-con-intelligenza-7523278/
Jones, B., Sigman, S. J., & Luger, E. (2025). Artificial Intimacy: Exploring Normativity and Personalization Through Fine-tuning LLM Chatbots.
Leiser, F., Eckhardt, S., Knaeble, M., Maedche, A., Schwabe, G., & Sunyaev, A. (2023). From ChatGPT to FactGPT: A Participatory Design Study to Mitigate the Effects of Large Language Model Hallucinations on Users.
Poloni, G. (2025, March 21). AI saves mankind, and other stories definitely not brought to you by robots. POLITICO. https://www.politico.eu/article/democracy-artificial-intelligence-media-newspaper-foglio/
Suresh, H., Tseng, E., Young, M., Gray, M. L., Pierson, E., & Levy, K. (2024). Participation in the Age of Foundation Models.
The Washington Post. (n.d.). Ask The Post AI. https://www.washingtonpost.com/ask-the-post-ai/
Tseng, E., Young, M., Aubin Le Quéré, M., Rinehart, A., & Suresh, H. (2025). “Ownership, Not Just Happy Talk”: Co-Designing a Participatory Large Language Model for Journalism.
Wang, W., Tseng, E., Young, M., & Gray, M. L. (2024). From human-centered to social-centered artificial intelligence: Assessing ChatGPT’s impact through disruptive events.
Weinberg, L. (2022). Rethinking Fairness: An Interdisciplinary Survey of Critiques of Hegemonic ML Fairness Approaches.
Further reading/watching/listening:
Books & Articles:
Assan, R. (2024). Journalism from Print to Platform: The Impossible Shift from Analog to Digital. 1st Edition.
Videos & Podcasts:
Reuters Institute for the Study of Journalism event “AI and the future of news 2025”: https://www.youtube.com/watch?v=zy0d57NTE9U

Header image generated by Midjourney (27/06/2025) using the prompt: “Generate a picture that captures and portraits the evolution in newsrooms and journalism adapting to recent advancements in AI, particularly LLMs adoption.”
