Author – Eva Paraschou
When we interact with a chatbot, ask a digital assistant for advice or rely on LLMs to summarise a long document, we are doing something profoundly human: we are trusting. Trust is part of what makes cooperation possible between people and, increasingly, between people and machines. In the age of artificial intelligence (AI), and particularly with the rapid rise of large language models (LLMs), trust has become a central issue. It determines not only how we use these systems, but also how society accepts and regulates them.
Recent research shows that more than half of people worldwide express concerns about AI bias, misinformation and data privacy (Ipsos, 2023). These worries are not unfounded. LLMs can produce content that sounds confident but is factually wrong, and they often operate as opaque “black boxes”. In high-stakes areas such as politics or healthcare, the consequences of misplaced trust can be severe. Understanding how trust in AI and LLMs is built, challenged and maintained is therefore not just a research question; it is also a societal one.
The Trouble with the Many Layers of Trust in LLMs
Trust is far from simple. It is not a switch that can be turned on or off. It’s a complex, evolving relationship between two sides: the trustor (the person who decides to trust) and the trustee (the system or entity being trusted). In our interactions with LLMs, these roles translate into the human user and the LLM itself.
AI-specific definitions, such as Lee and See’s (2004) description of trust as “the attitude that an agent will help achieve an individual’s goals in a situation characterised by uncertainty and vulnerability”, capture the essence but miss the nuance, especially when it comes to trust in LLMs. They don’t fully reflect the complexity of the human–LLM relationship, which evolves within fast-changing and adaptive environments. In reality, people’s experiences, beliefs and values matter just as much as the system’s design, transparency and accountability, and all of these factors shift over time and across contexts.
Even after decades of research, trust remains a fragile and complicated notion in the world of AI and LLMs. Many early studies viewed it as something simple and one-directional: either people (the trustor) trust the system (the trustee) or they don’t. But this view misses the reality that trust is dynamic and constantly shaped by both sides of the relationship. On one hand, our own experiences, beliefs and familiarity with technology evolve; on the other, LLMs themselves are complex sociotechnical systems that keep changing. Traditional models of trust (e.g. Mayer et al., 1995; Holton, 1994) were never designed for such fluid interactions. Most focus only on how science earns trust, without considering how humans and technology (i.e. machines) continuously influence one another. And although “human-in-the-loop” design (involving people directly in LLM systems) is meant to increase confidence, we still know little about how it actually shapes users’ trust. These challenges make it clear that building trusted LLMs requires a deeper understanding of the human–LLM trust ecosystem itself (Paraschou et al., 2025).
How Are We Trying to Understand Trust So Far?
Across disciplines, from management to psychology, several foundational frameworks have emerged and have been used to measure trust in LLMs. One of the most influential is the model by Mayer, Davis and Schoorman (1995), which defines trust through three attributes of the trustee: ability, benevolence and integrity. McKnight and colleagues (2002) extended this idea, emphasising institutional trust, our belief that organisations and systems themselves can be trustworthy. Meanwhile, Lee and See (2004) applied these concepts to automation, showing how interface design and feedback loops affect human confidence. Ghosh et al. (2001) studied trust in educational institutions, highlighting how perceived sincerity and expertise shape our judgments. More recently, Jacovi et al. (2021) and Liao and Sundar (2022) have reinterpreted these frameworks for the AI era, stressing vulnerability, communication and explainability.
All of these models provide valuable insights. Yet, most focus on one side of the relationship, either the trustor or the trustee, without capturing how trust dynamically evolves between the two parties.
New Ways of Understanding and Measuring
To address the limitations of earlier one-directional models, recent frameworks have begun to reflect the complexity of trust as it unfolds within dynamic, sociotechnical ecosystems. One such example is the Bowtie Model of Trust in LLMs (Paraschou et al., 2025). Inspired by the structure of a bowtie, with two sides connected through a central knot, this model captures how trust emerges from the interplay between users and LLMs. On one side lies the trustor, representing the human user, whose trust is shaped by contextual factors such as demographics, familiarity with AI, personal beliefs and previous experiences. On the other side stands the trustee, the LLM itself, with systemic elements like transparency, competence, human involvement and the credibility of its developers or institutions. The knot at the centre functions as an interaction space, revealing how these factors influence one another through continuous feedback loops. This model thus allows researchers to examine trust as a living, adaptive process, shaped simultaneously by human and technological forces across changing contexts.
A complementary perspective is offered by the Trustworthiness Assessment Model (TrAM) (Ferrario & Loi, 2022), which provides a structured way to evaluate AI systems through measurable dimensions such as competence, transparency, reliability and ethical compliance. While the bowtie model focuses on understanding the dynamic relationships that produce trust, TrAM concentrates on assessing whether an AI system demonstrates the qualities needed to sustain it. Together, these frameworks extend the study of trust beyond static, one-directional approaches, aligning with the realities of complex sociotechnical ecosystems where humans, institutions and intelligent systems continuously co-evolve.
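To make the idea of “measurable dimensions” concrete, the sketch below shows one way a TrAM-style assessment could be recorded in code. It is a minimal illustration only: the 0–1 scale, the example ratings and the equal weighting are assumptions made for this sketch, not elements of Ferrario and Loi’s model.

```python
# Purely illustrative sketch: recording assumed trustworthiness dimensions
# for an AI system and combining them into a single summary score.
from dataclasses import dataclass


@dataclass
class TrustworthinessAssessment:
    # Hypothetical ratings on a 0-1 scale for each dimension named above.
    competence: float
    transparency: float
    reliability: float
    ethical_compliance: float

    def overall(self) -> float:
        # Equal weighting is an assumption; a real assessment would need to
        # justify how (or whether) dimensions are aggregated at all.
        scores = (self.competence, self.transparency,
                  self.reliability, self.ethical_compliance)
        return sum(scores) / len(scores)


# Example: a system that performs well but explains little.
assessment = TrustworthinessAssessment(
    competence=0.8, transparency=0.4, reliability=0.7, ethical_compliance=0.6)
print(f"Illustrative overall score: {assessment.overall():.2f}")
```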
The Impossibility of Universal Trust and the Road Ahead
As research on trust in LLMs advances, one truth becomes evident: universal trust is impossible (Paraschou et al., 2025). Trust is always contextual, shaped by who the user is, what the system is doing, and how transparent or interpretable it appears. For example, a political analyst, a journalist and a voter may all use the same AI system but form different judgments about its reliability. Even expertise doesn’t guarantee confidence. In fact, deeper understanding often brings awareness of LLMs’ probabilistic nature and their tendency to sound certain while being wrong. The opacity of these models, combined with their lack of human intention, means complete trust may always be out of reach. What we should aim for instead is calibrated trust: confidence that aligns with the system’s actual capabilities, limitations and context of use.
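As a rough illustration of what “calibrated” means here, the snippet below compares a user’s stated trust in a system with the system’s measured accuracy on a given task and flags over- or under-trust. Everything in it (the 0–1 scales, the tolerance and the example figures) is an assumption for illustration, not something taken from the cited work.

```python
# Minimal sketch of calibrated trust: the gap between how much a user trusts
# an LLM on a task and how often the system actually gets that task right.
# Scales, tolerance and example figures are illustrative assumptions.

def trust_calibration(user_trust: float, measured_accuracy: float,
                      tolerance: float = 0.15) -> str:
    """Compare stated trust (0-1) with observed accuracy (0-1) on one task."""
    gap = user_trust - measured_accuracy
    if gap > tolerance:
        return f"over-trust by {gap:.2f}: the user relies on the system more than it warrants"
    if gap < -tolerance:
        return f"under-trust by {-gap:.2f}: the user discounts a system that performs well"
    return "roughly calibrated: confidence matches demonstrated capability"


# Example: a user is 90% confident in answers the model gets right 65% of the time.
print(trust_calibration(user_trust=0.90, measured_accuracy=0.65))
```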
Looking ahead, fostering appropriate trust in LLMs will require collective effort. Policymakers must promote AI literacy, helping citizens understand what these systems can and cannot do. Developers and organisations should prioritise transparency and human oversight, designing models that explain their reasoning and acknowledge uncertainty. And for the public, maintaining a balance between curiosity and healthy scepticism will be key. Trust cannot simply be coded into an algorithm; it must be earned and sustained through openness, accountability and ongoing dialogue between humans and machines.
References
Ferrario, A., & Loi, M. (2022). How explainability contributes to trust in AI. In Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (pp. 1457–1466).
Ghosh, A. K., Whipple, T. W., & Bryan, G. A. (2001). Student trust and its antecedents in higher education. The Journal of Higher Education, 72(3), 322–340.
Hoff, K. A., & Bashir, M. (2015). Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57(3), 407–434.
Ipsos. (2023). Global AI 2023 Report.
Jacovi, A., Marasović, A., Miller, T., & Goldberg, Y. (2021). Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI. In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 624–635).
Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80.
Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An integrative model of organizational trust. Academy of Management Review, 20(3), 709–734.
McKnight, D. H., Choudhury, V., & Kacmar, C. (2002). The impact of initial consumer trust on intentions to transact with a website: A trust-building model. The Journal of Strategic Information Systems, 11(3), 297–323.
Paraschou, E., Michali, M., Yfantidou, S., Karamanidis, S., Kalogeros, S. R., & Vakali, A. (2025). Ties of Trust: A Bowtie Model to Uncover Trustor–Trustee Relationships in LLMs. In Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’25). ACM.
Further Reading/Watching/Listening:
Blog Posts:
Trust Me, I’m an Algorithm – On Trust, Trustworthiness and the Trouble with Both https://alignai.eu/2025/04/29/trust-me-im-an-algorithm-on-trust-trustworthiness-and-the-trouble-with-both/.
Websites:
Trust in Science? Inspiring and Anchoring Trust in Science, Research and Innovation

Image Attribution
Generated by: ChatGPT
Date: 17 October 2025
Prompt: “Create an image with a computer machine and a human having a handshake.”
