Franziska Poszler

Photo Credits: Hanna Gerischer

1. How did the idea for the MoralPLai project first emerge?

A few years ago, our research team was approached by several TV stations interested in producing a documentary about our work within the ANDRE project. It was the first occasion on which I engaged in public outreach beyond teaching, academic conferences and scholarly circles. In academia, we often live by the mantra “publish or perish”, which means our focus tends to stay on sharing research with peers and producing articles.

However, while working on the documentary, I realized two things: first, how challenging it can be to translate our work into accessible, non-academic language; and second, that the research we conduct at the Institute for Ethics in Artificial Intelligence – and within the field of Responsible AI in general – cannot remain confined to academic silos. Our insights need to reach the wider public, perhaps even primarily them.

This experience motivated me to explore out-of-the-box and engaging approaches to science communication. As I looked into alternative methods, I came across academic work on research-based theater. The fact that there was already a foundation of scholarship to build on – combined with my personal interest in theater and the medium’s unique capacity for direct, personal interaction with audiences – made it compelling to test this method out. After submitting several grant proposals and bringing together excellent project partners as well as the best team one could wish for, the project was finally able to launch and become a reality.

2. What is the goal of the MoralPLai project?

The MoralPLai project is built on two main pillars: AI Ethics research and science communication, bridged through the use of the arts. In the research pillar, we examine the role of AI chatbots as moral dialog partners and assess their potential benefits, risks and implications for responsible design and use. On the science communication side, our goal is to bring these insights to a broader public – not through traditional academic articles, but via an interactive theater performance. Through this artistic approach, we aim to promote AI literacy and invite the audience to participate actively in the research process. Our intention was, and still is, not to make knowledge transfer a one-way process from academia to society, but a reciprocal exchange that also brings insights from society back into academia.

3. Were there any particularly surprising findings that emerged through your research or the performance process?

Regarding the research component, one challenge – though not an unexpected one – was the speed at which the technology evolved. As new AI chatbot models emerged, we continually revised our findings to match their latest characteristics. In our early expert interviews, for example, we learned that certain models tended to give rather one-sided advice, offering no alternative viewpoints. Later, however, updated versions of these models began offering more diverse perspectives in response to the same prompts. Observing how these responses shifted over the course of the project was particularly interesting.

Regarding the playwriting process, our initial concern was finding a balance between accurately representing the research data, conveying the findings clearly and still maintaining artistic quality. What surprised me was that developing the theater script and shaping the narrative followed a structure somewhat similar to writing an academic article. The script we ultimately created – which, once published, you’ll notice resembles a hybrid between a theater script and an academic paper – reflects our attempt to strike the right balance between fiction and reality, as well as between entertainment and information.

4. If you had to keep just one sentence from The Third Voice, which one would it be, and why does it resonate with you?

The Third Voice, the artistic outcome of the MoralPLai project, was presented on May 22, 2025, at the Amerikahaus in Munich. At the heart of the story was Aithona, a chatbot that stepped into the conversational gaps where a “third voice” was absent or urgently needed. The play followed two parallel storylines: a doctor on trial after relying on an AI chatbot to counsel a terminally ill patient, and the doctor’s estranged teenage daughter, who seeks emotional support from the same chatbot. Set across a hospital, a courtroom and a family home, the performance invited the audience to consider how such tools can both support and undermine human ethical decisions, depending on how critically and responsibly they are designed and used.

Selecting one is impossible, so here are a few of my favorite sentences:

AITHONA: “You prompt. I respond within the boundaries of my parameters. And your framing.”

Aithona’s line highlights that chatbot outputs depend primarily on the data they are trained and fine-tuned on, as well as the prompts they receive. Because every part of this process is shaped by humans – developers who build the system, the human-generated material that forms its training data and ultimately the end user who formulates the prompt – the chatbot’s behavior is, at its core, human-determined. This means we must think critically about how we design these systems and acknowledge our responsibility for the outcomes they produce.

E13: “I worry about people relying on something without fully understanding it: This is the Promethean risk, right?”

In the theater script, we incorporated verbatim excerpts from our interviews with subject-matter experts, such as this quote from expert E13. Today, many individuals turn to AI chatbots to discuss highly sensitive, personal decisions and even to seek mental-health support. These systems were not originally intended to serve this role, as they operate through statistical patterning rather than genuine comprehension; yet this crucial fact seems to fade into the background. In other words, the pace at which the public embraces these systems is outstripping our understanding of their proper use and the risks they entail.

DR: “It doesn’t replace me. I don’t just obey. It’s INFORMATION.”

ENG: “Stop outsourcing your brain. It’s not a GPS and it never claimed to be.”

These two statements contrast two very different ways of engaging with AI chatbots. The doctor employs the system as a tool – an additional source of information and decision support – while retaining both decision ownership and deliberative judgment. The engineer, by contrast, warns that some people will follow the chatbot’s output blindly, without engaging in their own reasoning. By placing these two perspectives side by side, we highlight that there are both responsible and irresponsible modes of interacting with AI – and that critical engagement is essential.

5. Finally, what is your vision and hope for the MoralPLai project after the performance? Where would you like to see it go next?

Looking ahead to 2026, we will have several post-production tasks to complete. These include creating an after-movie to give the public a sense of the performance and showcase selected scenes; publishing three articles – the expert interview study that informed the play, the theater script itself and the audience impact study; and preparing a comprehensive report summarizing the entire project and our key findings. We hope to share all of this with you soon!

Beyond that, I very much hope that the premiere of The Third Voice on May 22, 2025 will not be the last time we perform this piece. We are eager to build collaborations with academic institutions, artistic organizations, companies, and other interested partners to bring the performance to new contexts – be it another staged production, reading workshops, a film adaptation, interactive teaching modules, or other formats.

Interested in learning more? Visit the MoralPLai Project webpage and stay tuned for updates.

The MoralPLai team welcomes opportunities for collaboration. If you are interested in hosting The Third Voice, whether through a script-reading workshop, a collaborative research initiative or through other formats, please contact the Project Lead, Dr. Franziska Poszler (franziska.poszler@tum.de).