MoralPLai: A Creative Method for Communicating and Co-shaping AI Ethics Research

Clinical decision-support systems can help healthcare professionals allocate limited medical resources such as ventilators or donated organs. Similarly, algorithms may be employed in the criminal justice system to determine sentences or verdicts for offenders. The introduction of ChatGPT, which can provide some form of moral guidance upon request, suggests that individuals increasingly have the ability to outsource complex ethical decision-making to AI systems, even in their private lives. Given this trend and the fundamental role of ethical decision-making in shaping morality, scholars have underscored the potential issues of blindly trusting these systems, prompting calls for inquiries into the effects of such AI systems on human ethical decision-making and the resulting societal outcomes.

To better understand and proactively shape how AI systems affect ethical decision-making, it is crucial to involve affected stakeholders in the pertinent scientific inquiry and technological development. Opening up scientific debates beyond academic silos requires innovative methods and spaces for collaboration between civil society and scientists. In this endeavor, the arts – an important reference point for social knowledge and inclusion – can become a key enabler of human-centric, participatory discussions around AI design.

This project will implement a creative approach to conducting, teaching, and communicating AI ethics research through the lens of the arts (i.e., research-based theater). The core idea is to conduct qualitative interviews on the impact of AI systems on human ethical decision-making, focusing specifically on the potential opportunities and risks of employing these systems as aids for ethical decision-making, their broader societal impacts, and recommended system requirements. The resulting scientific findings will be translated into a theater script and an (immersive) performance. This performance seeks to educate civil society about up-to-date research in an engaging manner and to facilitate joint discussions (e.g., on necessary and preferred system requirements or restrictions). The insights from these discussions, in turn, are intended to inform the scientific community, thereby supporting the human-centered development and use of AI systems as moral dialogue partners or advisors.

Key research questions include:

  • How can AI systems impede or support humans’ ethical decision-making? What system requirements are crucial for their responsible development and use?
  • How can research-based theater effectively engage a broader audience in this line of inquiry?

Overall, this project should serve as a proof of concept for innovative teaching, science communication and co-design in AI ethics research, thereby paving the way for similar projects in the future.

Funding:

The project team gratefully acknowledges the financial support from the Notre Dame-IBM Tech Ethics Lab, the Friedrich-Schiedel-Fellowship Program at the TUM Think Tank and the TUM School of Social Sciences and Technology, and the TUM Global Incentive Fund.

Research Output:

Notre Dame – IBM Technology Ethics Lab Awards Nearly $1,000,000 to Build Collaborative Research Projects between Teams of Notre Dame Faculty and International Scholars

Principal Investigators

Christoph Lütge
TUM School of Social Sciences and Technology

Researchers

Franziska Poszler
TUM School of Social Sciences and Technology
Anastasia Aritzi
Senior Communications Consultant

External Partners

Carys Kresny
Department of Film, Television, and Theatre, University of Notre Dame
Johannes Betz
Technical University of Munich, School of Engineering and Design