The MoralPLai project partners bring diverse philosophical, technical, and artistic perspectives to the research topic, each contributing valuable insights into the project’s key areas. Meet Johannes Betz, Professor of Autonomous Vehicle Systems at the TUM School of Engineering and Design, and read his reflections on the core research topics.
What is your role in the MoralPLai project?
I have had the privilege of mentoring Dr. Poszler and supporting the MoralPLai project during her application for the TUM/Friedrich Schiedel Fellowship for Technology in Society. My role has involved providing technical and conceptual insights, particularly on the potential and limitations of ethical reasoning and decision-making in autonomous systems and robots. Additionally, I have contributed to discussions on integrating AI ethics frameworks and assessing moral reasoning challenges in AI models.
What is the biggest misconception about LLM-based chatbots?
One of the most common misconceptions is that LLM-based chatbots possess an inherent sense of morality or an objective understanding of ethical principles. In reality, these models do not “reason” about morality in a human-like way; they generate responses based on probabilistic patterns in their training data. While they can emulate ethical discourse convincingly, they lack true moral intuition, intent, or the ability to understand the long-term implications of their advice. This can lead to inconsistencies and unintended biases in their responses.
Would you consider depending on an LLM-based chatbot for guidance in a moral dilemma?
While LLM-based chatbots can serve as valuable tools for exploring different ethical perspectives, I would not consider them reliable standalone sources for moral guidance. Ethical decision-making is highly contextual and deeply intertwined with human values, emotions, and societal norms—elements that LLMs do not fully grasp. Instead, these models should be viewed as facilitators of moral reflection, helping individuals examine different viewpoints rather than providing definitive moral answers.
What technical innovations could contribute to improving the capability of LLM-based chatbots to provide moral guidance in the future?
Several innovations could enhance the ability of LLMs to engage in more meaningful moral reasoning. Firstly, we need better Explainability & Justification Mechanisms: models that provide explicit reasoning for their moral guidance, linking their responses to well-established ethical theories or empirical case studies. Secondly, Context Awareness & Memory can enhance current models so that they recall earlier exchanges and maintain coherence and consistency in moral discussions across multiple interactions. Thirdly, I think we need more Human-in-the-Loop Approaches: by developing hybrid systems in which AI-generated ethical considerations are reviewed and validated by human ethicists or domain experts, we can improve the models before releasing them to users (a simple sketch of this idea follows below). Finally, Bias Mitigation Techniques, already standard practice in most machine learning-based models today, should address both data aggregation and model implementation, so that we improve fairness and reduce harmful biases in AI-generated moral reasoning.
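As an illustration of the Human-in-the-Loop approach Prof. Betz describes, here is a minimal Python sketch of a review pipeline in which a drafted answer must pass human validation before it reaches users. All names (GuidanceDraft, draft_guidance, expert_review, release) are hypothetical, and the LLM call is stubbed out; this is a conceptual sketch, not part of any existing system.

```python
# Hypothetical sketch: AI-drafted moral guidance gated by human review.
from dataclasses import dataclass, field
from typing import List

@dataclass
class GuidanceDraft:
    question: str                  # the user's moral dilemma
    answer: str                    # the model's draft response
    cited_theories: List[str] = field(default_factory=list)  # justification step
    approved: bool = False         # set only after human review

def draft_guidance(question: str) -> GuidanceDraft:
    """Stub for an LLM call that drafts guidance and names the
    ethical theories the answer draws on (explainability mechanism)."""
    answer = "One deontological view would hold that ..."  # placeholder output
    return GuidanceDraft(question, answer, cited_theories=["deontology"])

def expert_review(draft: GuidanceDraft, approve: bool) -> GuidanceDraft:
    """A human ethicist or domain expert validates the draft."""
    draft.approved = approve
    return draft

def release(draft: GuidanceDraft) -> str:
    """Only guidance that passed human review ever reaches users."""
    if not draft.approved:
        raise ValueError("Guidance has not passed human review.")
    return draft.answer

# Usage: draft -> review -> release
draft = draft_guidance("Should I report a colleague's minor misconduct?")
reviewed = expert_review(draft, approve=True)
print(release(reviewed))
```

The key design choice in this sketch is that the release step refuses unreviewed drafts outright, mirroring the point that hybrid systems should validate AI-generated ethical considerations before exposing them to users.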
Visit the MoralPLai Project webpage to learn more and stay tuned for more updates.
