Ongoing and Completed Thesis Projects
What are the Best Practices for Using Virtual Reality Simulations for Education?
Student: Begüm Köksal
Bachelor: Technologie- und Managementorientierte Betriebswirtschaftslehre
Supervision: Auxane Boch, Stephan Bantscheff
Assessing Indigenous Digital Sovereignty Elements in Immersive Reality Projects: Case Study Approach on Indigenous Communities
Student: Yosef Indra
Master: Management and Technology
Supervision: Auxane Boch, Sofie Schönborn
Responsible AI Governance: From Practice to Business Operationalisation
Student: Manuel Jiménez Mérida
Master: Executive MBA in Innovation & Business Creation
Supervision: Franziska Poszler, Prof. Dr. Christoph Lütge
Analysis of the Status Quo of Autonomous Shuttles in an International Comparison
Student: Adeyinka Adebakin
Master: Science and Technology Studies
Supervision: Franziska Poszler, Prof. Dr. Christoph Lütge
Unsupervised Pete: VR Experience to Learn About LLM
Student: Daniel Saad
Master: Politics and Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Urban AI and Ethics: An Analysis of the GOUAI Repository focusing on Governance and Shared Responsibility
Student: Angelin Panjaitan
Bachelor: Management & Technology
Supervision: Dr. Catarina Fontes, Prof. Dr. Christoph Lütge
Submitted: 12 November 2024
Development and Implementation of Artificial Intelligence for Moderating Mass Discussion Platforms
Student: Aruzhan Zadanova
Bachelor: Information Systems
Supervision: Dr. Catarina Fontes, Prof. Dr. Christoph Lütge, Prof. Dr. Georg Groh
Submitted: 12 November 2024
The Use of LLMs and Urban Digital Twins for Public Participation: Exploring Tools and Addressing Implications for Citizens and Society
Student: César Muro Lauroba
Master: Management & Technology
Supervision: Dr. Catarina Fontes, Prof. Dr. Christoph Lütge
Submitted: 15 October 2024
Exploring Emerging Approaches and Ethical Implications of Personal Digital Twins: Insights from the Quantified Self Community
Student: Anna Dariol
Master: RESET (Responsibility in Engineering, Science, and Technology)
Supervision: Dr. Judith Igelsböck, Dr. Catarina Fontes
Submitted: 14 October 2024
The Code Conundrum: Enhancing AI Governance Frameworks for Medical Devices
Student: Dion Dcosta
Master: Politics and Technology
Supervision: Auxane Boch, Edmund Balogan, Prof. Dr. Christoph Lütge
Submitted: 14 May 2024
Immortality Through AI?: Ethical Considerations in Prolonging Life Through Human Digital Twins
Student: Nicole Rogalla
Bachelor: Management & Technology
Supervision: Dr. Catarina Fontes, Prof. Dr. Christoph Lütge
Submitted: 30 June 2024
Predicting and Preventing Civil Violence with Urban Digital Twins: An Agent-based Simulation Exploring Behavioral Nudging Under the Gaze of Mass Surveillance
Student: Sarah Shtaierman
Master: Elektrotechnik & Informationstechnik (MSEI)
Supervision: Dr. Catarina Fontes, Prof. Dr. Christoph Lütge, Prof. Dr. Klaus Diepold
Submitted: 2 April 2024
Understanding Risks: The Impact of Risk Perception on Trust and Acceptability of AI Systems
Student: Abdullah Ejaz Ahmed
Master: Politics & Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Project: Towards an Accountability Framework for AI Systems
Submitted: 20 October 2023
Ethical Considerations of AI in Mental Health
Student: David Burger
Master: Management & Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Submitted: 10 October 2023
Defining Smart Cities through Applied Solutions: The Role of Artificial Intelligence in the Future of Urban Mobility
Student: Chenyang Zhai
Bachelor: Information Systems
Supervision: Dr. Catarina Fontes, Prof. Dr. Christoph Lütge, Prof. Dr. Georg Groh
Submitted: 15 September 2023
Ethical Considerations of AI Polygraphs
Student: Paloma Laye
Master: Politics & Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Submitted: 8 September 2023
Implementing AI Ethics Principles into Practice – Assessing Responsibility Distribution in AI-based Systems
Student: Julia Schöndienst
Master: Politics & Technology
Supervision: Ellen Hohma, Prof. Dr. Christoph Lütge
Submitted: 6 June 2023
AI-ducation: Can Standardized AI Labels Effectively Enhance Public Understanding of AI?
Project: Towards an Accountability Framework for AI Systems
Student: Nora Walkembach
Master: Robotics, Cognition, Intelligence
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge, Prof. Dr. Georg Groh
Submitted: 15 May 2023
Evaluating Gender and Cultural Differences in Perceptions and Expectations of (Relationships with) Sex Robots
Student: Sarah Tabet
Master: Management and Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Submitted: 15 April 2023
Risks of AI Systems for Sustainable Development Goals
Student: Jose Muriel
Master: Politics & Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Project: Towards an Accountability Framework for AI Systems
Abstract:
Artificial Intelligence is a fascinating field that keeps evolving every day. As more financial resources are invested in this field, a growing number of studies analyze the benefits, opportunities and threats that AI can bring to our society. Within the field of sustainable development, the Sustainable Development Goals (SDGs) adopted by the United Nations serve as guidelines for all of the different stakeholders, including companies, states and organizations. With the growing influence of AI, especially in the private sector, it is crucial to understand the true impact this emerging technology can have on achieving the SDGs. Most of the literature to date has focused on the benefits of AI and on how it can act as a major enabler of the SDGs. This research paper therefore looks in a different direction and analyzes the risks that AI poses to the achievement of the different Sustainable Development Goals. To accomplish this, the study takes the form of a systematic literature review in which the SDGs are divided into three thematic areas: 1. Social, comprising SDGs 1, 2, 3, 4, 5, 10, 16 and 17; 2. Economic, comprising SDGs 7, 8, 9, 11 and 12; 3. Environmental, comprising SDGs 6, 13, 14 and 15 (D'Adamo et al., 2021). To gain a thorough understanding, the study analyzes the risks of AI not only at the level of the individual SDGs but also across all 169 of their targets.
Submitted: 25 February 2023
Risk and Responsibility Distribution within AI Systems: A Comparison between different Use Cases
Student: Marianne Kivikangas
Master: Management & Technology
Supervision: Ellen Hohma, Prof. Dr. Christoph Lütge
Industry Partner: Edge Case Research
Submitted: 27 January 2023
The Impact of Risk Perception on Trust and Acceptability of Artificial Intelligence Systems
Student: Yusuf Olajide Ahmed
Master: Management
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Project: Towards an Accountability Framework for AI Systems
Abstract:
The paper examines how risk perception affects trust in and acceptability of Artificial Intelligence. While this technology gains public attention and broad utilization in various fields, the research aims to explore the transparency and explainability of the tool. The introduction defines the technology and names both positive and negative aspects of it. The objectives of this work include establishing AI users' risk perception in terms of ethics, assessing AI acceptability and a trust scale, and evaluating trust and acceptability in the context of various features of this technology and the corresponding risks. The research questions are: Which risks in AI systems are perceived as acceptable by the user? Which categories and levels of risk impact users' trust in and acceptance of AI systems most?
The research builds on a thorough literature review that explores AI as a technology of the modern world and its place in human societies, with specific attention to the role of this tool in business. The literature review incorporates surveys and findings from practical investigations of the technology; importantly, possible AI implementation risks are presented to mirror existing concerns about the business use of this technology. The research uses a quantitative approach and a random sampling technique. Data is collected via questionnaires from 250 respondents who use AI in business, combining 5-point Likert-scale questions and closed-ended questions. The gathered information is analyzed with descriptive statistics to represent participants' demographics, knowledge levels and user experiences. The study aims to construct cause-and-effect models and employs partial least squares structural equation modeling (PLS-SEM).
Submitted: 25 January 2023
Data Governance in the Smart City: The Case of Kirchheim bei München
Student: Dominik Sawallisch
Bachelor: Management and Technology
Supervision: Dr. Catarina Fontes, Prof. Dr. Christoph Lütge
The ethical conceptualization of AI in smart city programmes
Student: Nicole Seimebua
Master: Science and Technology Studies
Supervision: Ellen Hohma, Ana Catarina Fontes, Georgia Samaras
Assessing Public Moral Acceptance of Elderly Care Robot’s Decision-Making through Cultural Lenses
Student: Mehdi Fekari
Master: Politics & Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Abstract:
The recent development of social robots has intensified ethical discussions aimed at understanding the risks and opportunities these new robots introduce. This research proposal suggests assessing the public moral acceptance of elderly care robots' decision-making depending on cultural background, in light of existing AI ethics frameworks and principles. More specifically, the proposal is based on the hypothesis that the perception of particular ethical principles, such as autonomy and accountability in the context of elderly care robots, is strongly influenced by cultural background. To test this hypothesis, the proposal is to design a Moral Machine-like platform that collects public opinions in distinct regions on targeted moral-dilemma situations.
Cognitive Bias in Social Robot Design: a Theory of Mind Approach
Student: Amy Ndiaye Sow
Master: Management and Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Abstract:
The human exceptionalism paradigm is being challenged by our own species’ ingenuity through the development of sophisticated robots that resemble and mimic our appearance, behaviours and cognition. This phenomenon directly concurs with our image of uniqueness in comparison with other living beings. Therefore, it opens an ethical conversation about our own cognitive limitations and how we could be unintentionally transferring them to our technological creations, namely social robots. This thesis seeks to provide an insight into how human implicit cognitive biases can be transferred into social robots and the ethical implications of such mechanisms on our society.
Impacts of AI on Human Rights
Student: Immanuel Klein
Bachelor: Management and Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Project: Towards an Accountability Framework for AI Systems
Abstract:
This bachelor's thesis examines the risks that AI poses to human rights, based on the Universal Declaration of Human Rights (United Nations General Assembly, 1948), with a focus on societal aspects and Europe. A systematic literature review will be conducted.
Houston, We Will Have a Mental Health Problem: An Ethical Analysis of AI-powered Mental Health Assistants for Astronauts during Long-duration Space Exploration
Student: Héloïse Miny
Master: Management and Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Abstract:
Artificial intelligence (AI) is one of the greatest challenges of the twenty-first century. It is progressing rapidly, and AI-powered robots are now built to mimic and respond to human emotions. This new capability will be helpful for space exploration, since the isolation in a spaceship and the potentially long duration of the trips might weaken the crew's mental health. A solution proposed by Patel (2020) in the MIT Technology Review would be the introduction of an AI-powered robot aboard the spaceship to support the psychological well-being of astronauts while considering the ethical challenges of such a machine. For that purpose, a review of existing data on the impact of isolation on mental health, as well as of current AI technologies and robots, will be presented. Finally, ethical guidelines will be proposed to ensure the adequate development of these tools.
“Hey Alexa, how did you do this?” – Exploring digital literacy, explanation satisfaction, and cognitive trust related to Explainable Artificial Intelligence in the context of Intelligent Personal Voice Assistants through an empirical study of Amazon Alexa users.
Student: Florian Hörl
Master: Management and Technology
Supervision: Auxane Boch, Prof. Dr. Christoph Lütge
Abstract:
Artificial Intelligence (AI) holds promising upsides for various aspects of human life. Intelligent Personal Voice Assistants (IPVAs), including Amazon Alexa (Amazon, 2021a), can simplify the daily life of millions of users by giving them recommendations, making predictions, executing tasks and making decisions for them. Nevertheless, the rapid technological advancements in AI do not come without costs. Often, the inner workings of AI algorithms are invisible or unexplainable to all but the most expert observers. As a result, researchers raise the need for AI systems’ inner workings to be explainable and transparent to build public trust in, and comprehensive understanding of these technologies. The nascent field of research named Explainable Artificial Intelligence (XAI) aims to reach exactly this goal. While many researchers have already contributed to it, only very few studies to date have taken a human-centric approach towards XAI, resulting in a lack of understanding of the user side of the subject. Consequently, authors advocate the need for further research towards usability, practical applicability and efficacy on real users. This study followed the call for further research and aimed to shed light on user-centric XAI with a focus on IPVAs by means of conducting an empirical online questionnaire with Amazon Alexa (Amazon, 2021a) users. The results of the study showed that the relationship between user’s digital literacy and user’s explanation satisfaction portrays a complex picture: while digital literacy did not have an effect on the explanation satisfaction for high complexity explanations, it did positively correlate with the explanation satisfaction for low complexity explanations. Furthermore, the results showed that users’ high level of AI explanation satisfaction is a key driver for high levels of users’ cognitive trust in IPVAs.
Ethics and Data Protection in the Analysis of Process-Based Data in Healthcare – What are Reasonable Standards for AI-Based Process Mining Technology in Terms of Policy, Ethics, and Societal Aspects?
Student: Thiemo Grimme
Master: Management & Technology
Supervision: Ellen Hohma, Prof. Dr. Christoph Lütge
Industry Partner: Celonis
Abstract:
Existing research on the application of AI in healthcare has investigated ethical and legal issues mainly at a broad, theoretical level; little research has explored specific ethical guidelines for applying AI in practice. The study at hand aimed to develop, in a qualitative manner, an application-based set of ethical guidelines for the implementation of AI-based process mining technology in the healthcare sector. The sample comprised fourteen experts in the fields of ethics, medicine, and technology implementation. A semi-structured interview guide was administered, comprising open questions on the experts' opinions and suggestions regarding ethical and legal considerations for an application of AI in hospitals. Directed content analysis revealed two major findings: first, the existing legal data protection standards that need to be met when implementing AI in German hospitals; second, twelve specific guidelines aiming for an ethically and societally compliant application of process mining in practice. If adopted by hospitals, these guidelines set a common framework that will contribute to an application of AI compliant with widely agreed-upon ethical principles.
Submitted: 1 September 2021
Using Explainable AI to Better Understand Credit Risk Assessment Models Based on Natural Language Processing
Student: Matthias Renner
Master: Information Systems
Supervision: Ellen Hohma, Dr. Oliver Pfaffel (Munich Re), Prof. Dr. Christoph Lütge, Prof. Dr. Georg Groh
Industry Partner: Munich RE
Abstract:
Deep learning models have proved to be powerful and accurate on numerous tasks; however, they can be difficult to comprehend. New regulations, such as the EU AI Act, are pushing for greater transparency of models deployed in production. To ensure that black-box models behave fairly, companies can use explainable AI (XAI) methods to make them more interpretable. Explainable AI methods can also aid model developers in debugging or improving their models. As the majority of research on XAI focuses on the development of such methods, the adoption of XAI at the company level has hardly been explored yet. This thesis attempts to help close this gap by integrating an explainable AI prototype into a productive environment. We used Munich Re's natural language processing framework to identify typical challenges that might appear during such an integration process. We designed architectural structures that could help with the integration and maintenance of XAI systems. In addition, we conducted a semi-structured interview with domain experts from Munich Re to identify the requirements of AI practitioners toward XAI. Among the most important requirements mentioned was interpretability, which many approaches do not meet. We found data scientists to be intimidated by the number of possible XAI methods and the effort needed to incorporate them into their existing code; as a result of this complexity, they often decide against using them. Based on our findings, we propose that an overview of XAI methods be created in the context of applicable models and tasks, so that newcomers have an easier time finding the methods relevant to them. Furthermore, we suggest creating a new role within the company that is responsible for the integration and maintenance of the XAI applications in use.
Consequently, data scientists would be able to reap the benefits of XAI methods without having to put in unnecessary time and effort in order to do so.
Submitted: 16 May 2022