Student Thesis Projects

Cognitive Bias in Social Robot Design: A Theory of Mind Approach

Student: Amy Ndiaye Sow

Master: Management & Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

The human exceptionalism paradigm is being challenged by our own species’ ingenuity through the development of sophisticated robots that resemble and mimic our appearance, behaviours and cognition. This phenomenon directly conflicts with our self-image of uniqueness among living beings. It therefore opens an ethical conversation about our own cognitive limitations and how we could be unintentionally transferring them to our technological creations, namely social robots. This thesis seeks to provide insight into how implicit human cognitive biases can be transferred into social robots, and into the ethical implications of such mechanisms for our society.

Assessing Public Moral Acceptance of Elderly Care Robot’s Decision-Making through Cultural Lenses

Student: Mehdi Fekari

Master: Politics & Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

The recent development of Social Robots has intensified the ethical discussions aiming at understanding the risks and opportunities these new robots introduce. This research proposal suggests assessing the public moral acceptance of Elderly Care Robots’ decision-making based on cultural background, in the light of existing AI ethics frameworks and principles. More specifically, the proposal is based on the hypothesis that the perception of particular ethical principles, such as Autonomy and Accountability in the context of Elderly Care Robots, is strongly influenced by cultural background. To test this hypothesis, the proposal is to design a Moral Machine-like platform to collect public opinions on targeted moral dilemma situations in distinct regions.

Impact of COVID-19 pandemic on education in Senegal and proposed AI solutions

Student: Paloma Laye

Master: Politics & Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

With the COVID-19 pandemic, governments worldwide were forced to take significant measures to reduce the spread of the virus and protect the well-being of their populations by imposing nationwide lockdowns. Consequently, educational institutions were closed, and governments faced the challenge of managing students’ academic continuity remotely, supporting teachers and school personnel, and protecting their health and well-being (UNESCO, 2020). In response, research was conducted on how Artificial Intelligence in Education (AIEd) can be used to maintain proper, high-quality education. These new technological developments can provide students with a more customized and flexible educational experience, while teachers can gain insight into their students’ learning patterns and benefit from task automation (Yu et al., 2017). Societal, legal, and moral values are crucial elements in all stages of the development of AI, including design, construction, deployment, and evaluation (Dignum, 2018). Studying ethics in the expansion of AI is therefore essential to establish transparency and trust and to ensure that human and individual rights are respected. It is also necessary to evaluate the context in which this technology is created, the resources available, the values of the different stakeholders, and what ethical considerations must be taken into account. This thesis focuses on Senegal: how the pandemic has impacted its educational system, what AIEd devices have been implemented, possible improvements, and recommendations, all with a focus on ethics.

The ethical conceptualization of AI in smart city programmes

Student: Nicole Seimebua

Master: Science and Technology Studies

Supervision: Ellen Hohma, Ana Catarina Fontes, Georgia Samaras

Submitted: 21.09.2022

Risk and Responsibility Distribution within AI Systems: A Comparison between Different Use Cases

Student: Marianne Kivikangas

Master: Management & Technology

Supervision: Ellen Hohma, Prof. Dr. Christoph Lütge

Industry Partner: Edge Case Research

Houston, We Will Have a Mental Health Problem: An Ethical Analysis of AI-powered Mental Health Assistants for Astronauts during Long-duration Space Exploration

Student: Héloïse Miny

Master: Management & Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

Artificial intelligence (AI) is one of the greatest challenges of the twenty-first century. It is progressing rapidly, and AI-powered robots are now built to mimic and respond to human emotions. This new capability will be helpful for space exploration, since the isolation in a spaceship and the possibly long duration of the trips might weaken the crew’s mental health. A solution proposed by Patel (2020) in the MIT Technology Review would be the introduction of an AI-powered robot aboard the spaceship to support the psychological well-being of astronauts, while considering the ethical challenges of such a machine. For that purpose, a review of existing data on the impact of isolation on mental health, as well as of current AI technologies and robots, will be presented. Finally, ethical guidelines will be proposed to ensure the adequate development of these tools.

“Hey Alexa, how did you do this?” – Exploring digital literacy, explanation satisfaction, and cognitive trust related to Explainable Artificial Intelligence in the context of Intelligent Personal Voice Assistants through an empirical study of Amazon Alexa users.

Student: Florian Hörl

Master: Management & Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

Artificial Intelligence (AI) holds promising upsides for various aspects of human life. Intelligent Personal Voice Assistants (IPVAs), including Amazon Alexa (Amazon, 2021a), can simplify the daily lives of millions of users by giving them recommendations, making predictions, executing tasks and making decisions for them. Nevertheless, the rapid technological advancements in AI do not come without costs. Often, the inner workings of AI algorithms are invisible or unexplainable to all but the most expert observers. As a result, researchers raise the need for AI systems’ inner workings to be explainable and transparent in order to build public trust in, and a comprehensive understanding of, these technologies. The nascent research field of Explainable Artificial Intelligence (XAI) aims to reach exactly this goal. While many researchers have already contributed to it, only very few studies to date have taken a human-centric approach towards XAI, resulting in a lack of understanding of the user side of the subject. Consequently, authors advocate further research into usability, practical applicability and efficacy with real users. This study followed that call and aimed to shed light on user-centric XAI with a focus on IPVAs by conducting an empirical online questionnaire with Amazon Alexa (Amazon, 2021a) users. The results showed that the relationship between users’ digital literacy and their explanation satisfaction presents a complex picture: while digital literacy had no effect on explanation satisfaction for high-complexity explanations, it correlated positively with explanation satisfaction for low-complexity explanations. Furthermore, the results showed that a high level of AI explanation satisfaction is a key driver of users’ cognitive trust in IPVAs.

Ethics and Data Protection in the Analysis of Process-Based Data in Healthcare – What Are Reasonable Standards for AI-Based Process Mining Technology in Terms of Policy, Ethics, and Societal Aspects?

Student: Thiemo Grimme

Master: Management & Technology

Supervision: Ellen Hohma, Prof. Dr. Christoph Lütge

Industry Partner: Celonis

Abstract:

Existing research on the application of AI in healthcare has investigated ethical and legal issues mainly at a broader, theoretical level. However, little research has explored specific ethical guidelines for the application of AI in practice. The study at hand aimed to develop, in a qualitative manner, an application-based set of ethical guidelines for the implementation of AI-based process mining technology in the healthcare sector. The sample comprised fourteen experts in the fields of ethics, medicine, and technology implementation. A semi-structured interview guide was administered, comprising open questions on the experts’ opinions and suggestions regarding ethical and legal considerations for an application of AI in hospitals. Directed content analysis revealed two major findings: first, the existing legal data protection standards that need to be met when implementing AI in German hospitals; second, twelve specific guidelines for an ethically and societally compliant application of process mining in practice. If adopted by hospitals, these guidelines set a common framework that will contribute to an application of AI compliant with widely agreed-upon ethical principles.

Submitted: 01.09.2021

Using Explainable AI to Better Understand Credit Risk Assessment Models Based on Natural Language Processing

Student: Matthias Renner

Master: Information Systems

Supervision: Ellen Hohma, Dr. Oliver Pfaffel (Munich Re), Prof. Dr. Christoph Lütge, Prof. Dr. Georg Groh

Industry Partner: Munich Re

Abstract:

Deep learning models have proved to be powerful and accurate on numerous tasks; however, they can be difficult to comprehend. New regulations, such as the EU AI Act, are pushing for greater transparency of models deployed in production. To ensure that black-box models behave fairly, companies can use explainable AI (XAI) methods to make them more interpretable. Furthermore, XAI methods can also aid model developers in debugging or improving their models. As the majority of research on XAI focuses on the development of such methods, the adoption of XAI at the company level has hardly been explored yet. This thesis attempts to help close this gap by integrating an explainable AI prototype into a productive environment. We used Munich Re’s natural language processing framework to identify typical challenges that might appear during such an integration process. We designed architectural structures that could help with the integration and maintenance of XAI systems. In addition, we conducted semi-structured interviews with domain experts from Munich Re to identify AI practitioners’ requirements towards XAI. Among the most important requirements mentioned was interpretability, which many approaches do not meet. We found data scientists to be intimidated by the number of possible XAI methods and the effort needed to incorporate them into their existing code; as a result of this complexity, they often decide against using them. Based on our findings, we propose that an overview of XAI methods be created in the context of applicable models and tasks, so that newcomers have an easier time finding the methods relevant to them. Furthermore, we suggest creating a new role within the company that is responsible for the integration and maintenance of the XAI applications in use. Data scientists would then be able to reap the benefits of XAI methods without having to invest unnecessary time and effort.

Submitted: 16.05.2022