Student Thesis Projects

Impacts of AI on Human Rights

Student: Immanuel Klein

Bachelor: Management and Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Project: Towards an Accountability Framework for AI systems

Abstract:

This bachelor’s thesis examines the risks that AI poses to human rights, based on the Universal Declaration of Human Rights (United Nations General Assembly, 1948), with a focus on societal aspects and on Europe. A systematic literature review will be conducted.

The impact of risk perception on trust and acceptability of Artificial Intelligence systems

Student: Yusuf Olajide Ahmed

Master: Management

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Project: Towards an Accountability Framework for AI systems

Abstract:

This thesis focuses on the impact of risk perception on trust in and acceptability of Artificial Intelligence. While the technology attracts public attention and broad use in various fields, the research explores its transparency and explainability. The introduction defines the technology and names both its positive and negative aspects. The objectives of this work are to establish the risk perception of AI users in ethical terms, to assess a scale for AI acceptability and trust, and to evaluate trust and acceptability in the context of various features of the technology and their corresponding risks. The research questions are: Which risks in AI systems do users perceive as acceptable? Which categories and levels of risk most strongly impact users’ trust in and acceptance of AI systems?

The research is grounded in a comprehensive literature review that explores AI as a technology of the modern world and its place in human societies, with specific attention to its role in business. The literature review incorporates surveys and findings from practical investigations of the technology and, importantly, presents possible risks of AI implementation to mirror existing concerns about its use in business. The research uses a quantitative approach and a random sampling technique. Data are collected via questionnaires from 250 respondents who use AI in business, combining 5-point Likert-scale items and closed-ended questions. The gathered information is analyzed with descriptive statistics to represent participants’ demographics, knowledge levels, and user experiences. The study aims to construct cause-and-effect models and employs partial least squares structural equation modeling (PLS-SEM).
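As a minimal sketch of the descriptive-statistics step a study like this might run on its Likert-scale items: the item names, sample responses, and the agreement threshold below are purely illustrative assumptions, not data or code from the thesis.

```python
# Illustrative descriptive statistics for 5-point Likert items
# (1 = strongly disagree ... 5 = strongly agree). Item names and
# response values are invented for this sketch.
from statistics import mean, stdev
from collections import Counter

responses = {
    "trust_recommendations": [4, 5, 3, 4, 2, 5, 4, 3],
    "perceived_ethical_risk": [2, 3, 4, 2, 5, 3, 2, 4],
}

def describe(item, scores):
    counts = Counter(scores)
    return {
        "item": item,
        "n": len(scores),
        "mean": round(mean(scores), 2),
        "sd": round(stdev(scores), 2),
        # Share of respondents agreeing (a 4 or 5 on the 5-point scale)
        "agree_pct": round(100 * sum(counts[s] for s in (4, 5)) / len(scores), 1),
    }

summary = [describe(item, scores) for item, scores in responses.items()]
for row in summary:
    print(row)
```

The subsequent PLS-SEM modeling step would build on scale scores like these but requires a dedicated library and is beyond this sketch.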

Risks of AI Systems for Sustainable Development Goals

Student: Jose Muriel

Master: Politics & Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Project: Towards an Accountability Framework for AI systems

Abstract:

Artificial Intelligence is a fascinating topic that keeps evolving every day. As more financial resources are invested in the field, more and more studies analyze the benefits, opportunities, and threats that AI can bring to our society. Within the field of sustainable development, the Sustainable Development Goals (SDGs) adopted by the United Nations have served as guidelines for all stakeholders, including companies, states, and organizations. With the growing influence of AI, especially in the private sector, it is crucial to understand the true impact this emerging technology can have on achieving the SDGs. Most of the literature to date has focused on the benefits of AI and on how it can be a major enabler of the SDGs. For this reason, this thesis looks in the opposite direction and analyzes the risks that AI can pose to the achievement of the individual Sustainable Development Goals. The study will be conducted as a systematic literature review, with the SDGs divided into three thematic areas: 1. Social, comprising SDGs 1, 2, 3, 4, 5, 10, 16, and 17; 2. Economic, comprising SDGs 7, 8, 9, 11, and 12; 3. Environmental, comprising SDGs 6, 13, 14, and 15 (D’Adamo et al., 2021). For a full understanding, the study will analyze the risks of AI not only at the level of the individual SDGs but also across all of their 169 targets.

Cognitive Bias in Social Robot Design: a Theory of Mind Approach

Student: Amy Ndiaye Sow

Master: Management and Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

The human exceptionalism paradigm is being challenged by our own species’ ingenuity through the development of sophisticated robots that resemble and mimic our appearance, behaviours and cognition. This phenomenon directly conflicts with our self-image of uniqueness in comparison with other living beings. It therefore opens an ethical conversation about our own cognitive limitations and how we could be unintentionally transferring them to our technological creations, namely social robots. This thesis seeks to provide insight into how implicit human cognitive biases can be transferred into social robots and into the ethical implications of such mechanisms for our society.

Assessing Public Moral Acceptance of Elderly Care Robot’s Decision-Making through Cultural Lenses

Student: Mehdi Fekari

Master: Politics & Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

The recent development of social robots has intensified ethical discussions aimed at understanding the risks and opportunities these new robots introduce. This research proposal suggests assessing the public moral acceptance of elderly care robots’ decision-making across cultural backgrounds, in the light of existing AI ethics frameworks and principles. More specifically, the proposal rests on the hypothesis that the perception of particular ethical principles, such as autonomy and accountability in the context of elderly care robots, is strongly influenced by cultural background. To test this hypothesis, the proposal is to design a Moral Machine-like platform that collects public opinion in distinct regions on targeted moral dilemma situations.

Impact of COVID-19 pandemic on education in Senegal and proposed AI solutions

Student: Paloma Laye

Master: Politics & Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

With the COVID-19 pandemic, governments worldwide were forced to take significant measures to reduce the spread of the virus and protect the well-being of their populations by installing nationwide lockdowns. Consequently, educational institutions were closed down, and governments faced the challenge of managing the academic continuance of students remotely, supporting teachers and school personnel, and protecting their health and well-being (UNESCO, 2020). In response, research was conducted on how Artificial Intelligence in Education (AIEd) can be used to continue proper and quality education. These new technological developments can provide students with a more customized and flexible educational experience, and teachers can gain insight into their students’ learning patterns and benefit from task automation (Yu et al., 2017). Societal, legal, and moral values are crucial elements in all stages of the development of AI, such as design, construction, deployment, and evaluation (Dignum, 2018). Therefore, studying ethics in the expansion of AI is essential to establish a sense of transparency and trust and to ensure that human and individual rights are respected. It is also necessary to evaluate the context in which this technology is created, the resources available, the values of the different stakeholders, and what ethical considerations must be taken into account. This thesis will focus on Senegal: how the pandemic has impacted its educational system, which AIEd tools have been implemented, and what improvements and recommendations are possible, all with a focus on ethics.

The ethical conceptualization of AI in smart city programmes

Student: Nicole Seimebua

Master: Science and Technology Studies

Supervision: Ellen Hohma, Ana Catarina Fontes, Georgia Samaras

Submission Date: 21.09.2022

Risk and Responsibility Distribution within AI Systems: A Comparison between different Use Cases

Student: Marianne Kivikangas

Master: Management & Technology

Supervision: Ellen Hohma, Prof. Dr. Christoph Lütge

Industry Partner: Edge Case Research

Houston, We Will Have a Mental Health Problem: An Ethical Analysis of AI-powered Mental Health Assistants for Astronauts during Long-duration Space Exploration

Student: Héloïse Miny

Master: Management and Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

Artificial intelligence (AI) is one of the greatest challenges of the twenty-first century. It is progressing rapidly, and AI-powered robots are now built to mimic and respond to human emotions. This new capability will be helpful for space exploration, since the isolation in a spacecraft and the possibly long duration of the missions might weaken the crew’s mental health. A solution proposed by Patel (2020) in the MIT Technology Review is the introduction of an AI-powered robot aboard spacecraft to support the psychological well-being of astronauts while considering the ethical challenges of such a machine. For that purpose, a review of existing data on the impact of isolation on mental health, as well as of current AI technologies and robots, will be presented. Finally, ethical guidelines will be proposed to ensure the adequate development of these tools.

“Hey Alexa, how did you do this?” – Exploring digital literacy, explanation satisfaction, and cognitive trust related to Explainable Artificial Intelligence in the context of Intelligent Personal Voice Assistants through an empirical study of Amazon Alexa users.

Student: Florian Hörl

Master: Management and Technology

Supervision: Auxane Boch & Prof. Dr. Christoph Lütge

Abstract:

Artificial Intelligence (AI) holds promising upsides for various aspects of human life. Intelligent Personal Voice Assistants (IPVAs), including Amazon Alexa (Amazon, 2021a), can simplify the daily life of millions of users by giving them recommendations, making predictions, executing tasks and making decisions for them. Nevertheless, the rapid technological advancements in AI do not come without costs. Often, the inner workings of AI algorithms are invisible or unexplainable to all but the most expert observers. As a result, researchers raise the need for AI systems’ inner workings to be explainable and transparent in order to build public trust in, and comprehensive understanding of, these technologies. The nascent field of research named Explainable Artificial Intelligence (XAI) aims to reach exactly this goal. While many researchers have already contributed to it, only very few studies to date have taken a human-centric approach towards XAI, resulting in a lack of understanding of the user side of the subject. Consequently, authors advocate further research on usability, practical applicability and efficacy with real users. This study followed that call and aimed to shed light on user-centric XAI with a focus on IPVAs by conducting an empirical online questionnaire with Amazon Alexa (Amazon, 2021a) users. The results showed that the relationship between users’ digital literacy and their explanation satisfaction portrays a complex picture: while digital literacy had no effect on explanation satisfaction for high-complexity explanations, it correlated positively with explanation satisfaction for low-complexity explanations. Furthermore, the results showed that a high level of AI explanation satisfaction is a key driver of users’ cognitive trust in IPVAs.

Ethics and Data Protection in the Analysis of Process-Based Data in Healthcare – What are Reasonable Standards for AI-Based Process Mining Technology in Terms of Policy, Ethics, and Societal Aspects?

Student: Thiemo Grimme

Master: Management & Technology

Supervision: Ellen Hohma, Prof. Dr. Christoph Lütge

Industry Partner: Celonis

Abstract:

Existing research on the application of AI in healthcare has investigated ethical and legal issues mainly at a broad, theoretical level. However, little research has explored specific ethical guidelines for the application of AI in practice. The study at hand aimed to develop, in a qualitative manner, an application-based set of ethical guidelines for the implementation of AI-based process mining technology in the healthcare sector. The sample comprised fourteen experts in the fields of ethics, medicine, and technology implementation. A semi-structured interview guide was administered, comprising open questions on the experts’ opinions and suggestions regarding ethical and legal considerations for applying AI in hospitals. Directed content analysis revealed two major findings: first, the existing legal data protection standards that need to be met when implementing AI in German hospitals; second, twelve specific guidelines aiming for an ethically and societally compliant application of process mining in practice. If adopted by hospitals, these guidelines set a common framework that will contribute to an application of AI compliant with widely agreed-upon ethical principles.

Submitted: 01.09.2021

Using Explainable AI to Better Understand Credit Risk Assessment Models Based on Natural Language Processing

Student: Matthias Renner

Master: Information Systems

Supervision: Ellen Hohma, Dr. Oliver Pfaffel (Munich Re), Prof. Dr. Christoph Lütge, Prof. Dr. Georg Groh

Industry Partner: Munich RE

Abstract:

Deep learning models have proved to be powerful and accurate on numerous tasks; however, they can be difficult to comprehend. New regulations, such as the EU AI Act, are pushing for greater transparency of models deployed in production. To ensure that black-box models behave fairly, companies can use explainable AI (XAI) methods to make them more interpretable. XAI methods can also aid model developers in debugging or improving their models. As the majority of research on XAI focuses on the development of such methods, the adoption of XAI at the company level has hardly been explored yet. This thesis attempts to help close this gap by integrating an explainable AI prototype into a productive environment. We used Munich Re’s natural language processing framework to identify typical challenges that might appear during such an integration process. We designed architectural structures that could help with the integration and maintenance of XAI systems. In addition, we conducted semi-structured interviews with domain experts from Munich Re to identify AI practitioners’ requirements toward XAI. Among the most important requirements mentioned was interpretability, which many approaches do not meet. We found data scientists to be intimidated by the number of possible XAI methods and the effort needed to incorporate them into their existing code; as a result of this complexity, they often decide against using them. Based on our findings, we propose that an overview of XAI methods be created in the context of applicable models and tasks, so that newcomers have an easier time finding the methods relevant to them. Furthermore, we suggest creating a new role within the company that is responsible for the integration and maintenance of the XAI applications in use. Consequently, data scientists would be able to reap the benefits of XAI methods without having to put in unnecessary time and effort.

Submitted: 16.05.2022