On November 26, 2024, the IBM Innovation Studio and the Institute for Ethics in Artificial Intelligence (IEAI) co-hosted an event titled “Generative AI, Human Decision-Making and Responsible Design” at the IBM Watson Center in Munich.

The event kicked off with a compelling introduction by Rolf Löwisch, IBM’s Director of Data & AI, who underscored the company’s commitment to ethical AI. Löwisch spotlighted IBM’s efforts to embed ethical principles across diverse AI initiatives, citing impactful collaborations in wildlife conservation, healthcare, and energy equity, and stressed the crucial link between trust and innovation.

“Ethics is about doing the right thing, and trust is the key to driving adoption.”
Rolf Löwisch

Dr. Franziska Poszler, Project Lead of MoralPLai, presented this innovative IEAI project, which investigates the influence of large language models (LLMs) on human decision-making. She emphasized the importance of engaging the public in discussions surrounding the responsible development and use of pertinent AI applications and utilizing creative communication methods to do so.

“Given the profound societal impact of LLMs, it is vital to assess their risks, inform the public about their implications and actively involve the users in the discussions around the design of these applications.”
Dr. Franziska Poszler

Dr. Poszler also moderated a panel of speakers, including Prof. Christoph Lütge (Director of the IEAI, TUM School of Social Sciences and Technology), Dr. Markus Walk (Senior Advisory Architect for GenAI, IBM), Prof. Johannes Betz (Professorship of Autonomous Vehicle Systems, TUM School of Engineering and Design), and Prof. Nils Köbis (Chair Human Understanding of Algorithms and Machines, University of Duisburg-Essen). This selection of speakers aimed to shed light on the topic from philosophical, psychological, technical, and industry perspectives. Each speaker provided insights into the event’s key themes. More specifically:

Prof. Lütge provided an interdisciplinary perspective, addressing the broader ethical challenges of AI. He highlighted the necessity of “guardrails” to guide responsible AI development, pointing out that many issues are not purely technical but demand philosophical and societal insights; thus, a collaboration across diverse fields is necessary.

“Major problems in developing AI are not really technical problems.”
Prof. Christoph Lütge

Dr. Walk from IBM explored the transformative dynamics of decision-making in AI systems. His presentation centered on the critical question, “Who drives the decision in an AI system—the human or the machine?” He highlighted IBM’s approach to designing systems that prioritize trust and accountability, particularly in scenarios where AI autonomously takes action.

“With GenAI (esp. Chatbots) we are used to being the trigger of actions. What if we aren’t the only trigger anymore? Will we be able to accept tasks that come from a machine?”
Dr. Markus Walk

Prof. Betz emphasized the importance of human adaptability and morality as key benchmarks for AI development. Drawing on his work with autonomous vehicles, he stressed the necessity for machines to replicate human-like qualities—such as learning and goal-setting—in order to navigate complex moral decisions effectively.

“Machines lack morals and values—qualities we need to embed into future systems.”
Prof. Johannes Betz

Prof. Köbis addressed the behavioral implications of AI, presenting experimental research on how AI systems can influence human ethical behavior. He revealed that AI, when acting as an advisor, can subtly promote unethical actions under certain conditions. Köbis stressed the need for further behavioral AI safety research.

“We must understand how these systems influence human decisions and leverage this knowledge responsibly.”
Prof. Nils Köbis

The presentations set the stage for a thought-provoking panel discussion, exploring the themes in greater depth.

The discussion delved into the design of AI and its societal implications, addressing ethical decisions and the opportunities and challenges across various sectors. The audience was also given the opportunity to participate actively, sharing their thoughts and posing questions to the speakers. Here are some highlights:

Focusing on areas where people commonly seek advice from AI systems, Prof. Köbis explored how people have begun to form personal relationships with AI tools, extending beyond romantic contexts. He also raised concerns about the risks of developing emotional bonds with non-human entities like AI chatbots. Prof. Betz added that AI could play a significant role in education. At the same time, Prof. Lütge noted that children are increasingly turning to AI chatbots with sensitive questions rather than to their parents.

Regarding language-based AI chatbots and their ethical considerations, Prof. Köbis and Dr. Walk explained that these systems’ ease of access and low barriers to interaction greatly influence our ethical decision-making.

To strike a better balance between AI’s role and human judgment in critical sectors such as healthcare, Prof. Lütge stressed the need for interdisciplinary collaboration. He emphasized that human input is essential and called for an approach to AI development that extends beyond purely technical considerations.

Dr. Walk further discussed IBM’s approach, noting that the company has built its own systems to address data-related challenges.

When addressing technical and legal challenges, Prof. Betz and Prof. Lütge pointed out the controversial nature of ethical issues and the limitations of current legal frameworks. Prof. Lütge specifically highlighted how the AI Act hinders the deployment of new technologies in Europe due to unclear regulations, which is particularly problematic for the startup ecosystem.

Looking to the future, Prof. Betz emphasized the importance of industry collaboration in promoting responsible AI design. He noted that academia can play a key role in researching the underlying mechanisms of AI models. Meanwhile, Prof. Köbis noted that he and his team have concentrated on replicating large language models, as companies tend to keep much of the information about them proprietary.

The IEAI extends its thanks to the IBM team for their support and to the speakers and participants who made the event a success. The event focused on human decision-making and the responsible design of generative AI, which aligns with the objectives of the MoralPLai Project. It provided a valuable opportunity for professionals from industry and academia, as well as users of AI systems, to engage in thoughtful discussions on the latest advancements and explore the ethical implications of AI.

For more information on the MoralPLai project, click here.