On February 14th, the IEAI had the pleasure of co-hosting an official side event to the Munich Security Conference 2025 with the Amerikahaus in Munich. The panel discussion was entitled “Automating Human Security: Rethinking the Role of AI in Conflict for the Protection of Civilians” and was moderated by Dr. Caitlin Corrigan, Executive Director of the TUM IEAI. The panelists for the discussion were Prof. Christoph Lütge (Director, TUM IEAI), Hichem Khadhraoui (Executive Director, Center for Civilians in Conflict), Yasmin Al-Douri (Co-Founder, Responsible Technology Hub), Prof. Eirini Ntoutsi (Professor, University of the Bundeswehr Munich) and Vilas Dhar (President, Patrick J. McGovern Foundation).

Dr. Corrigan kicked off the discussion by highlighting the human toll of conflict in recent years, citing the over 117 million people worldwide who were displaced due to conflict and crises and the over 470 million children who lived in conflict zones in 2023 alone. While conflict is becoming more complex and its interaction with technology (including AI) is deepening, the dual-use nature of many of these technologies means that opportunities to protect civilians must also be accompanied by concerns about misuse.

Given the high-risk settings in which these tools are being deployed, the discussion started with an observation from Mr. Khadhraoui on his organization’s findings regarding the use of AI-enabled tools in the context of Ukraine, where civilians interact daily with technologies such as AI-enabled drones.

Picking up the conversation with a focus on the technical risks of AI in conflict settings, Prof. Ntoutsi delved into the origins of the underlying risks. She pointed out that the opacity of algorithms is a principal reason why AI decision-making is difficult to understand. Data are inherently biased, particularly when outdated or AI-generated information is used in training processes. Because data is never a perfect reflection of its historical, social and demographic contexts, developers must often decide which optimization goal to prioritize.

Ms. Al-Douri then walked the audience through how data collection and the spread of information on social media can impact conflict settings through polarization and misinformation. Much of the issue concerns how data is processed, labeled and applied for behavioral predictions. With public surveillance increasing the ease of mass data collection, responsible data collection and use remains a key topic.

Meanwhile, Prof. Lütge offered his perspective that we can look to the ethical frameworks adopted in other AI applications, such as autonomous driving, to think about approaches to ethical use of AI in conflict. In this light, he reminded the audience that AI technologies can only be trusted when users are well-informed about their safety.

As the discussion continued, it was noted that even given the frameworks that currently exist, the use of AI in conflict settings carries high risk, often with life-or-death implications. Moreover, it is often the case that those who end up interacting with AI-enabled systems lack a real choice about doing so. This, along with the dual-use nature of many AI systems in conflict settings, adds a level of complexity to the discussion about ethical and responsible use.

Building off the discussion of how to govern and assess risks in conflict areas, Mr. Dhar brought in insights from his extensive work with the global community on the major concerns and challenges at this level. He observed that nation states are no longer always the dominant actors in conflict settings; individuals and corporations must now also be considered in terms of AI use. As he noted, governance works best when one anticipates the future and builds adequate guardrails, but this is rarely possible in the context of rapidly moving violent conflicts. He suggested that this is not necessarily a time to build new frameworks, but rather a time to be in the field and define military use and morals.

The evening concluded with an insightful Q&A session between the audience and the panelists. The IEAI thanks Amerikahaus for co-hosting this event, the speakers and audience for their insights, and the Munich Security Conference for supporting this discussion and making this an impactful event on AI and human security.

Munich Security Conference 2025 Side Event Panel