
Human security challenges are increasingly intertwined, complex and dire. Actors in the security space are often either plagued by a lack of timely information or confronted with too much information about too many crises to react in an effective or logical way.

At the same time, humanitarian, military and government organizations are working with scarce time, manpower and funding. They are confronted with multiple simultaneous violent conflicts or crises in which civilians are often hard to distinguish or access.

AI-enabled tools, with their capacity to handle and interpret vast amounts of diverse information quickly, are increasingly used in modern conflicts and are of interest to parties focused on improving human security and civilian protection in conflict settings. However, the ways in which AI can help improve human security around the world, particularly the protection of civilians in conflict and crisis scenarios, are still unclear. There is a need for inclusive discussion on the topic and for the development of guiding frameworks for action and use.

Where can AI tools increase the capacity for, or accuracy of, decision making in conflict scenarios? Where might they exacerbate problems? What are the implications for policymakers, military organizations and NGOs in conflict zones? How does the dual-use and high-risk nature of many AI tools complicate ethical and political decision making? How do we support a democratic and inclusive approach to enabling the use of AI to promote human security and civilian protection in conflict scenarios, rather than treating this goal as an afterthought?

Caitlin Corrigan

Caitlin Corrigan is the Executive Director of the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich. In this role, she has overseen the research agenda of the IEAI since the institute opened in 2019. Corrigan holds a PhD in Public and International Affairs from the University of Pittsburgh. Her research interests include good governance, sustainable development and corporate social responsibility, particularly in developing country settings. To this end, she co-founded the Responsible AI Network – Africa. She has over ten years of experience in designing and implementing research and data collection, developing and managing research projects and coordinating funding proposals. Corrigan has published several articles in academic journals and is the editor of the IEAI Research Brief Series. She has worked as a consultant for local and international NGOs through her organization, the Research Group for Sustainable Impacts (RG-SI).

Christoph Lütge

Christoph Lütge is Full Professor of Business Ethics at the Technical University of Munich (TUM) and Director of the TUM Institute for Ethics in Artificial Intelligence (IEAI). He is a Distinguished Visiting Professor at Tokyo University and has held further visiting positions at Harvard (Berkman Klein Center), Taipei, Kyoto and elsewhere. Lütge has a background in both philosophy and information studies, having completed his PhD at the Technical University of Braunschweig in 1999 and his habilitation at the University of Munich (LMU) in 2005. In 2007, he was awarded a Heisenberg Fellowship by the German Research Foundation. His books have been published by Oxford University Press, Palgrave Macmillan, Edward Elgar, Springer and others.

Hichem Khadhraoui

Hichem Khadhraoui is the Executive Director of the Center for Civilians in Conflict (CIVIC). Khadhraoui has spent the last two decades working in the protection of civilians, including in senior management positions, as Head of Protection for the Near and Middle East at the International Committee of the Red Cross (ICRC), and most recently as Director of Programmes and Field Operations at Geneva Call. He has extensive field experience in conflict-affected places including Somalia, Libya, Iraq, Afghanistan, Chad and Yemen. Khadhraoui holds Master’s degrees in International Humanitarian Law and Human Rights from the University of the Western Cape in South Africa and in International Law and European Law from the University Paul Cezanne in France, underscoring his expertise in international law, which is critical to the protection of civilians.

Yasmin Al-Douri

Yasmin Al-Douri is a founder, tech ethics expert and Senior Landecker Democracy Fellow. As Co-Director of the Responsible Technology Hub, Al-Douri was named to the Forbes 30 Under 30 list and has worked on Responsible AI for various tech companies, including Microsoft and Infineon. She has also been named one of Business Insider’s 27 Talents and one of the world’s 50 Young Global Changemakers, and recently received the Zeiss Woman Award for Digital Entrepreneurship.

Al-Douri has also worked as a Conflict Researcher and Head of Region for WANA at the Heidelberg Institute for International Conflict Research, gained experience with the GIZ and the German Foreign Ministry, and holds a degree in Political Science and Psychology from the University of Heidelberg. A recent Politics & Technology graduate of the Technical University of Munich, she is now pursuing a PhD in AI Governance and Social Media Polarization.

Eirini Ntoutsi

Eirini Ntoutsi is a Full Professor for Open Source Intelligence at the University of the Bundeswehr Munich (UniBwM) and the Research Institute CODE. Previously, she was a Full Professor of Artificial Intelligence at the Free University of Berlin and an Associate Professor of Intelligent Systems at Leibniz University Hannover. Ntoutsi also held a postdoctoral position at LMU Munich. She obtained her Ph.D. from the University of Piraeus, Athens and holds an M.Sc. and a Diploma in Computer Engineering and Informatics from the University of Patras, Greece.

Her research focuses on Artificial Intelligence (AI) and Machine Learning (ML), particularly adaptive learning and responsible AI. Ntoutsi leads major international research projects on algorithmic bias, discrimination mitigation and ethical AI. She has received multiple awards, including the prestigious Humboldt Fellowship, and actively contributes to the AI research community, most recently serving as PC co-chair for ECML PKDD 2024, Europe’s leading machine learning and data mining conference.

Vilas S. Dhar

Vilas Dhar is a global expert on artificial intelligence (AI) policy and a champion for equity in a tech-driven world. He serves as President and Trustee of the Patrick J. McGovern Foundation, a $1.5 billion philanthropy advancing AI and data solutions for a sustainable and equitable future.

Appointed by UN Secretary-General António Guterres to the High-Level Advisory Body on AI, Dhar is also the U.S. Government Nominated Expert to the Global Partnership on AI. He serves on the OECD Expert Working Group on AI Futures, the Global Future Council on AI at the World Economic Forum, and Stanford’s Advisory Council on Human-Centered AI. He is Chair of the Center for Trustworthy Technology. His LinkedIn Learning course, Ethics in the Age of Generative AI, is the most-viewed AI ethics course globally, reaching over 300,000 learners. Dhar holds a J.D. from NYU School of Law, an M.P.A. from Harvard Kennedy School, and dual Bachelor’s degrees in Biomedical Engineering and Computer Science from the University of Illinois.

Register here.