Implications for Organizations and Global Security

On 24 September 2025, the IEAI hosted a panel discussion at the TUM Think Tank titled “Thinking about Ethical Use of AI in the Military – Implications for Organizations and Global Security”. TUM IEAI Executive Director Dr. Caitlin Corrigan moderated the discussion between speakers Brigadier General David Barnes, PhD (US Army, Retired) (Empowering AI) and Lance Lindauer (Partnership to Advance Responsible Technology).

Corrigan kicked off the panel by citing Nobel Peace Prize laureate Maria Ressa’s call for red lines in AI, made at the 80th anniversary of the United Nations General Assembly. She then asked the panelists how they think autonomous decision-making systems and agentic AI have changed the security landscape.

Barnes attributed the concerns driving proposals for red lines to something greater: the conviction that important security decisions should be left only to human decision makers. He also emphasized that only a select group of countries is able to fully capitalize on AI, widening relative capacity gaps and adding to the complexity of the security environment.

“The question is how much do we let artificial intelligence, with its ability to not only help inform people making those decisions, be able to be empowered to make decisions and replace the human decision maker? That becomes this really scary place for many people.” – David Barnes

Lindauer invited audience members to reflect on the language used in this debate, noting that the phrase ‘AI arms race’ frequently appears in discussions of safety, militarization and national security. He proposed shifting the narrative and widening the focus beyond the military, as the private sector is becoming an important player in security, with the military increasingly turning to it for answers.

Picking up on Lindauer’s mention of an ‘AI arms race’, Corrigan then asked the panelists whether AI can really be compared to nuclear arms or other technologies that have changed how global security operates.

According to Lindauer, the setting is indeed quite different, as AI is ubiquitous and in everyone’s hands. He illustrated this with a story about his technologically unsavvy mother, who has access to agentic AI via her laptop but does not understand it.

“People don’t have access to nuclear weapons and uranium …. The fact that [AI is] extraordinarily ubiquitous, makes it a little bit more scary, interesting and worth watching.” – Lance Lindauer

The floor was then handed over to Barnes, who agreed with Lindauer, noting that the situation differs considerably from that of the more physical world. He invited audience members to see AI as a three-legged stool: the algorithm, the data and the computing power. Because AI is a tool that enables other systems, we should treat it not as a thing but as a method for faster decision making.

Corrigan then asked the panelists for their opinions on the role of research and academia in AI development, and whether researchers are taking a back seat.

In reply, Barnes described a great shift over the last few decades: where government was once at the forefront of technological innovation, as the largest funder with the greatest concentration of brainpower, the private sector (and sometimes academia) now sits at the helm. Still, Barnes sees a disconnect between these entities; they “seem to speak the same language, but are talking past each other”. He proposed improving collaboration among the groups by magnifying expertise and identifying common friction points, timelines and goals.

Lindauer agreed, proposing more interdisciplinary collaboration. He remains optimistic that even in periods of retrenchment, researchers and the private sector will pick up the slack, adding that he sees this as a period in which academia steps forward and helps coalesce priorities and ideas.

In her final question, Corrigan asked the panelists about the dual-use nature of AI and what that means from a regulatory perspective.

The floor was then handed over to Lindauer, who reiterated Barnes’ earlier concern that there is an imbalance, a divide, between the Global South and the Global North. This, along with differences in social norms, in starting points of technological readiness and in the public sector’s capacity to enforce regulation, all adds complexity to the picture.

The audience was then invited to ask questions which touched upon topics such as norms and regulation in AI, accountability, the rapid development of technology, a need for more transparency and concerns that the lay public are being left out of the conversation.

Productive conversation continued after the panel between panelists and audience members.

The IEAI thanks the TUM Think Tank for their support, and the audience members and panelists for making this an insightful afternoon that delved into the importance of AI ethics, its interactions with research and what steps could be taken to optimize its evolution in the future.

The recording of the event can be found here.


More Information
David Barnes

Lance Lindauer
