From May 12–16, 2025, the alignAI consortium officially launched its activities with an intensive Kick-off and Seasonal School Week in Munich. Hosted by the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich, the event marked the beginning of a three-year Marie Skłodowska-Curie Doctoral Network focused on aligning Large Language Models (LLMs) with fundamental human values.
Over the course of five days, 17 interdisciplinary Doctoral Candidates (DCs), their supervisors and key project partners came together for a week of connection, dialogue and learning, laying the groundwork for this timely research project.
Kick-off Event: Building the Foundation
The week began at the TUM Think Tank with opening remarks from Prof. Christoph Lütge, followed by an introduction to the alignAI vision from Dr. Caitlin Corrigan and Auxane Boch. They presented the project’s unique structure—spanning five universities, two research institutes, and four industry partners—and its aim to bridge technical AI development with ethics, law and human values.
The heart of the evening was the Doctoral Candidate pitch session. Each DC delivered a two-minute introduction to their research focus, showcasing the program’s diversity in both discipline and background: from public health to human-computer interaction and from participatory design to legal governance of AI systems.
Closing the session, Prof. Urs Gasser reflected on the ethical challenges of keeping pace with rapidly evolving technologies. He urged participants to transform abstract values into practice, warning against “moving targets” that shift faster than our institutions can adapt.
The day wrapped up with networking over food and drinks, giving everyone on the team the opportunity to get to know one another and build a solid foundation for the journey ahead.
The alignAI Seasonal School
From May 13–16, the alignAI Seasonal School brought the project’s interdisciplinary vision to life. Held at the Garching campus of the Technical University of Munich, the four-day program provided a structured yet interactive learning environment where theory met practice and collaboration among the DCs began to take shape.
Day 1: Grounding in Responsible Research
The school opened with a focus on responsible research practices and the ethical use of AI in academia. Prof. Sneha Das (DTU) led a hands-on session on how AI tools are, and should be, used responsibly in scholarly work, covering tools like ResearchRabbit and SciSummary. This was followed by a reflection on critical concerns around AI use in research, including intellectual property law, algorithmic bias and foundational critiques by scholars such as Buolamwini and Gebru, leading into a nuanced discussion of representativeness in data.
Nicole Lønfeldt followed with an engaging session on academic writing with AI, offering best practices for maintaining intellectual integrity while using generative tools. By comparing GenAI policies from various universities, such as the University of Copenhagen, the group examined not only institutional stances but also common pitfalls, such as the overuse of inflated language and concerns about the loss of human creativity, error and nuance. A hands-on writing exercise encouraged participants to simplify convoluted sentences, avoid repetition and embrace clarity, showing that the elaborate, inflated tone typical of GenAI tools is not a necessity. Lastly, the group collaboratively collected and shared strategies to overcome writer’s block and develop more effective, responsible writing habits.
In the final session, Dr. Nathanael Sheehan addressed data ethics, exploring the environmental, social and epistemic risks of large-scale data use. He emphasised the importance of data transparency and representativeness, reinforced by frameworks such as the FAIR and CARE principles.
Day 2: Exploring Values and Socio-Technical Systems
The second day began with a panel discussion on LLMs as socio-technical systems, moderated by Project Lead Dr. Caitlin Corrigan. The panellists—Prof. Ingo Zettler (Copenhagen), Dr. Daryna Dementieva (TUM), and Dalia Yousif Ali (TUM)—challenged participants to consider questions like: Whose values should we align LLMs with? and What does trustworthiness really mean in different cultural and linguistic contexts?
Key takeaways: values are plural and context-dependent rather than universal; biases in training data produce real-world harms, especially for underrepresented groups; trustworthy AI must acknowledge uncertainty (e.g., by learning to say “I don’t know”); and interdisciplinary methods and localised benchmarks are crucial for ethical and fair AI development.
The panel was followed by a thought-provoking participatory workshop led by Avi Gal, centred on a case study of an AI-powered child maltreatment hotline screening system. The scenario asked students to take on the roles of key stakeholders, from social service agencies and police to tech developers and survivor advocacy NGOs, and to evaluate the ethical implications of using AI to flag potential child abuse cases. Participants reviewed a wide range of sensitive data sources, including behavioural health records, social media profiles and credit scores, and were tasked with deciding which sources to include and how aggressively the system should flag cases. The exercise highlighted the difficult trade-off between model precision and recall: an aggressive model might flag more at-risk children but generate more false positives, while a more conservative model risks missing genuine cases. Through group discussion, students grappled with the real-world ethical, legal and technical complexities of deploying AI in high-stakes, socially sensitive domains.
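To make that trade-off concrete, here is a minimal Python sketch; the risk scores, labels and threshold values are invented toy data, not material from the workshop. Lowering the flagging threshold raises recall (fewer missed cases) but typically lowers precision (more false alarms).

```python
# Toy illustration of the precision/recall trade-off in a flagging system.
# All scores and labels below are invented for demonstration only.

def precision_recall(scores, labels, threshold):
    """Flag every case whose risk score meets the threshold,
    then compute precision and recall against the true labels."""
    flagged = [s >= threshold for s in scores]
    tp = sum(f and y for f, y in zip(flagged, labels))        # correctly flagged
    fp = sum(f and not y for f, y in zip(flagged, labels))    # false alarms
    fn = sum((not f) and y for f, y in zip(flagged, labels))  # missed cases
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    return precision, recall

# 1 = genuine case, 0 = not; scores from a hypothetical risk model.
labels = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.35, 0.3, 0.2, 0.1]

for threshold in (0.7, 0.3):  # conservative vs. aggressive flagging
    p, r = precision_recall(scores, labels, threshold)
    print(f"threshold={threshold}: precision={p:.2f}, recall={r:.2f}")
```

Running the sketch shows the aggressive threshold catching more true cases at the cost of more false alarms, which is exactly the tension the stakeholder groups had to negotiate.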
The afternoon continued with a range of sessions that deepened the participants’ understanding of participatory and inclusive AI development. Santiago Hurtado led a session on conducting research with vulnerable groups, grounding the discussion in historical cases such as the Tuskegee Study and the Nuremberg Trials to underscore the ethical responsibilities researchers carry. This was followed by Daniel Gatica-Perez’s talk on participatory AI methodologies, which emphasised co-creation with diverse stakeholders, including children and marginalised communities, and challenged participants to design AI not merely as a technological tool, but as a socially embedded solution. The day concluded with use case introductions and workshops, where doctoral candidates and PIs began outlining the ethical, legal and technical parameters of the three core alignAI domains: mental health, education and online news consumption. These collaborative sessions laid the groundwork for the interdisciplinary teams that would shape the group presentations on the final day.
Day 3: Use Case Workshops
The third day moved into hands-on workshops organised around the three main use cases: Mental Health, Education and Online News Consumption.
In breakout sessions with Use Case PIs, the DCs began mapping challenges, identifying stakeholders and designing initial collaboration strategies. Discussions ranged from data accessibility and personalisation to legal compliance and ethical safeguards. After planning their presentations for Day 4, DCs, PIs and alignAI associates gathered for dinner in the traditional setting of a Bavarian beer garden, recapping the week, looking ahead and unwinding after the informative yet intense workshops.
Day 4: Collaboration in Action
Friday was the culmination of the week’s learning and teamwork. Each group presented their vision for addressing one of the use cases, incorporating insights from law, ethics and technical design.
Mental Health: A Retrieval-Augmented Generation (RAG) chatbot to support parents in navigating care systems for children with suspected mental health conditions (the retrieve-then-generate pattern is sketched after this list). The team emphasised municipal coordination, GDPR compliance and emotional empowerment.
Education: A values-driven, game-based learning environment designed to foster digital literacy and ethical awareness among school-aged children. Iteration, co-design and participatory research were key design pillars.
News Consumption: A personalised LLM-based news assistant exploring challenges around misinformation, transparency and editorial accountability. The group drew lessons from Il Foglio AI, Italy’s AI-generated newspaper, and analysed the risks of value misalignment.
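For readers unfamiliar with the RAG pattern mentioned in the Mental Health pitch, here is a minimal, illustrative Python sketch. The document store, the word-overlap scoring and the stubbed generation step are all invented placeholders, not the team’s actual design: the idea is simply that the system first retrieves the most relevant passages from a curated knowledge base, then asks the language model to answer grounded in those passages.

```python
# Minimal sketch of the retrieve-then-generate (RAG) pattern.
# Documents, scoring and the generation stub are illustrative placeholders.

KNOWLEDGE_BASE = [
    "How to request a child mental health assessment from your municipality.",
    "Parental consent and GDPR: what data the care system may store.",
    "Waiting-list guidance for child and adolescent psychiatry referrals.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (a stand-in
    for a real embedding-based retriever) and return the top k."""
    query_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: -len(query_words & set(d.lower().split())))
    return scored[:k]

def answer(query: str) -> str:
    """Assemble a grounded prompt; a real system would send this to an LLM."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(answer("How do I get a mental health assessment for my child?"))
```

In a deployed system the overlap ranking would be replaced by a proper retriever and the assembled prompt sent to an LLM; the grounding step is what keeps the chatbot’s answers tied to vetted sources rather than the model’s unconstrained generations.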
Each team’s work was underpinned by cross-DC collaboration, with legal scholars, psychologists, designers and AI researchers pooling their expertise. Working groups began to take shape, laying the foundation for the network’s future efforts.
Takeaway
The Seasonal School offered more than lectures; it set an introductory example of what alignAI hopes to achieve: embedding ethical reflection, interdisciplinary cooperation and legal insight into the development of trustworthy LLMs. It also made space for relationships to grow through lively debates, hands-on exercises and shared meals.
