Not so long ago, we thought of AI as belonging solely to the realm of computer scientists, mathematicians and technical experts. But it’s time to move away from this idea.
Who might need to be involved in creating an AI system that helps us detect wildfire threats in different forests? Or an LLM that aids in the detection of cancerous cells? Or an AI that facilitates smart energy use in a home?
AI development today takes teamwork and collaboration across disciplines. Indeed, AI has vastly different applications across a variety of sectors; AI systems and LLMs are beginning to be employed in almost every part of everyday life, touching a myriad of occupations. Catering to the needs of these various industries requires AI systems to draw on many types of knowledge. Today, we will be talking about interdisciplinary collaboration for value-aligned LLMs and how this type of work forms the foundation for an ethical approach to AI.
Consider AI systems designed for use in a hospital setting, such as one geared toward mental health. Such a system would need to be integrated into clinical practices where medical expertise is required. Additionally, the AI system itself would need to be functional and efficient regarding its technical elements, requiring the involvement of engineers, computer scientists and other technical personnel. The AI system would also need to comply with rules that span different areas of knowledge, including medico-legal regulations, bioethical values and energy consumption considerations. Similarly, an AI system designed for online news consumption would need to integrate journalistic practices and values, while AI systems developed for education would need to consider students’ and teachers’ needs. All these examples represent highly specialised use-case areas that require AI to function in different ways (Corner, 2024).
Interdisciplinary collaboration is a process embedded throughout both the creation and use of value-aligned artificial intelligence across sectors. Collaboration between different disciplines is necessary to create AI that is truly useful and multifaceted. The use cases that are part of the alignAI project—in the areas of mental health, online news consumption and education—also represent areas of opportunity for interdisciplinary collaboration.
But How Do We Implement Interdisciplinary Collaboration?
One practical approach is to consider design methodologies that outline how collaborations between subject matter experts and technical experts can optimise the utility and alignment of AI with human values across various use cases (Bisconti et al., 2023).
Notable among these is “Value Sensitive Design”, a methodology that employs an “integrative tripartite methodology, which involves conceptual, empirical and technical investigations, employed iteratively” (Friedman et al., 2020). Another example is “Ethics by Design”, which embeds ethics into every step of the engineering and design process (Brey & Dainow, 2024). These methodologies focus on combining ethical principles with technical applications in the creation of AI-driven systems and on translating ethical requirements into tangible tasks and practices (Bisconti et al., 2023).
Research has also identified many other types of approaches. A 2021 review by Donia and Shaw identified 18 different “ethics-first” approaches that attempt to harmonise philosophical and ethical values with design principles and processes. This multitude of approaches can provide evidence-based steps for practitioners seeking to collaborate efficiently in interdisciplinary settings.
Are interdisciplinary considerations absolutely required for individuals working with AI? European regulatory approaches suggest so: they require AI systems to abide by a broad range of standards addressing issues across various areas. Legally binding instruments such as the EU AI Act, which sets standards for the creation and use of artificial intelligence according to certain values and ideas, can be interpreted as a call for collaboration between experts in human behaviour, interaction and philosophy and experts in technical disciplines to produce aligned artificial intelligence that reflects human values.
The “risk-based approach” that forms the basis of the EU AI Act, for instance, necessitates the consideration of seven principles that “include human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability”. From a European perspective, it is not sufficient for an AI system to merely fulfil privacy and data governance regulations without also being transparent, nor to address fairness without being environmentally responsible. Standards and policies such as the EU AI Act thus assert the necessity of meeting ethical requirements from multiple perspectives, enshrining the importance of conceptualising ethical AI in a multifaceted manner.
Valid concerns remain, however, about how to conduct interdisciplinary collaboration in AI and to what end. One criticism is that legally binding requirements do not, in themselves, produce interdisciplinary collaboration and aligned AIs, but rather push individuals and organisations to work out how to present their technologies as morally acceptable (Morley et al., 2021). The concern is that individuals and groups will find ways to avoid the trouble of consulting different perspectives in order to expedite the development of AI-driven systems and/or achieve other individual and organisational goals.
Practical challenges of interdisciplinary collaboration also arise, including reconciling different values, languages and standards (Sadek et al., 2025). Conflicts between aims and practices can threaten collaboration objectives and lead to confusion among teammates (Sadek et al., 2025), and prioritising competing ethical values and principles often requires consensus-building processes that are both time-intensive and resource-demanding.
Clearly, interdisciplinary collaboration in artificial intelligence is no small feat and requires cooperation at multiple levels. Developing a holistic strategy for pursuing this goal, one that reconciles the abstract concepts of policy-oriented, interdisciplinary standard-setting with the practical challenge of bringing many voices to the table, will be necessary for achieving the larger aim of creating ethically aligned AI systems.
References:
Bisconti, P., Orsitto, D., Fedorczyk, F., Brau, F., Capasso, M., De Marinis, L., Eken, H., Merenda, F., Forti, M., Pacini, M., & Schettini, C. (2023). Maximizing team synergy in AI-related interdisciplinary groups: An interdisciplinary-by-design iterative methodology. AI & SOCIETY, 38(4), 1443–1452. https://doi.org/10.1007/s00146-022-01518-8.
Brey, P., & Dainow, B. (2024). Ethics by design for artificial intelligence. AI and Ethics, 4(4), 1265–1277. https://doi.org/10.1007/s43681-023-00330-4.
Corner, C. (2024). Rethinking artificial intelligence from the perspective of interdisciplinary knowledge production. AI & SOCIETY, 39(6), 3059–3060. https://doi.org/10.1007/s00146-023-01839-2.
Donia, J., & Shaw, J. A. (2021). Ethics and Values in Design: A Structured Review and Theoretical Critique. Science and Engineering Ethics, 27(5), 57. https://doi.org/10.1007/s11948-021-00329-2.
Friedman, B., Kahn, P. H., & Borning, A. (n.d.). Value Sensitive Design and Information Systems.
Morley, J., Elhalal, A., Garcia, F., Kinsey, L., Mökander, J., & Floridi, L. (2021). Ethics as a Service: A Pragmatic Operationalisation of AI Ethics. Minds and Machines, 31(2), 239–256. https://doi.org/10.1007/s11023-021-09563-w.
Sadek, M., Kallina, E., Bohné, T., Mougenot, C., Calvo, R. A., & Cave, S. (2025). Challenges of responsible AI in practice: Scoping review and recommended actions. AI & SOCIETY, 40(1), 199–215. https://doi.org/10.1007/s00146-024-01880-9.
Further reading/watching/listening:
Books & Articles:
Hirsch-Kreinsen, H. (2024). Artificial intelligence: A “promising technology”. AI & SOCIETY, 39(4), 1641–1652. https://doi.org/10.1007/s00146-023-01629-w.
Schmutz, J. B., Outland, N., Kerstan, S., Georganta, E., & Ulfert, A.-S. (2024). AI-teaming: Redefining collaboration in the digital era. Current Opinion in Psychology, 58, 101837. https://doi.org/10.1016/j.copsyc.2024.101837.
Umbrello, S., & van de Poel, I. (2021). Mapping value sensitive design onto AI for social good principles. AI and Ethics, 1(3), 283–296. https://doi.org/10.1007/s43681-021-00038-3.
Winkler, T., & Spiekermann, S. (2021). Twenty years of value sensitive design: A review of methodological practices in VSD projects. Ethics and Information Technology, 23(1), 17–21. https://doi.org/10.1007/s10676-018-9476-2.
Yelne, S., Chaudhary, M., Dod, K., Sayyad, A., & Sharma, R. (2023). Harnessing the Power of AI: A Comprehensive Review of Its Impact and Challenges in Nursing Science and Healthcare. Cureus. https://doi.org/10.7759/cureus.49252.
