alignAI: Aligning LLM Technologies with Societal Values
The alignAI Doctoral Network will train 17 doctoral candidates (DCs) to work in the international and highly interdisciplinary field of LLM research and development. The core of the project focuses on aligning LLMs with human values, identifying the relevant values and the methods for implementing alignment. Two principles provide a foundation for this approach. First, explainability is a key enabler for all aspects of trustworthiness, accelerating development, promoting usability, and facilitating human oversight and auditing of LLMs. Second, fairness is a key aspect of trustworthiness, facilitating access to AI applications and ensuring equal impact of AI-driven decision-making. The practical relevance of the project is ensured by three use cases in education, positive mental health, and news consumption. This approach allows us to develop specific guidelines and to test prototypes and tools that promote value alignment. We follow a unique methodological approach: for each use case, DCs from the social sciences and humanities are “twinned” with DCs from technical disciplines (9 DCs in total), while the other 8 DCs carry out horizontal research across the use cases.
alignAI website: alignai.eu