Aligning AI and Human Rights: Implications for Global Governance, National Regulation, and Corporate Social Responsibility
The integration of artificial intelligence (AI) across various sectors—ranging from critical infrastructure, education, and public administration to healthcare, robotics, warfare, and beyond—has sparked serious concerns about its potential to harm human rights. These concerns span issues such as accessibility, algorithmic bias, discrimination, unequal treatment, and the risk of AI being misused against individuals or groups. AI’s reliance on personal data processing also raises privacy concerns, while challenges related to the accuracy, safety, and accountability of AI systems persist. Intellectual property violations, including the unauthorized use of copyrighted material, and questions around the ownership of AI-generated content further complicate this landscape. Additionally, the complex interaction between humans and machines, and the lack of algorithmic transparency, can exacerbate these risks.
AI also poses specific threats to freedom of expression and access to information, particularly through its role in content moderation and dissemination on social media platforms. The use of AI in spreading misinformation or manipulating public opinion undermines democratic processes, which highlights the urgent need for regulatory frameworks.
Through a combination of public outreach and academic and policy research into the human rights implications of AI, the IEAI has established a research stream on aligning AI and human rights. This work includes:
- The International Summit on AI and Human Rights held in Munich on July 16.
- The development of a convention on AI, data, and human rights in collaboration with international stakeholders.
- The development of a White Paper on the need for an international convention on AI, data, and human rights.
- The development of an IEAI Research Brief on AI for Protecting and Realizing Human Rights.
- Participation in international forums on AI and Human Rights, including an agenda to take the convention to the Human Rights Council.
In this research stream, the IEAI will foster collaboration among academic institutions, including Ludwig Maximilian University of Munich and Rutgers University, and NGOs such as Globethics. These efforts serve to:
- Strengthen partnerships between academia, industry, civil society, and governments.
- Analyze whether AI regulations, such as the EU AI Act, the Digital Services Act, the GDPR, and comparable approaches in, for instance, the United States and Brazil, strike a fair balance between innovation and human rights protection.
- Promote AI literacy and raise public awareness of AI's potential impacts on society.
The IEAI invites interested parties to join these efforts and contribute to its work on AI and human rights.
Research Output:
