On July 19th, the Institute of Ethics in Artificial Intelligence (IEAI) hosted an in-person event at the TUM Think Tank titled “AI and Human Rights: Implications for Corporate Responsibilities.” The event, moderated by IEAI Executive Director Caitlin Corrigan, brought together experts to delve into the increasing role of business enterprises in upholding human rights and the related challenges posed by the growing integration of artificial intelligence (AI) across business sectors.
Caitlin Corrigan opened the event by emphasizing the responsibilities of business entities in upholding human rights. The incorporation of AI technologies into business practices, however, has raised questions about how existing corporate human rights responsibilities should be interpreted in light of these advancements. This has prompted discussions on the implications of AI for human rights, the adverse impacts it may generate, and the measures that can be implemented to prevent human rights violations during AI development and deployment.
The event began with a presentation by Alexander Kreibitz, a post-doctoral researcher at the TUM Institute of Ethics in Artificial Intelligence. He traced the progress of AI ethics, emphasized the importance of embedding human rights principles into AI design and development, and offered insights into the steps needed to achieve this goal.
Following Kreibitz’s presentation, Katie Evans, a consultant with the IEEE, delivered a conceptual exploration of AI ethics. After defining both ethics and AI ethics, she presented two distinct perspectives on the field, structural and decisional ethics, laying a foundation for the discussions that followed.
Jenny Le, a Senior Manager at EY, then discussed responsible innovation through risk management. She highlighted the significance of responsible innovation for the success of data-driven technologies and stressed the need for robust risk management practices to ensure ethical AI deployment.
Mario Tokarz, a leader in AI and digital transformation, delivered the final presentation, offering an engineering perspective on the implications of AI for human rights and corporate responsibilities. He examined common fears surrounding AI’s evolution and proposed practical approaches to addressing these concerns.
After the presentations, Caitlin Corrigan moderated a stimulating discussion in which the speakers explored AI ethics, human rights, and their importance in society. Panelists and participants actively exchanged ideas, contributing to a multifaceted understanding of the relationship between AI and human rights in corporate contexts.
The event concluded with a call to continue these vital conversations as ethical considerations become increasingly crucial amid the rapid advancement of AI technologies. The IEAI thanks the speakers for their participation and reaffirms its commitment to fostering dialogues of this kind on AI ethics.