Towards an Accountability Framework for AI Systems

2022-11-09

Progress in Artificial Intelligence (AI) technology has been tremendous. Today, the first autonomous vehicles are already driving thousands of kilometers on test routes without major intervention by human drivers. Developments in this area are accompanied by increasingly powerful algorithms and methods from the field of machine learning. However, the growing complexity of the techniques used also creates a more opaque environment with respect to the decisions made by these technological tools. This lack of transparency means that certain decisions made by an AI system are neither recognizable nor understandable to the user, the developer, or the legislator.

AI’s non-transparent behavior is usually described as a “black box,” meaning that only the input and output variables are known to the developer. Explainable AI methods aim to resolve precisely this opacity and make complex AI systems more understandable and interpretable. This often conflicts with the fact that developers and researchers search for quick solutions to technical problems, leaving questions of transparency and accountability on the sidelines.

However, transparency is necessary for a broad market introduction of AI-accelerated systems, as it is the basis of trust and of the effective implementation of legislation. The aim of the research project is to develop a practical and unified accountability framework, supported by transparent decisions with regard to AI risks, that takes the interests of various stakeholders into account. To work towards the overall research goal of defining accountability for AI systems, four main questions will be investigated:

(1) Who is accountable?

(2) For what is someone accountable, and to whom?

(3) How can you comply with your accountability duties?

(4) How can you give a satisfactory explanation?

The research project will play a fundamental role in developing new approaches and in designing comprehensive tools and frameworks that help navigate these questions and find reasonable, defensible answers.

Research Output:

Towards an Accountability Framework for AI: Ethical and Legal Considerations

Workshop “Accountability Requirements for AI Applications”

Workshop “Risk Management and Responsibility Assessment for AI Systems”

White Paper “Towards an Accountability Framework for Artificial Intelligence Systems”

Related Thesis Work:
“Impacts of AI on Human Rights” by Immanuel Klein. Ongoing.

“The Impact of Risk Perception on Trust and Acceptability of Artificial Intelligence Systems” by Yusuf Olajide Ahmed. Ongoing.

“Risks of AI Systems for Sustainable Development Goals” by Jose Muriel. Ongoing.


Principal Investigators

Markus Lienkamp, TUM School of Engineering and Design

Christoph Lütge, TUM School of Social Sciences and Technology

Researchers

Auxane Boch, TUM School of Social Sciences and Technology

Ellen Hohma, TUM School of Social Sciences and Technology

Rainer Trauth, TUM School of Engineering and Design