Towards an Accountability Framework for AI Systems

Progress in Artificial Intelligence (AI) technology is tremendous. Today, the first autonomous vehicles already drive thousands of kilometers on test routes without major intervention by human drivers. Developments in this area are accompanied by increasingly powerful algorithms and methods from the field of machine learning. However, the growing complexity of these techniques also creates a more opaque environment with respect to the decisions such systems make. This lack of transparency means that certain decisions made by an AI system are neither recognizable nor understandable to the user, the developer, or the legislator.

An AI system's non-transparent behavior is usually referred to as a “black box,” meaning that only the input and output variables are known to the developer. Explainable AI methods aim to resolve precisely this opacity and to make complex AI systems more understandable and interpretable. This often conflicts with the fact that developers and researchers seek quick solutions to technical problems, leaving questions of transparency and accountability on the sidelines.

However, transparency is necessary for a broad market introduction of AI-based systems, as it is the basis for trust and for the effective implementation of legislation. The aim of the research project is to develop a practical and unified accountability framework, supported by transparent decisions regarding AI risks, that takes the interests of various stakeholders into account. To work towards the overall research goal of defining accountability for AI systems, four main questions will be investigated:

(1) Who is accountable?

(2) For what is someone accountable, and to whom?

(3) How can you comply with your accountability duties?

(4) How can you give a satisfactory explanation?

The research project will play a fundamental role in developing new approaches and designing comprehensive tools and frameworks to help navigate these issues and find reasonable and defensible answers.

Research Output:

  • Towards an Accountability Framework for AI: Ethical and Legal Considerations
  • Workshop “Accountability Requirements for AI Applications”

Principal Investigators

  • Prof. Dr. Markus Lienkamp, Institute of Automotive Technology, TUM
  • Prof. Dr. Christoph Lütge, TUM School of Governance, TUM

Researchers

  • Auxane Boch, Institute for Ethics in AI, TUM
  • Ellen Hohma, Institute for Ethics in AI, TUM
  • Rainer Trauth, Institute of Automotive Technology, TUM