Dr. Adriano Koshiyama and Dr. Emre Kazim, two researchers from University College London (UCL), presented their research on AI Assurance and Auditing at the latest IEAI Speaker Series, which took place on February 4, 2021.

In their presentation, they explained the increasing importance of algorithmic auditing within AI Ethics research. Whilst the last decade was largely characterized by debates on data protection and privacy, the two scholars argued that the focus is now shifting towards the assessment of algorithmic systems themselves, in order to prevent the unintended consequences that might emerge from their deployment.

Algorithmic auditing is a complex procedure that involves assessing an algorithm’s safety and legality, mitigating the issues found, and providing assurance of the result. The process is divided into a governance audit and a technical audit. A governance audit operates at the organizational level, examining the business practices surrounding a system’s use and the mitigation strategies available to improve it; it is a top-down approach involving many stakeholders. A technical audit, on the other hand, takes a bottom-up approach and examines the individual phases of algorithmic development and deployment. The latter is the key focus of the research conducted by Dr. Koshiyama and Dr. Kazim.

The auditing process itself is divided into four main phases. The first phase is the analysis of the system’s development, in which the data, the models and other design choices are inspected. The second, assessment, phase examines the system’s robustness, privacy, fairness and explainability. In the third phase, mitigation, adjustments are made to address the issues identified during assessment, such as biases and other threats. Lastly, the algorithm goes through an assurance process, carried out on a case-by-case basis, which combines technical and impact assessments with an analysis of remaining uncertainties in order to determine standard practices and needs for regulation.
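To make the assessment phase more concrete, the sketch below shows one very small fairness check of the kind such an audit might include: computing the demographic parity difference of a model’s predictions across a protected attribute and flagging the system if the gap exceeds a tolerance. The synthetic data, the threshold and the choice of metric are illustrative assumptions for this sketch, not part of the speakers’ framework.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred : array of 0/1 model predictions
    group  : array of 0/1 protected-attribute labels
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Illustrative data standing in for predictions of a model under audit.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)           # hypothetical protected attribute
y_pred = rng.binomial(1, 0.35 + 0.10 * group)   # group 1 receives positives more often

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.3f}")

# A simple audit check: flag the system if the gap exceeds a chosen tolerance.
TOLERANCE = 0.05  # assumed threshold; in practice it is context-dependent
if gap > TOLERANCE:
    print("Fairness check failed -> mitigation phase should be triggered")
else:
    print("Fairness check passed")
```

In a real audit this single metric would sit alongside many others (robustness, privacy and explainability checks), and a failed check would feed into the mitigation phase described above.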

As these systems are increasingly being deployed in societal contexts, this topic has become incredibly important. Standards and regulation are needed to ensure that these systems serve human well-being. Nevertheless, such research comes with great challenges: translating accountability, fairness and transparency into practice is complicated and requires the interaction of different actors, including the companies developing the systems, the external researchers engaged in oversight, and the governments trying to apply standards to these systems. In the engaging discussion session that followed their presentation, Dr. Koshiyama and Dr. Kazim explained the importance of a trans- and interdisciplinary approach to this emerging research field.

We also had the pleasure of speaking with Dr. Koshiyama and Dr. Kazim on the IEAI Q&A: Reflections on AI. We would like to thank them both for their interesting presentation.