Algorithmic auditing of AI systems has gained considerable recognition, both as an opportunity to harness the potential of AI models and as a means of detecting and mitigating the problematic patterns and consequences of their deployment in sensitive contexts such as healthcare, hiring or mobility, to name a few. In recent years, research has shown how AI systems can produce predictions that disproportionately affect vulnerable minorities and contribute to reinforcing structural biases that already exist in society. Hence, algorithmic AI auditing procedures are increasingly necessary to understand, detect and mitigate the potential unintended consequences of deploying AI-enabled technology. This research brief addresses algorithmic auditing and the need for an interdisciplinary conversation on the topic.

Enjoy the read.