The IEAI discusses the State of Responsible AI in 2024
On 18 November 2024, the IEAI hosted a virtual Speaker Series on “The State of Responsible AI in 2024: Industrial, Organizational & Technical Foundations”. We were pleased to have Alejandro Saucedo, Director of Engineering and Product at Zalando, share his perspectives on the current state of responsible AI.
Mr. Saucedo began by addressing the motivation behind his work on responsible AI. First, he noted AI’s impact on society, which has increased significantly in recent years and is projected to keep growing. Second, AI systems pose different challenges than traditional software, creating a new demand for responsible AI.
Mr. Saucedo then presented a timeline of efforts to create more responsible AI, divided into three stages. In the first stage, beginning in 2017, companies and academics started working on high-level principles for responsible AI. The second stage (2018 – 2022), named “Trustworthy AI”, focused on increased research efforts from multiple parties, including the European Union and standards bodies such as ISO and IEEE. “Safe AI” (2022 – now) is the third and current phase, which underscores the need for practical tooling, processes and best practices to support the implementation of safe AI. Examples include the AI explainability framework released in 2018 and the AI Security Framework from 2022.
There is no such thing as AI that is ethical.
Next, Mr. Saucedo delved into AI market trends. He referred to 2017 and 2018 as the era of AI principles, when organizations grappled with setting ethical goals for AI. He pointed out the vital role of users in generating industry standards: organizations such as the OECD, IEEE and ISO are now open to involving users in the design, development and use of standards. On the regulatory side, the EU and the ACM have followed the same trend by allowing users to comment on the AI Act and the AI Regulatory Proposal.
In the last section, Mr. Saucedo elaborated on why AI ecosystems require collaboration from diverse stakeholders rather than relying on data scientists as sole practitioners. AI’s impact on society needs to be understood within an ecosystem comprising AI models and AI systems. He then explained AI’s impact in terms of business value, which depends on the use case.
For instance, the potential impact of LLMs and GenAI lies not within the models themselves, but in the interconnected AI systems. Because of this, mechanisms for accountability must include not only ML expertise but also domain expertise and policy expertise. The combination of these three kinds of expertise, Mr. Saucedo postulated, is what standards and best practices should be built on.
Moreover, the AI ecosystem requires suitable organizational structures. At the top, a department or organization should be responsible for high-level principles. In the middle, a team or process should identify domain experts and the required skillsets. The bottom level consists of individual practitioners, along with technology best practices and relevant tools. As for programmatic governance, he highlighted:
Open source is now the backbone for critical infrastructure that runs our society.
However, Mr. Saucedo noted that open source needs to align with higher-level principles, which in turn are useless without such a strong foundation.
Mr. Saucedo focused on two key points in his concluding statements. First, in reality, it always boils down to how AI impacts humans. Second, not everything requires an AI-based solution; indeed, only a very small number of problems in the world call for AI and ML solutions. Referring to an often-used metaphor, he concluded:
When you run around with a hammer, everything may look like a nail.
We thank Alejandro Saucedo for his insightful presentation on the State of Responsible AI in 2024. The recording of the event can be found here.