On the 8th of November 2022, Prof. Dr. Christian Djeffal delivered a captivating presentation on the topic of “Regulating AI: A Comparative View.” He focused on the regulation of emerging technologies such as Artificial Intelligence, quantum computing, and brain-computer interfaces, and on how they are addressed within the fields of data law, IT security, and constitutional law.
He began by highlighting that, when it comes to regulation, it is essential to address the tension surrounding trustworthy AI and the debate on whether there needs to be a normative aspiration or benchmark for its adoption. In his presentation, Dr. Djeffal listed various examples of international organizations and countries that have taken steps toward creating general principles for the development and regulation of AI. Examples include UNESCO’s Recommendation on the Ethics of AI and the European Union’s proposed Artificial Intelligence Act (2021), which concentrates on regulating AI systems through prohibitions, rules for high-risk systems, and transparency obligations. The United States published a “Blueprint for an AI Bill of Rights” in October 2022, which lists five main principles as well as an agenda for updating current regulations on the topic; similar examples can be seen in different governmental systems, such as China’s “Internet Information Service Algorithmic Recommendation Management Provisions.”
Following this, Dr. Djeffal outlined three perspectives on AI regulation: “Level,” which concerns the territorial range of the law; “Generality,” which explores what AI comprises and where it can be applied; and finally, “Regulatory technique,” which considers how AI can be regulated and which legal mechanisms can be used to do so. These perspectives were then applied to the examples of AI regulation at the international and governmental levels mentioned earlier.
These legal frameworks serve as a communicative tool between the developers of such technologies, the government, and the public. While they foster a sense of trust, they also mark the boundaries within which this trust operates, making clear which practices are legal and which are not. Furthermore, they raise questions of specificity, i.e. the level of concreteness of the rules; of speed, which influences the innovation and adoption of technologies; and of actors and agency, which are critical to creating and increasing levels of trust.
Dr. Djeffal then concluded his presentation with the preliminary lessons learned from this research: Trustworthy AI is not an ideal solution but rather the result of a bargaining process. Regulations can stabilize and specify expectations about technologies and control the speed of both innovation and adoption. Regulation also helps define the roles of various actors and establishes further control and agency over these technologies. We would like to thank Christian Djeffal for his insightful talk and for taking the time to discuss this important issue with the IEAI community.