Unveiling the Ethical Dimensions of AI Research

On 14 February 2024, the IEAI and the TUM Think Tank hosted a hybrid panel discussion titled ‘Unveiling the Ethical Dimensions of AI Research: Exploring the Intricate Relationship between AI and Human Behaviour’. Auxane Boch, research associate at the TUM IEAI, moderated the speakers: Dr. Ida Skubis (Silesian University of Technology), Tom Lindemann (Luxembourg Agency for Research Integrity) and Professor Yueh-Hsuan Weng (Kyushu University). Yonah Welker (Board Member – Yonah.org, EU Commission Projects) provided the keynote.

Tuning in via Zoom, Yonah Welker opened the evening with a presentation on the relationship between AI and human behavior, with an emphasis on persons with disabilities and their interactions with AI. He underlined that a significant portion of the world’s population – 1.6 billion people – has a disability. AI, Welker proposed, could help eliminate social barriers and empower accessibility for this population. Each disability, however, is unique, and this poses challenges for algorithms. Welker proposed implementing a disability-centered AI ecosystem and taking care to acquire data that represents all who are affected (i.e. accounting for gender, invisible disabilities, ethnic groups and all age groups) and data that carries sufficient evidence (long-term rather than short-term observation). Welker ended his presentation by addressing different AI models and systems, pointing out their pitfalls and suggesting areas for improvement.

Algorithms do not bring errors themselves but mirror the society that created them. – Yonah Welker

The floor was then handed to Dr. Ida Skubis, who presented a new phenomenon in corporate management – humanoid robots as CEOs. Dr. Skubis proposed that humanoid robots – that is, robots with humanlike features – are ideal for leadership roles, since they possess exceptional data-processing capabilities and can work around the clock, as they do not fatigue like their human counterparts. Robots do, however, have their limitations: among other things, they lack the ability to understand purpose, imagination and responsibility. Dr. Skubis then touched upon examples of humanoid robots in leadership roles, such as Tang Yu, CEO at NetDragon Websoft, and MIKA, CEO at Dictador. She underscored that although these two humanoid robots both carry the title of CEO, they have, and must have, human oversight. She referred to four forms of human oversight: requiring human review, ensuring human intervention, real-time monitoring and operational constraints.

The incorporation of humanoid robots in leadership roles changes the way we perceive, strategize and execute business objectives. – Dr. Ida Skubis

Following Dr. Skubis’ insightful talk, the next speaker, Tom Lindemann, shifted the focus to research integrity. In his presentation, Lindemann introduced the Luxembourg Agency for Research Integrity (LARI). He delineated what research ethics means and briefly touched upon the distinction between integrity and ethics as applied in research. Citing The European Code of Conduct for Research Integrity, Lindemann then introduced principles of research integrity such as reliability in ensuring the quality of research, and accountability for research from its inception all the way to its publication. He underscored just how far-reaching the impact of LLMs is on all phases of research, and concluded that transparency about the usage of AI tools is key. Moreover, Lindemann proposed that accountability should always remain with researchers, that assessment should incentivize the responsible conduct of research, and that operational guidance and education programs should be available at all career stages.

Accountability should always, and can only, remain with researchers. – Tom Lindemann

Prof. Yueh-Hsuan Weng then gave the final presentation, on the topic of AI ethics standardization – specifically in the field of robotics. Coming from a legal background, Prof. Weng has learned much from working with engineers, and he recalled two lessons in particular. The first is the need to acknowledge cultural difference when working with AI; cultural difference, as Prof. Weng uses the term, refers to a person’s educational background rather than their country of origin. The second lesson pertains to a lay group’s acceptance of risk, which, he says, depends on the AI’s embodiment (as a robot in this instance). Prof. Weng then shifted the focus to law and AI, illustrating a pacing problem in AI ethics: how can law regulate AI if it cannot keep pace with AI’s rapid developments? As an alternative, he proposed using AI ethics standards. Prof. Weng admitted that AI ethics standards have problems of their own, as they follow two different variations of methods. Using a Venn diagram, however, he demonstrated that their overlapping elements can be combined into a new, more unified type of AI ethics standard. This more unified standard serves as the infrastructure for Prof. Weng, in his role as Chair of the IEEE Standards Association P7017™, to develop an ethical design matrix for social robots. Moreover, Prof. Weng is involved in the development of an ethical design database covering social robots, physical assistive robots and the religious use of robots.

When I refer to cultural difference, it’s another aspect – it’s an organizational cultural gap. – Prof. Yueh-Hsuan Weng

The evening concluded with a panel discussion on the ethical dimensions of AI research between the audience and the panelists: Dr. Ida Skubis, Tom Lindemann and Prof. Yueh-Hsuan Weng.

The IEAI thanks the TUM Think Tank for its support, and the participants and speakers for making this an insightful evening that delved into the importance of AI ethics, its role in research and the steps that could be taken to optimize its future evolution.

The recording of the event can be found here.