Human cooperation with machines is feasible, but there may be trade-offs between transparency and efficiency.

Iyad Rahwan, the director of the Max Planck Institute for Human Development in Berlin, was the guest speaker of the IEAI Speaker Series session on Tuesday, 1 December 2020. In his presentation, Prof. Dr. Rahwan discussed the relevance of the research field focused on human cooperation with and through machines.

He explored the difficulty of solving the dilemmas that emerge when humans cooperate to decide how machines should operate. Focusing heavily on the example of autonomous vehicles, he explained that people largely agree on ethical principles such as utilitarianism. However, they are not always willing to comply with those rules themselves, which leads to the ethical opt-out problem. Solving such social dilemmas through regulation could, in turn, lead to a meta-ethical dilemma, he explained. For instance, if regulation establishes ethical criteria that place greater risk on the driver, people might end up not buying self-driving cars, and overall vehicle safety would not improve.

Professor Rahwan was also part of the research team behind the Moral Machine Experiment. During his presentation, he gave insights into the design of the experiment, which generates systematically randomized scenarios of dilemmas regarding autonomous driving. The Moral Machine aims to build consensus on the design of autonomous driving systems by engaging the public and educating it on the topic. The experiment reveals interesting details about public preferences, which can be shaped by factors such as cultural background. Despite the low likelihood of the extreme scenarios analyzed in the experiment, the same logic applies to decisions a vehicle faces on a daily basis. It is therefore necessary to reach a consensus and consider the potential trade-offs that affect different road users in different ways.
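To make the idea of "systematically randomized scenarios" more concrete, the sketch below shows how such two-option dilemmas could in principle be assembled at random. It is a minimal illustration with made-up character categories and function names, not the actual Moral Machine implementation or data.

```python
import random

# Hypothetical pools of road users, loosely inspired by the kinds of
# attributes varied in Moral Machine scenarios (illustrative only).
CHARACTERS = ["child", "adult", "elderly person", "dog", "doctor", "jaywalker"]

def generate_dilemma(rng=random):
    """Randomly assemble one two-option dilemma for an autonomous vehicle."""
    stay_group = rng.sample(CHARACTERS, rng.randint(1, 3))    # harmed if the car stays its course
    swerve_group = rng.sample(CHARACTERS, rng.randint(1, 3))  # harmed if the car swerves
    return {"stay": stay_group, "swerve": swerve_group}

if __name__ == "__main__":
    scenario = generate_dilemma()
    print("Stay the course and harm:", scenario["stay"])
    print("Swerve and harm:         ", scenario["swerve"])
```

By presenting many such randomized trade-offs to a large number of respondents, preferences can be compared across groups, for example across cultural backgrounds.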

In the second half of the event, the topic of human collaboration with machines was explored, particularly in the context of robotics that are non-cooperative by design. Humans tend to cooperate less with bots, and a trade-off therefore emerges between the efficiency of providing services and maximizing transparency.

Because humans tend to cooperate less with machines, Professor Rahwan explored the positive impact that deception can have in some cases. This leads to a bigger ethical question: Should companies avoid disclosing when consumers and users are interacting with bots in order to foster human-machine cooperation? Deciding how to design human-machine cooperation under such conditions is increasingly relevant because, in the future, machines might have their own agency and might therefore choose whether or not to cooperate with humans.

Professor Rahwan concluded his talk by presenting his work in the fascinating new field of machine behavior. The aim is to develop behavioral metrics to compare algorithms and to explore how algorithms operate and change as they interact with humans.

We want to thank Professor Rahwan for his time and the great discussion we were able to have with him.