Yi Zeng, Professor at the Institute of Automation, Chinese Academy of Sciences, presented his research at the latest IEAI Speaker’s Series Event, which took place on April 29, 2021.

Professor Zeng discussed the importance of defining the short-term, as well as long-term, challenges emerging in the field of AI Ethics research.

In the short term, he suggested that issues such as privacy protection raise questions about the ability of state-of-the-art technology to protect our civil rights. Ensuring privacy is very complex in practice. For instance, if someone's data was used to train an AI model, that data cannot be removed from the model once it has learned from it, unless the entire model is re-trained. The user's information is entrenched in the model.

Regarding long-term planning, he analyzed the importance of protecting vulnerable parts of the population, such as children. This is particularly important given the emergence of problematic technologies such as emotion recognition. Professor Zeng believes that we have to be careful about the choices we make today, as they will significantly impact future generations as well. In his lab, he works on designing AI systems in ways that protect children's rights, in order to mitigate potential unintended consequences.

Building global consensus on AI systems that support human well-being plays an important role in enabling the discussion around the long-term strategies that need to be implemented. Moreover, Professor Zeng believes that the absence of long-term preparation around the ethics of superintelligence and AGI is concerning, as societies are not ready to interact with such systems, even if they were to become technically feasible in the next decades.

Professor Zeng concluded by explaining the importance of considering the potential opportunities and challenges that may emerge from the use of AI systems in different geopolitical contexts. This comparative perspective is needed in order to understand the extent to which these technologies can have a positive impact in specific contexts.

Overall, he believes that AI should be considered a supportive tool rather than a decision-making tool.

Yi Zeng | April 29th, 2021