On the 3rd of July 2024, the IEAI hosted a Speaker Series Event titled ‘Generative AI Systems: Ethical and Societal Issues, and Implications in the AI Act’ at the TUM Think Tank. We were delighted to have Raja Chatila, Professor Emeritus at Sorbonne University, as our speaker for this discussion.

First, Prof. Chatila recalled the mechanisms of AI systems for images and text. Taking Generative Adversarial Networks (GANs) for image generation as his first example, he explained how such a model learns and imitates a data distribution. In principle, a GAN consists of two networks. The first network tries to imitate the dataset by capturing the training data distribution, generating candidate samples from random noise; the second works as a discriminator, deciding whether an output of the first network could belong to the training set. Both networks improve simultaneously, and the system converges when the discriminator can no longer distinguish the training data from the generated output. Prof. Chatila added that the emergence of deepfakes was a result of the GAN model.
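To make the two-network setup concrete, here is a minimal sketch of a GAN training loop in PyTorch, using a toy two-dimensional data distribution instead of images; the network sizes, learning rates and step count are illustrative assumptions rather than details from the talk.

```python
import torch
import torch.nn as nn

# Toy "training set": points drawn from the data distribution the
# generator must learn to imitate (here, a shifted 2-D Gaussian).
def sample_real(n):
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, 2.0])

# First network: the generator maps random noise to candidate samples.
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))

# Second network: the discriminator decides whether a sample comes
# from the training set or from the generator.
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = sample_real(64)
    fake = generator(torch.randn(64, 8))

    # Discriminator update: label real data 1, generated data 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: try to make the discriminator output "real"
    # for generated samples. Both networks improve together; training
    # converges when the discriminator can no longer tell real from
    # generated and its outputs approach chance level.
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```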

The limitations of generative systems are inherent, because of the very workings of these systems and the principles on which they are founded.

Turning to Large Language Models (LLMs), which generate text from prompts, Prof. Chatila noted that the major difficulty in generating text is that language is ambiguous: the meaning of a word often depends on the context supplied by preceding or following words and sentences.

While transformer architectures address this interpretation problem by injecting the appropriate context, the approach requires a large amount of data and a self-supervised learning process, also called pre-training, to build a statistical model of that data.
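As an illustration of how a transformer injects context, the following NumPy sketch implements the scaled dot-product self-attention at the core of the architecture. The tiny dimensions and random weights are assumptions for the example, not a pre-trained model; pre-training would fit such weights on a large corpus through a self-supervised objective such as next-token prediction.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X.

    Each output row is a context-dependent mixture of all token
    representations: this is how the transformer injects the
    surrounding context that disambiguates a word.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise token affinities
    weights = softmax(scores, axis=-1)       # how much each token attends to the others
    return weights @ V

# Illustrative toy setup: 4 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
contextualised = self_attention(X, Wq, Wk, Wv)  # shape (4, 8)
```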

The problem is not false data, but rather that the system correlates data instead of reasoning.
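One way to see the difference between correlation and reasoning is a toy statistical language model, vastly simpler than an LLM but statistical in the same spirit. The bigram sketch below (an illustration, not something from the talk) continues text purely by following word co-occurrence frequencies observed in its training data, with no notion of whether the result is true.

```python
import random
from collections import defaultdict

# A toy statistical language model: it counts which word follows which
# in the training text and samples continuations from those counts.
# It reproduces correlations in the data; at no point does it reason
# about whether the continuation is true.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def continue_text(word, length=5):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample by observed frequency
        out.append(word)
    return " ".join(out)

print(continue_text("the"))  # e.g. "the cat sat on the fish"
```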

Prof. Chatila argued that the value chain of GenAI models and systems is important for understanding current regulations. At the beginning of this chain, a massive amount of data is required; it is then transformed into a statistical model called the Foundation Model, or General Purpose AI Model. Once an interface for queries and responses is added, the system works much like ChatGPT. Reinforcement learning with human feedback is deployed to rank answers in order to reduce biases and other noise in the data. Although such a model appears to be a polymath on every topic, Prof. Chatila pointed out that it is not specific enough in any single domain. This is why supervised learning, also known as fine-tuning, enters the process. Alternatively, the system can access a specific database to search for answers, as sketched below. Throughout the value chain, the roles of provider and deployer are interchangeable, which makes the chain a significant aspect for regulation.
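That last alternative, letting the system consult a specific database, is commonly called retrieval-augmented generation. The sketch below shows only the retrieval step; the embed() function is a hypothetical placeholder standing in for a real embedding model (its vectors carry no semantic meaning), so the whole example is an illustrative assumption rather than a production recipe.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a real embedding model: maps text to a unit vector.
    Hash-seeded randomness makes it deterministic within a run, but the
    vectors have no semantic content; a deployed system would call the
    foundation model here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=64)
    return v / np.linalg.norm(v)

# The domain-specific database the deployed system is allowed to search.
documents = [
    "Article 55 sets obligations for providers of general-purpose AI models.",
    "GANs pair a generator network with a discriminator network.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str) -> str:
    """Return the stored document whose vector is most similar to the query."""
    q = embed(query)
    return max(index, key=lambda pair: pair[1] @ q)[0]

def answer(query: str) -> str:
    context = retrieve(query)
    # A real system would now prompt the model with the retrieved context,
    # e.g. generate(f"Context: {context}\nQuestion: {query}").
    return f"[model prompted with context: {context!r}]"

print(answer("What does Article 55 require?"))
```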

Following the technical discussion, Prof. Chatila highlighted three issues with generative systems. First, they mix true and false information, causing misinformation. Second, they give wrong answers to some mathematical tasks, suggesting that generative systems have no grounded semantics. Third, they lack the world representation needed for context-comprehension tasks. Additionally, he referred to other problems associated with statistical machine learning, such as the black-box nature of the process, potential bias in the output and the environmental costs. Generative AI systems have therefore raised numerous ethical, legal and societal concerns, including lack of transparency, data quality and the responsibility of designers, developers, users and other actors.

Semantics lie in the context of the text that has been learned. These semantics are not grounded in reality, so let’s not speak about semantics as if it were the meaning of the word(s) in the real world. It is the meaning according to the correlative process.

Finally, Prof. Chatila underscored several important aspects related to Generative AI and the EU AI Act. In particular, he remarked that the terminology around “advanced general-purpose AI models” needs clarification, especially the word “advanced”. Furthermore, Article 55, which addresses obligations for providers of general-purpose AI models with systemic risk, is difficult to operationalize, as more work on model evaluation, adversarial testing, cybersecurity and infrastructure is needed. Within model evaluation, metrics, quantitative scores for performance comparison and statistical testing are sub-domains requiring further research.
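As an example of the kind of statistical testing that remains under-developed for model evaluation, a paired bootstrap over per-item correctness scores is one common way to ask whether one model’s advantage over another is more than sampling noise. The correctness data below are invented for the sketch.

```python
import numpy as np

# Paired bootstrap test: is model A's accuracy advantage over model B
# statistically meaningful, or could it be sampling noise? The 0/1
# correctness vectors below are synthetic, for illustration only.
rng = np.random.default_rng(42)
n = 500
correct_a = rng.random(n) < 0.86  # per-item correctness of model A
correct_b = rng.random(n) < 0.83  # per-item correctness of model B

observed_gap = correct_a.mean() - correct_b.mean()

boot_gaps = []
for _ in range(10_000):
    idx = rng.integers(0, n, size=n)  # resample the evaluation items
    boot_gaps.append(correct_a[idx].mean() - correct_b[idx].mean())

# Fraction of resamples in which the gap disappears or reverses.
p = np.mean(np.array(boot_gaps) <= 0.0)
print(f"observed gap: {observed_gap:.3f}, bootstrap p(gap <= 0): {p:.4f}")
```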

Prof. Chatila concluded with some noteworthy remarks. Fundamentally, Generative AI has inherent limitations that cannot be fixed, and both its models and systems should be categorized as high-risk. Methods such as verification, testing and red teaming are necessary; however, false or misleading output can only be reduced, not eliminated, even with quality data sources. Within the value chain, legal responsibility should be shared among all actors.

We thank Prof. Chatila for his profound presentation on Generative AI systems. The event recording can be found here.