Idiomorphic AI: Emergent, Tailored, and Normative Behavior in Large Language Models
The recent launch of large language models (LLMs) such as OpenAI’s GPT-3 marks a significant milestone in AI development. LLMs use deep learning to generate human-like text. This capacity for natural language makes them uniquely suited to direct interaction with humans and underscores the urgency of calls to study their ethical aspects. As recent bias and fairness studies have revealed, LLMs show emergent behavior that often deviates from the intent and values of their developers. When trained on unfiltered text corpora, they capture linguistic knowledge and human behavioral traits implicitly present in the data. This project examines three types of behavior in LLMs. The first, emergent behavior (‘what is’), asks whether text generated by base versions of LLMs exhibits reliable and internally consistent biases toward human-like psycho-social qualities. The second, tailored behavior (‘what could be’), involves creating and studying a prototype of idiomorphic AI that can adapt itself to the social, cultural, and psychological preferences of the individual user it interacts with. Finally, normative behavior (‘what should be’) focuses on the ethical issues raised by interactive AI agents, which require a distinct vantage point: their normative aspects cannot be examined in isolation but only in interaction with the human partner.
Research Output:
Research Brief: From Pen to Algorithm: Examining the Role of Content and Content Creators in AI Bias