On February 15th, Prof. Jeannie Marie Paterson conducted a captivating virtual presentation on “ChatGPT, Ethics, Law and the Digital Integrity Arms Race,” a particularly relevant topic as the rapid development of Natural Language Processing tools is raising critical ethical questions.
Prof. Paterson outlined the significant debate on how this technological innovation may affect professions such as translation, journalism and law, as well as how it may affect education and the integrity of assessment. She also presented a few more esoteric debates, such as what ChatGPT means for the theory of mind, the understanding of sentience and the analysis of the divide between humans and machines. She argued that a clear understanding of the opportunities and risks is the first step in outlining appropriate responses that respect social values and individual rights.
ChatGPT was launched in November 2022. It generates text in response to a prompt given by the user through neural language generation: a neural network is trained on a large dataset of text taken from various sources, including books, articles and websites. ChatGPT is a form of generative AI, meaning it produces new or original content without being preprogrammed with explicit, human-written rules. Prof. Paterson underscored in her talk that it is important to remember that the tool has neither understanding nor intention.
Although this innovation may raise productivity and save time, it has created fears about how it will affect professions and services that rely on writing. Taking the legal profession as an example, Prof. Paterson described how the growing availability of tools like ChatGPT could lead to a two-tier system: higher-income individuals would have access to human lawyers, while lower-income individuals would have access only to ChatGPT.
In the field of education, ChatGPT may help students refine their writing skills and assist those with disabilities or those learning in a language other than their own. However, there are concerns that it will facilitate cheating that is difficult to detect. Prof. Paterson argued that while ChatGPT can improve written outputs, it will not replace the importance of human effort, because it occasionally generates incorrect information and can produce harmful instructions or biased content.
Prof. Paterson predicted growing human-machine collaboration, with the output of ChatGPT prompts becoming more sophisticated as the tool takes in more data. As a result, it may become harder for detection technologies to distinguish text generated by ChatGPT, creating a kind of arms race. It is therefore essential to consider other responses within the law and to develop further ethical frameworks that ensure the use of ChatGPT is fair, equitable, transparent and accountable. It is also important to ensure that tools like ChatGPT do not intimidate or replace human interactions.
Prof. Paterson concluded her presentation with suggestions on how to move forward with ChatGPT. It is still difficult to predict the direction it will take, but it is crucial not to catastrophize or inflate the tool's capacities. Developing a broad understanding of how ChatGPT works and of its limitations is important, as it offers an opportunity to reflect further on our relationship with technology. Prof. Paterson argued that ChatGPT will not cause cognitive decline, but that overreliance on technology may erode the bonds of relationships that are crucial in human societies. For this reason, it should be seen as a tool, not a friend.
The event ended with a very interesting and productive discussion with the audience and moderator, Prof. Dr. Christoph Lütge. The IEAI would like to thank Prof. Jeannie Marie Paterson for her insightful talk and for taking the time to discuss this important and highly relevant issue with the IEAI community.