Sunday, August 6, 2023

Exploring the psychology of LLMs' Moral and Legal Reasoning (Pre-Print)

Large language models have been used as the foundation of highly sophisticated artificial intelligences, capable of delivering human-like responses to probes about legal and moral issues. However, these models are unreliable guides to their own inner workings, and even the engineering teams behind their creation cannot fully explain how they came to develop the capabilities they currently exhibit. The emerging field of machine psychology seeks to gain insight into the processes and concepts these models rely on.

In this paper, we employ the methods of psychology to probe GPT-4's moral and legal reasoning. More specifically, we investigate the similarities and differences between GPT-4 and humans with respect to intentionality ascriptions, judgments about causation, the morality of deception, moral foundations, the impact of moral luck on legal judgments, the concept of consent, and rule violation judgments. We find high correlations between human and AI responses, but also several significant systematic differences between them. We conclude with a discussion of the philosophical implications of our findings.

[ PDF ]

How to cite this pre-print:
Almeida, Guilherme; Nunes, José Luiz; Engelmann, Neele; Wiegmann, Alex; Araujo, Marcelo de (2023, 2 August). "Exploring the psychology of LLMs' Moral and Legal Reasoning". https://doi.org/10.48550/arXiv.2308.01264.