Sunday, June 16, 2024

Are Large Language Models moral hypocrites? A study based on Moral Foundations (pre-print article / 2024)

Large language models (LLMs) have taken centre stage in debates on Artificial Intelligence. Yet there remains a gap in how to assess LLMs' conformity to important human values. In this paper, we investigate whether state-of-the-art LLMs, GPT-4 and Claude 2.1 (Gemini Pro and LLAMA 2 did not generate valid results), are moral hypocrites. We employ two research instruments based on the Moral Foundations Theory: (i) the Moral Foundations Questionnaire (MFQ), which investigates which values are considered morally relevant in abstract moral judgements; and (ii) the Moral Foundations Vignettes (MFVs), which evaluate moral cognition in concrete scenarios related to each moral foundation.

We characterise conflicts in values between these different abstractions of moral evaluation as hypocrisy. We found that both models displayed reasonable consistency within each instrument compared to humans, but they displayed contradictory and hypocritical behaviour when we compared the abstract values endorsed in the MFQ to the evaluation of concrete moral violations in the MFVs.

[ PDF ]

© How to cite this article:
Nunes, José Luiz; Guilherme Almeida; Marcelo de Araujo; Simone D. J. Barbosa. 2024. "Are large language models moral hypocrites? A study based on moral foundations". arXiv. https://doi.org/10.48550/ARXIV.2405.11100.