
Finding deceivers in social context with large language models and how to find them: the case of the Mafia game

Abstract
Lies are ubiquitous in social interactions, yet socially conducted deception is difficult to study because people are unlikely to self-report their intentional deceptions, especially malicious ones. Social deduction games, in which deception is a core gameplay mechanic, offer a practical alternative for studying social deception. We therefore leveraged large language models' (LLMs) strong performance on complex scenarios that require reasoning, combined with prompt engineering, to detect deceivers in the game of Mafia given only partial information. This approach achieved higher accuracy on human gameplay data than previous BERT-based methods and even surpassed human accuracy. Furthermore, we conducted extensive experiments and analyses to uncover the strategies underlying the LLM's reasoning process, so that humans can understand the gist of its strategy. © The Author(s) 2024.
Author(s)
Yoo, Byunghwa; Kim, Kyung-Joong
Issued Date
2024-12
Type
Article
DOI
10.1038/s41598-024-81997-5
URI
https://scholar.gist.ac.kr/handle/local/9155
Publisher
Nature Research
Citation
Scientific Reports, v.14, no.1
ISSN
2045-2322
Appears in Collections:
Department of AI Convergence > 1. Journal Articles
Access and License
  • Access type: Open
File List

Items in Repository are protected by copyright, with all rights reserved, unless otherwise indicated.