The Triangles of Dishonesty: Modelling the Evolution of Lies, Bullshit, and Deception in Agent Societies

Abstract

Misinformation and disinformation spread through agent societies via the adoption of dishonest communication, a phenomenon recently exacerbated by advances in AI technologies. One way to understand dishonest communication is to model it from an agent-oriented perspective. In this paper we model dishonesty games, drawing on the existing literature on lies, bullshit, and deception: three prevalent but distinct forms of dishonesty. We use an evolutionary agent-based replicator model to simulate dishonesty games and show the differences between the three types of dishonest communication under two sets of assumptions: agents are either self-interested (payoff maximizers) or competitive (relative payoff maximizers). We show that: (i) truth-telling is not stable in the face of lying, but interrogation helps drive truth-telling in the self-interested case, though not in the competitive case; (ii) in the competitive case, agents stop bullshitting and start truth-telling, but this is not stable; and (iii) deception can only dominate in the competitive case, where truth-telling is a saddle point at which agents realise deception can provide better payoffs.
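
The replicator dynamics underlying such a model can be sketched in a few lines. The following is a minimal illustration, not the paper's actual model: it uses a hypothetical two-strategy payoff matrix (truth-telling vs. lying, with made-up payoff values) to show how a population of truth-tellers can be invaded by liars when lying strictly dominates, echoing finding (i).

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of two-strategy replicator dynamics.

    x: share of the population playing strategy 0 (truth-telling).
    A: 2x2 payoff matrix; A[i, j] is the payoff to strategy i
       against strategy j.
    """
    pop = np.array([x, 1.0 - x])
    fitness = A @ pop           # expected payoff of each strategy
    avg = pop @ fitness         # population-average payoff
    return x + dt * x * (fitness[0] - avg)

# Hypothetical payoffs (rows: truth-tell, lie; cols: opponent's strategy).
# Lying pays more against either opponent, so it strictly dominates.
A = np.array([[3.0, 1.0],
              [4.0, 2.0]])

x = 0.99                        # start with almost all truth-tellers
for _ in range(5000):
    x = replicator_step(x, A)
# The truth-telling share collapses toward zero: truth-telling is not
# stable in the face of lying under these assumed payoffs.
```

The paper's games involve richer strategy sets (and a competitive, relative-payoff variant), but the same update rule, applied to each strategy's share, drives the evolutionary simulations.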

Publication
In Proc. of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2024) (pp. 1645–1653). International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS)