Mitigating Privacy Conflicts with Computational Theory of Mind


Erdogan E., AYDIN H., Dignum F., Verbrugge R., Yolum P.

24th International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2025, Michigan, United States Of America, 19 - 23 May 2025, pp.695-703 (Full Text)

  • Publication Type: Conference Paper / Full Text
  • City: Michigan
  • Country: United States Of America
  • Page Numbers: pp.695-703
  • Keywords: Human-Centered AI, Multi-Party Privacy, Theory of Mind
  • Middle East Technical University Affiliated: Yes

Abstract

Multiagent systems bring together agents that represent different users with possibly different concerns. When these agents interact to make decisions, conflicts can occur. A well-known case is privacy: agents often need to manage the privacy of content that belongs to multiple users, such as group pictures shared on social media. When agents have different expectations about how such content should be shared, multi-party privacy conflicts arise. How should we design agents to deal with such conflicts? We have conducted an empirical user study to understand the effect of group dynamics in various multi-party privacy settings. Our findings show that as users' beliefs and knowledge about others evolve, their privacy expectations shift as well. Inspired by this, we propose computational agents that employ a human-inspired Theory of Mind (ToM) model to help their users preserve their privacy in multi-party privacy conflicts. The agents can express empathy when others are in need but can also fight for their own privacy. We evaluate our approach in multiagent simulations with varying decision-making strategies. Our results demonstrate that ToM-enabled agents improve privacy preservation for all parties, even more so when their understanding of others is dynamically updated through learning.