Playtesting: What is Beyond Personas



Ariyurek S., SÜRER E., BETİN CAN A.

IEEE Transactions on Games, 2022 (Refereed Journals of Other Institutions)

  • Publication Type: Article
  • Publication Date: 2022
  • Doi Number: 10.1109/tg.2022.3165882
  • Title of Journal : IEEE Transactions on Games
  • Keywords: Automated Playtesting, Games, Measurement, Neural networks, Optimization, Play Persona, Player Modeling, Q-learning, Reinforcement Learning, Training, Trajectory

Abstract

We present two approaches to improve automated playtesting. First, we propose the developing persona, which allows a persona to progress through different goals, whereas the procedural persona is fixed to a single goal. Second, a human playtester knows which paths she has tested before, and in subsequent tests she may try different paths; Reinforcement Learning (RL) agents, however, disregard these previous paths. We propose a novel methodology that we refer to as the Alternative Path Finder (APF). We train APF on previous paths and employ it during the training of an RL agent. APF modulates the reward structure of the environment while preserving the agent's goal, so that, when evaluated, the agent generates a different trajectory that achieves the same goal. We test our proposed methodologies using the General Video Game Artificial Intelligence (GVGAI) and VizDoom frameworks, with a Proximal Policy Optimization (PPO) RL agent. First, we compare the playtest data generated by the developing and procedural personas; our experiments show that the developing persona provides better insight into the game and into how different players would play it. Second, we present the alternative paths found using APF and argue why traditional RL agents cannot learn those paths.
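The abstract's core APF idea, modulating the reward so an agent avoids previously tested paths while keeping the same goal, can be sketched roughly as follows. This is a hedged illustration only: the paper trains APF as a learned model over trajectories, whereas here a simple visit-count table over discretized states stands in for it, and the class name, `penalty_scale` parameter, and `modulate` method are all hypothetical.

```python
# Illustrative (not the paper's actual model): penalize reward in states that
# appeared in earlier playtest trajectories, nudging the RL agent toward an
# alternative path to the same goal.
from collections import Counter

class AlternativePathFinder:
    def __init__(self, previous_trajectories, penalty_scale=0.5):
        # Count how often each (discretized) state occurred in earlier playtests.
        self.visit_counts = Counter(
            state for traj in previous_trajectories for state in traj
        )
        self.penalty_scale = penalty_scale

    def modulate(self, state, reward):
        # Subtract a penalty proportional to how familiar this state is;
        # unvisited states keep the original task reward, so the goal is preserved.
        return reward - self.penalty_scale * self.visit_counts[state]

# Usage: two earlier playtests both passed through state (1, 1).
apf = AlternativePathFinder([[(0, 0), (1, 1)], [(1, 1), (2, 2)]])
print(apf.modulate((1, 1), reward=1.0))  # familiar state: 1.0 - 0.5 * 2 = 0.0
print(apf.modulate((3, 3), reward=1.0))  # novel state: reward unchanged, 1.0
```

During RL training, the environment's per-step reward would be passed through `modulate` before reaching the agent, which is one plausible reading of "modulates the reward structure of the environment while preserving the agent's goal".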