Playtesting: What is Beyond Personas


Ariyurek S., Sürer E., Betin Can A.

IEEE TRANSACTIONS ON GAMES, vol.15, no.3, pp.348-359, 2023 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 15 Issue: 3
  • Publication Date: 2023
  • DOI: 10.1109/tg.2022.3165882
  • Journal Name: IEEE TRANSACTIONS ON GAMES
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED)
  • Page Numbers: pp.348-359
  • Keywords: Automated playtesting, play persona, player modeling, reinforcement learning (RL)
  • Middle East Technical University Affiliated: Yes

Abstract

Playtesting is an essential step in the game design process. Game designers use the feedback from playtests to refine their designs, and they may employ procedural personas to automate the playtesting process. In this article, we present two approaches to improve automated playtesting. First, we propose the developing persona, which allows a persona to progress toward different goals; in contrast, a procedural persona is fixed to a single goal. Second, a human playtester knows which paths she has tested before and may test different paths during subsequent tests, whereas reinforcement learning (RL) agents disregard these previous paths. We propose a novel methodology that we refer to as the alternative path finder (APF). We train APF with previous paths and employ it during the training of an RL agent. APF modulates the reward structure of the environment while preserving the agent's goal, so that, when evaluated, the agent generates a different trajectory that achieves the same goal. We test our proposed methodologies using the General Video Game Artificial Intelligence and VizDoom frameworks, with a proximal policy optimization RL agent. First, we compare the playtest data generated by the developing and procedural personas; our experiments show that the developing persona provides better insight into the game and into how different players would play. Second, we present the alternative paths found using APF and argue why traditional RL agents cannot learn those paths.
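To make the reward-modulation idea concrete, the sketch below shows one plausible way an APF-style signal could be injected into a Gym-like training loop: states that lie close to previously recorded playtest trajectories receive a small penalty, while the task reward and goal are left untouched. This is only an illustrative stand-in, not the paper's learned APF model; the class name `AlternativePathFinderWrapper`, the Euclidean-distance similarity measure, and the parameters `penalty_scale` and `radius` are assumptions introduced for the example.

```python
import numpy as np

class AlternativePathFinderWrapper:
    """Illustrative sketch of APF-style reward modulation (hypothetical).

    Wraps an environment whose step() returns (obs, reward, done, info)
    and subtracts a similarity-based penalty whenever the agent visits
    states near previously recorded playtest trajectories, nudging the
    policy toward alternative paths without changing the goal reward.
    """

    def __init__(self, env, previous_paths, penalty_scale=0.1, radius=1.0):
        self.env = env
        # Flatten all previously tested trajectories into one state archive.
        self.archive = np.concatenate([np.asarray(p) for p in previous_paths])
        self.penalty_scale = penalty_scale
        self.radius = radius

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        # Penalize proximity to states from earlier playtests. Euclidean
        # distance in observation space is an assumption of this sketch;
        # the paper's APF learns the similarity signal instead.
        dists = np.linalg.norm(self.archive - np.asarray(obs), axis=1)
        nearest = dists.min()
        if nearest < self.radius:
            reward -= self.penalty_scale * (1.0 - nearest / self.radius)
        return obs, reward, done, info
```

In such a setup, a standard RL algorithm (e.g., proximal policy optimization) would be trained on the wrapped environment exactly as on the original one; only the reward it observes is reshaped away from already-tested paths.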