We present a new methodology that employs tester agents to automate video game testing. We introduce two types of agents: synthetic and human-like. Our agents are derived from Sarsa and MCTS, but they focus on finding defects, whereas traditional game-playing agents focus on maximizing game score. The synthetic agent uses test goals generated from game scenarios; these goals are further modified to examine the effects of unintended game transitions. The human-like agent uses test goals extracted from tester trajectories by our proposed multiple greedy-policy inverse reinforcement learning (MGP-IRL) algorithm. We use our agents to produce test sequences, run the game with these sequences, and check for bugs at each run using an automated test oracle. We compared the bug-finding success of human-like and synthetic agents, and evaluated the similarity between human-like agents and human testers. Using the GVG-AI framework, we collected 427 trajectories from human testers and created three testbed games with 12 levels containing 45 bugs. Our experiments reveal that human-like and synthetic agents are competitive with human testers at finding bugs, and that MGP-IRL increases the human-likeness of agents while improving bug-finding performance.
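The abstract describes Sarsa-derived agents whose reward signal targets test goals rather than game score. As a rough illustration only, under assumed names (`env_step`, `is_test_goal`, `sarsa_tester` are all hypothetical, not the paper's implementation), a tabular Sarsa loop rewarded for reaching a test goal might be sketched as:

```python
import random
from collections import defaultdict

def sarsa_tester(env_step, actions, start_state, is_test_goal,
                 episodes=200, alpha=0.5, gamma=0.9, epsilon=0.5, seed=0):
    """Tabular Sarsa whose reward is reaching a test goal, not game score.

    env_step(s, a) -> next state; is_test_goal(s) -> bool.
    Both are assumed interfaces for this sketch.
    """
    rng = random.Random(seed)
    Q = defaultdict(float)

    def choose(s):
        # epsilon-greedy action selection over the tabular Q-values
        if rng.random() < epsilon:
            return rng.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = start_state
        a = choose(s)
        for _ in range(50):  # cap episode length
            s2 = env_step(s, a)
            # reward 1 only when the agent hits a test goal state
            r = 1.0 if is_test_goal(s2) else 0.0
            a2 = choose(s2)
            # on-policy Sarsa update
            Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
            s, a = s2, a2
            if r:
                break
    return Q
```

For example, on a toy corridor where states are positions 0..4 and the test goal is position 4, the learned greedy policy moves the agent toward the goal state.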