IEEE 29th International Conference on Tools with Artificial Intelligence (ICTAI), Massachusetts, United States, 6-8 November 2017, pp. 1-5
In the reinforcement learning context, subgoal discovery methods aim to find bottlenecks in the problem's state space so that the problem can naturally be decomposed into smaller subproblems. In this paper, we propose a concept filtering method that extends an existing subgoal discovery method, namely diverse density, so that it applies to both fully and partially observable RL problems. The proposed method successfully discovers useful subgoals with the help of multiple instance learning. Compared to the original algorithm, the resulting approach runs significantly faster without sacrificing solution quality. Moreover, it can effectively be employed to find observational bottlenecks in problems with perceptually aliased states.
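As background on the diverse density criterion the abstract builds on, the following is a minimal sketch of the standard noisy-or multiple instance learning score used for subgoal discovery, where successful trajectories act as positive bags of states and unsuccessful ones as negative bags; the function name, the Gaussian similarity, and its `scale` parameter are illustrative assumptions, not the paper's exact formulation.

```python
import math

def diverse_density(candidate, pos_bags, neg_bags, scale=1.0):
    """Noisy-or diverse density score of a candidate bottleneck state.

    pos_bags: successful trajectories (lists of state feature tuples).
    neg_bags: unsuccessful trajectories. A candidate scores high when it
    appears (is similar to some state) in every positive bag and in no
    negative bag -- the intuition behind bottleneck-based subgoals.
    """
    def sim(s):
        # Gaussian similarity between the candidate and state s (assumed form)
        d2 = sum((a - b) ** 2 for a, b in zip(candidate, s))
        return math.exp(-scale * d2)

    dd = 1.0
    for bag in pos_bags:
        # noisy-or: probability that at least one instance matches
        dd *= 1.0 - math.prod(1.0 - sim(s) for s in bag)
    for bag in neg_bags:
        # probability that no instance in the negative bag matches
        dd *= math.prod(1.0 - sim(s) for s in bag)
    return dd
```

Under this sketch, a state visited on every successful trajectory but absent from failed ones receives a score near 1, making it a natural subgoal candidate.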