Memory Efficient Factored Abstraction for Reinforcement Learning


Sahin C., Cilden E., Polat F.

IEEE 2nd International Conference on Cybernetics (CYBCONF), Gdynia, Poland, 24 - 26 June 2015, pp.18-23

  • Publication Type: Conference Paper / Full Text
  • City: Gdynia
  • Country: Poland
  • Page Numbers: pp.18-23
  • Keywords: reinforcement learning, factored MDP, learning abstractions, extended sequence tree

Abstract

Classical reinforcement learning techniques are often inadequate for problems with large state spaces due to the curse of dimensionality. If the states can be represented as a set of variables, the environment can be modeled more compactly. Automatic detection and use of temporal abstractions during learning has proven effective in increasing learning speed. In this paper, we propose a factored automatic temporal abstraction method based on an existing temporal abstraction strategy, namely the extended sequence tree algorithm, which takes state differences into account via state variable changes. The proposed method is shown to provide significant memory gain on selected benchmark problems.
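The core idea of the factored representation can be illustrated with a minimal sketch (not the authors' implementation; the grid-world variables below are hypothetical): when states are sets of variables, an abstraction can record only the variables that change at each step, rather than storing every full state.

```python
# Illustrative sketch, not the paper's algorithm: factored states as
# variable dictionaries, with a trajectory compressed by storing only
# per-step variable changes instead of complete states.

def delta(prev, curr):
    """Return the variables whose values differ between two factored states."""
    return {k: v for k, v in curr.items() if prev.get(k) != v}

# A short trajectory in a hypothetical grid world with four state variables.
trajectory = [
    {"x": 0, "y": 0, "has_key": False, "door_open": False},
    {"x": 1, "y": 0, "has_key": False, "door_open": False},
    {"x": 1, "y": 1, "has_key": True,  "door_open": False},
    {"x": 2, "y": 1, "has_key": True,  "door_open": True},
]

# Flat representation: every state stored in its entirety.
full_entries = sum(len(s) for s in trajectory)

# Factored representation: the initial state plus per-step variable changes.
deltas = [delta(a, b) for a, b in zip(trajectory, trajectory[1:])]
factored_entries = len(trajectory[0]) + sum(len(d) for d in deltas)

print(full_entries)      # 16 variable entries (4 states x 4 variables)
print(factored_entries)  # 9 entries (4 initial + 1 + 2 + 2 changed)
```

The gap between the two counts grows with trajectory length and state dimensionality, which is the intuition behind the memory gain reported for the factored extended sequence tree.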