Memory Efficient Factored Abstraction for Reinforcement Learning


Sahin C., Cilden E., Polat F.

IEEE 2nd International Conference on Cybernetics (CYBCONF), Gdynia, Poland, 24-26 June 2015, pp. 18-23

  • Publication Type: Conference Paper / Full Text
  • City of Publication: Gdynia
  • Country of Publication: Poland
  • Page Numbers: pp. 18-23
  • Keywords: reinforcement learning, factored MDP, learning abstractions, extended sequence tree
  • Affiliated with Middle East Technical University: Yes

Abstract

Classical reinforcement learning techniques are often inadequate for problems with large state spaces due to the curse of dimensionality. If the states can be represented as a set of variables, the environment can be modeled more compactly. Automatic detection and use of temporal abstractions during learning has been shown to be effective in increasing learning speed. In this paper, we propose a factored automatic temporal abstraction method based on an existing temporal abstraction strategy, the extended sequence tree algorithm, which captures state differences via state variable changes. The proposed method is shown to provide significant memory gains on selected benchmark problems.
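To make the memory argument concrete, the sketch below illustrates the general idea of exploiting a factored state representation: rather than storing each full state along a stored trajectory or abstraction structure, only the state variables that change between consecutive states are kept. This is a minimal illustration of the principle, not the paper's extended sequence tree algorithm; all function and variable names are hypothetical.

```python
# Minimal sketch (not the authors' implementation) of the memory idea behind
# factored abstraction: store the first factored state in full, then only the
# per-step variable changes (deltas). All names below are hypothetical.

def diff_factored_states(prev_state, next_state):
    """Return only the variables whose values changed between two factored states."""
    return {var: val for var, val in next_state.items() if prev_state.get(var) != val}


def compress_trajectory(states):
    """Keep the first state in full, plus one delta per transition."""
    if not states:
        return None, []
    deltas = [diff_factored_states(a, b) for a, b in zip(states, states[1:])]
    return states[0], deltas


def reconstruct_trajectory(first_state, deltas):
    """Rebuild the full state sequence from the initial state and the deltas."""
    states = [dict(first_state)]
    for delta in deltas:
        nxt = dict(states[-1])
        nxt.update(delta)
        states.append(nxt)
    return states


if __name__ == "__main__":
    # Toy factored states (grid position plus a key flag): only one or two
    # variables change per step, so deltas are much smaller than full states.
    trajectory = [
        {"x": 0, "y": 0, "has_key": False},
        {"x": 1, "y": 0, "has_key": False},
        {"x": 1, "y": 1, "has_key": True},
    ]
    first, deltas = compress_trajectory(trajectory)
    assert reconstruct_trajectory(first, deltas) == trajectory
    print(deltas)  # [{'x': 1}, {'y': 1, 'has_key': True}]
```

The savings grow with the number of state variables: when only a few variables change per transition, each stored delta is a small fraction of a full state, which is the intuition behind the memory gains reported on the benchmark problems.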