Positive impact of state similarity on reinforcement learning performance


Girgin S., Polat F., Alhajj R.

IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, vol.37, no.5, pp.1256-1270, 2007 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 37 Issue: 5
  • Publication Date: 2007
  • DOI Number: 10.1109/TSMCB.2007.899419
  • Journal Name: IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.1256-1270
  • Keywords: action-value function, learning performance, optimal policies, reinforcement learning (RL), similarity function, state similarity
  • Middle East Technical University Affiliated: Yes

Abstract

In this paper, we propose a novel approach to identify states with similar subpolicies and show how they can be integrated into the reinforcement learning framework to improve learning performance. The method utilizes a specialized tree structure to identify common action sequences of states, which are derived from possible optimal policies, and defines a similarity function between two states based on the number of such sequences. Using this similarity function, updates on the action-value function of a state are reflected onto all similar states. This allows experience acquired during learning to be applied in a broader context. The effectiveness of the method is demonstrated empirically.
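
The abstract describes the core mechanism: a similarity function over states, derived from shared action sequences, is used to reflect each action-value update onto all similar states. The following is a minimal, hypothetical sketch of that idea in Python; the function names, the Jaccard-style overlap over sequence sets, and the tabular Q-learning update are illustrative assumptions, not the authors' exact formulation.

```python
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.95        # learning rate and discount factor (assumed values)
Q = defaultdict(float)          # tabular action-value function: Q[(state, action)]

def similarity(s1, s2, sequence_sets):
    """Overlap of the action sequences recorded for two states.

    sequence_sets[s] is assumed to hold the set of action sequences gathered
    for state s, e.g. from a tree built over fragments of candidate optimal
    policies.
    """
    a, b = sequence_sets[s1], sequence_sets[s2]
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def similarity_q_update(s, a, r, s_next, actions, states, sequence_sets):
    """One Q-learning step whose TD update is also reflected onto similar states."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    td_error = r + GAMMA * best_next - Q[(s, a)]
    Q[(s, a)] += ALPHA * td_error
    for s2 in states:
        if s2 == s:
            continue
        w = similarity(s, s2, sequence_sets)
        if w > 0.0:
            # Propagate the experience, scaled by how similar s2 is to s.
            Q[(s2, a)] += ALPHA * w * td_error
```

In this sketch only the temporal-difference error is shared, scaled by the similarity weight, so dissimilar states are left untouched and the update reduces to ordinary Q-learning when all similarities are zero.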