A heuristic temporal difference approach with adaptive grid discretization


Thesis Type: Master's (M.S.)

Institution: Orta Doğu Teknik Üniversitesi, Faculty of Engineering, Department of Computer Engineering, Turkey

Approval Date: 2016

Student: OZAN BORA FİKİR

Supervisor: FARUK POLAT

Abstract:

Reinforcement learning (RL), an area of machine learning, tackles problems in which an autonomous agent must take actions in an environment to achieve an ultimate goal. In RL problems, the environment is typically formulated as a Markov decision process (MDP). In real-life problems, however, the environment is rarely clean enough to be formulated as an MDP, and the full-observability assumption of the MDP must be relaxed. The resulting model, the partially observable Markov decision process (POMDP), is more realistic but gives rise to a more difficult problem setting. In this model the agent cannot directly access the true state of the environment; it receives only observations, which provide partial information about the true state. There are two common ways to solve POMDP problems: the first is to disregard the true state of the environment and rely directly on the observations; the second is to define a belief state, a probability distribution over the actual states. However, since the belief state is defined as a probability distribution, the agent has to deal with a continuous space, unlike in the MDP case, which can easily become intractable from an autonomous agent's perspective. In this thesis, we focus on belief-space solutions and attempt to reduce the complexity of the belief space by partitioning the continuous belief space into well-defined, regular regions using two different types of grid discretization as an abstraction over the belief space. Then we define an approximate.
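The two core ideas in the abstract, maintaining a belief state via Bayesian updating and abstracting the continuous belief simplex onto a regular grid, can be illustrated with a minimal sketch. The transition/observation model below and the largest-remainder snapping rule are illustrative assumptions, not the thesis's actual discretization scheme:

```python
def belief_update(belief, action, obs, T, O):
    """Bayes filter for a POMDP belief state.

    T[a][s][s2]: probability of moving from state s to s2 under action a.
    O[a][s2][o]: probability of observing o in state s2 after action a.
    """
    n = len(belief)
    unnorm = [
        O[action][s2][obs] * sum(T[action][s][s2] * belief[s] for s in range(n))
        for s2 in range(n)
    ]
    z = sum(unnorm)  # normalizing constant P(obs | belief, action)
    return [p / z for p in unnorm]

def snap_to_grid(belief, resolution):
    """Map a belief to the nearest point of a regular grid on the simplex.

    Grid points have coordinates that are multiples of 1/resolution and
    still sum to 1 (largest-remainder rounding keeps the total fixed).
    """
    scaled = [b * resolution for b in belief]
    counts = [int(x) for x in scaled]
    leftover = resolution - sum(counts)
    # hand the leftover units to the coordinates with largest fractional parts
    by_frac = sorted(range(len(belief)), key=lambda i: scaled[i] - counts[i], reverse=True)
    for i in by_frac[:leftover]:
        counts[i] += 1
    return [c / resolution for c in counts]

# Toy 2-state example (hypothetical numbers): one "listen" action,
# observation is correct with probability 0.85.
T = [[[1.0, 0.0], [0.0, 1.0]]]          # listening does not change the state
O = [[[0.85, 0.15], [0.15, 0.85]]]      # noisy observation of the state

b = belief_update([0.5, 0.5], action=0, obs=0, T=T, O=O)   # -> [0.85, 0.15]
g = snap_to_grid(b, resolution=4)                           # -> [0.75, 0.25]
```

Snapping every updated belief onto such a grid turns the uncountable belief space into a finite set of representative points, which is what makes tabular-style value learning over beliefs feasible.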