Thesis Type: Master's
Institution Where the Thesis Was Conducted: Orta Doğu Teknik Üniversitesi, Enformatik Enstitüsü, Bilişsel Bilimler Anabilim Dalı, Turkey
Thesis Approval Date: 2015
Student: ECE KAMER TAKMAZ
Supervisor: HÜSEYİN CEM BOZŞAHİN
Abstract: Our prior plans supervise our actions and help us form new plans, which turn out to be modified versions of related previous plans. This is the process we call "plan to learn", a procedure involving self-supervision to learn more about the environment and extract meaning from it. In cases where our prior plans fall short, it is not easy for us to produce efficient or complete plans. For instance, in a world very different from our own, with objects whose affordances appear to be false, encountering seemingly random situations prevents us from understanding our environment and how to act in it. If the differences are small enough, we might be able to find patterns and adapt to that environment. However, if the differences are too large to make sense of, we can get stuck in such situations, as we would not know how to make a reasonable plan in that setting without supervision and without using our innate planning mechanism, and thus we cannot "learn to plan". In this thesis, these two processes, "plan to learn" and "learn to plan", are compared in structured problem domains. To this end, we conducted two experiments using a video game involving object interaction and, in light of the outcomes, developed a computer model that uses prior plans containing affordances to learn about the environment and to update its knowledge of the world.
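The abstract describes a model that starts from prior plans containing affordances, acts with them, and updates its knowledge of the world when the environment contradicts them. The sketch below is only a minimal illustration of that general "plan to learn" loop under assumed data structures (the `Step`, `Plan`, and `PlanToLearnAgent` names and the toy world are hypothetical, not the thesis's actual model): the agent executes a prior plan, checks each step's expected affordance against what the world affords, and revises its affordance beliefs on mismatch.

```python
from dataclasses import dataclass


@dataclass
class Step:
    obj: str          # object the step acts on
    affordance: str   # affordance the step assumes the object offers (e.g. "cuttable")


@dataclass
class Plan:
    goal: str
    steps: list[Step]


class PlanToLearnAgent:
    """Illustrative agent: prior plans supervise action and drive belief updates."""

    def __init__(self, prior_plans: list[Plan]):
        self.prior_plans = prior_plans
        # Belief state: affordances the agent currently credits each object with,
        # initialized from what its prior plans presuppose.
        self.beliefs: dict[str, set[str]] = {}
        for plan in prior_plans:
            for step in plan.steps:
                self.beliefs.setdefault(step.obj, set()).add(step.affordance)

    def act(self, goal: str, world: dict[str, set[str]]) -> bool:
        """Run the prior plan for `goal`; update beliefs whenever a step's
        assumed affordance turns out to be false in this world."""
        plan = next((p for p in self.prior_plans if p.goal == goal), None)
        if plan is None:
            # No related prior plan to adapt: the "stuck" case the abstract mentions.
            return False
        success = True
        for step in plan.steps:
            actual = world.get(step.obj, set())
            if step.affordance not in actual:
                # The expected affordance is false here: revise this object's beliefs.
                self.beliefs[step.obj].discard(step.affordance)
                self.beliefs[step.obj] |= actual
                success = False
        return success


if __name__ == "__main__":
    prior = [Plan("slice_bread", [Step("knife", "graspable"), Step("bread", "cuttable")])]
    agent = PlanToLearnAgent(prior)
    # A world with a false affordance: the bread is not cuttable.
    odd_world = {"knife": {"graspable"}, "bread": {"throwable"}}
    print(agent.act("slice_bread", odd_world))   # False: the prior plan fails
    print(agent.beliefs["bread"])                # beliefs revised from observation
```

The design choice illustrated here is that learning is driven by plan execution itself: the prior plan supplies the expectations, and only the steps whose expectations fail trigger an update, which is one plausible reading of "using prior plans to learn about the environment".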