Thesis Type: Master's
Institution: Orta Doğu Teknik Üniversitesi (Middle East Technical University), Graduate School of Informatics, Department of Cognitive Sciences, Turkey
Approval Date: 2018
Student: Efecan Yılmaz
Advisor: Cengiz Acartürk
Abstract: Research in human-robot interaction (HRI) involves topics such as interlocutor collaboration in joint action, deixis in HRI, and the properties of shared environments. Referring expressions, in particular, are studied in joint action from both the generation and the resolution perspectives, and selective visual attention in gaze interaction and saliency patterns are also active topics in HRI. The present thesis investigated an HRI and joint-action situation in a virtual reality (VR) environment, using eye tracking integrated into a head-mounted display, in order to explore the augmentation of non-verbal communication in HRI. For this purpose, we employed a multimodal approach to communication, combining non-verbal deictic expressions (gaze) with explicit verbal references in an experimental setting with multiple robot agents and a single human. The number of robot agents was varied across the experiments to investigate the influence of this factor on our metrics. We also used two distinct robot agent designs to explore a possible interaction effect with the number of robot agents, evaluating participants' deixis resolution time and accuracy, as well as their gaze interaction patterns. The results showed that participants' accuracy, gaze interaction, and response time in deixis resolution were significantly influenced by the varying number of robot agents. However, this effect was absent when participants were presented with explicit verbal references. The gaze interaction results also showed that the number of robot agents significantly influenced the saliency of the robot agents. Moreover, participants interacted with the robot agents even when the joint task did not require gaze interaction.