Proceedings of the Annual Meeting of the Cognitive Science Society, Berlin, Germany, July 31 - August 3, 2013, pp. 3604-3609
This study extends the learning and use of affordances on robots on two fronts. First, we use the very same affordance learning framework that was used for learning the affordances of inanimate things to learn social affordances, that is, affordances whose existence requires the presence of humans. Second, we use the learned affordances for making multi-step plans. Specifically, an iCub humanoid platform is equipped with a perceptual system to sense objects placed on a table as well as the presence and state of humans in the environment, and with a behavioral repertoire that consists of simple object manipulations as well as voice behaviors that utter simple verbs. After interacting with objects and humans, the robot learns a set of affordances with which it can make multi-step plans towards achieving a demonstrated goal.
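To make the planning idea concrete, the following Python sketch shows one way learned affordances could drive multi-step planning: each affordance is treated as a (behavior, precondition, predicted effect) record, and a forward search over predicted states looks for a behavior sequence that satisfies a demonstrated goal. The affordance entries, predicate names, and search procedure here are illustrative assumptions, not the framework or representations used in the paper.

```python
from collections import deque

# Hypothetical affordance records: applying `behavior` to a state that
# satisfies `precondition` is predicted to produce `effect` (a state update).
AFFORDANCES = [
    {"behavior": "push",
     "precondition": {"reachable": True},
     "effect": {"position": "far"}},
    {"behavior": "say-take",  # a voice behavior uttering a simple verb
     "precondition": {"human_present": True, "position": "far"},
     "effect": {"held_by_human": True}},
]

def applicable(state, precondition):
    """A behavior is applicable when the current state entails its precondition."""
    return all(state.get(k) == v for k, v in precondition.items())

def plan(initial_state, goal):
    """Breadth-first forward search over predicted effects until the
    demonstrated goal (a partial state description) is satisfied."""
    frontier = deque([(initial_state, [])])
    visited = set()
    while frontier:
        state, steps = frontier.popleft()
        if all(state.get(k) == v for k, v in goal.items()):
            return steps
        key = tuple(sorted(state.items()))
        if key in visited:
            continue
        visited.add(key)
        for aff in AFFORDANCES:
            if applicable(state, aff["precondition"]):
                next_state = {**state, **aff["effect"]}
                frontier.append((next_state, steps + [aff["behavior"]]))
    return None

print(plan({"reachable": True, "human_present": True}, {"held_by_human": True}))
# -> ['push', 'say-take']
```

In this toy setting, reaching the goal requires chaining an object manipulation with a social (voice) behavior, which mirrors the kind of multi-step plan the abstract describes, though the actual system learns its affordances from interaction rather than from hand-written records.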