Abstract
Imitation learning is a skill essential to human development and cognition [6, 5]. Naturally, imitation learning has become a focus of robotics research as well, particularly for interactive robots [1, 2]. By imitating the actions of a teacher, a cognitive agent learns a demonstrated action such that it may perform a similar action later and achieve a similar goal. Thus, we expect that a cognitive robot that learns from imitation would reuse what it has learned from one experience to reason about and address related, but different, problem scenarios.
The eventual goal of this work is to use a case-based approach to enable imitation learning in interactions such as the following. A human teacher guides the robot to complete a task, such as scooping the contents of one container into another. The robot records the demonstrated actions and observed objects, saving the demonstration as a source case in its case memory. At a later time, the robot is asked to repeat the scooping task in a new, target environment whose objects present a different set of features with which to parameterize and execute the task. The robot would then transfer its representation of the scooping task to accommodate the differences between the source and target environments, and execute an action based on the transferred representation to achieve the goal state in the target environment. Using a case-based framework to address this problem allows us to represent demonstrations as individual experiences in the robot’s case memory, and provides us with a framework for identifying, transferring, and executing a relevant source case demonstration in an unfamiliar target environment.
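The store–retrieve–transfer pipeline described above can be sketched in code. The following is a minimal, hypothetical illustration only: the class names (`Case`, `CaseMemory`), the feature-difference similarity measure, and the simple parameter-override transfer step are all assumptions for exposition, not the system's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A stored demonstration: a task label plus observed object features.

    Feature keys (e.g. "container_width") are hypothetical placeholders.
    """
    task: str
    features: dict

class CaseMemory:
    """Holds source cases recorded from teacher demonstrations."""

    def __init__(self):
        self.cases = []

    def store(self, case):
        self.cases.append(case)

    def retrieve(self, task, target_features):
        """Return the stored case for this task most similar to the target.

        Similarity here is just negative total feature difference over
        shared feature keys -- a stand-in for a real similarity metric.
        """
        candidates = [c for c in self.cases if c.task == task]
        if not candidates:
            return None

        def similarity(case):
            shared = set(case.features) & set(target_features)
            return -sum(abs(case.features[k] - target_features[k])
                        for k in shared)

        return max(candidates, key=similarity)

def transfer(source, target_features):
    """Adapt a source case to the target environment.

    This sketch simply overrides source parameters with the observed
    target values; a real transfer step would reason about the
    differences between environments.
    """
    adapted = dict(source.features)
    adapted.update(target_features)
    return Case(source.task, adapted)

# Record a scooping demonstration, then reuse it in a new environment.
memory = CaseMemory()
memory.store(Case("scoop", {"container_width": 0.10, "scoop_depth": 0.05}))

source = memory.retrieve("scoop", {"container_width": 0.15})
adapted = transfer(source, {"container_width": 0.15})
```

After transfer, `adapted` carries the source case's task structure with its parameters updated for the target environment, ready to drive execution.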