Towards life-long adaptive agents: using metareasoning for combining knowledge-based planning with situated learning

Abstract

We consider task planning for long-living intelligent agents situated in dynamic environments. Specifically, we address the problem of incomplete knowledge of the world due to the addition of new objects with unknown action models. We propose a multilayered agent architecture that uses metareasoning to control hierarchical task planning and situated learning, monitor the expectations generated by a plan against world observations, form goals and rewards for the situated reinforcement learner, and learn the missing planning knowledge relevant to the new objects. We use occupancy grids as a low-level representation for the high-level expectations to capture changes in the physical world caused by the added objects, and provide a similarity method for detecting discrepancies between the expectations and the observations at run-time; the meta-reasoner uses these discrepancies to formulate goals and rewards for the learner, and the learned policies are added to the hierarchical task network plan library for future reuse. We describe experiments in the Minecraft and Gazebo microworlds that demonstrate the efficacy of the architecture and the learning technique. We compare our approach against an ablated reinforcement learning (RL) version, and our results indicate that this form of expectation improves the learning curve for RL while being more general than propositional representations.
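To make the expectation-monitoring idea concrete, the following is a minimal illustrative sketch, not the paper's implementation: it compares an expected occupancy grid with an observed one using a Jaccard-style similarity and returns the mismatched cells that a meta-reasoner could use to formulate a learning goal. The grid size, similarity measure, and threshold are all assumptions made for the example.

```python
import numpy as np

def grid_similarity(expected: np.ndarray, observed: np.ndarray) -> float:
    """Jaccard similarity between the occupied cells of two binary grids."""
    inter = np.logical_and(expected, observed).sum()
    union = np.logical_or(expected, observed).sum()
    return 1.0 if union == 0 else inter / union

def detect_discrepancy(expected: np.ndarray, observed: np.ndarray, threshold: float = 0.9):
    """Return (is_discrepant, mismatched cell indices); the mismatched cells
    could serve as a candidate goal region for the situated learner."""
    sim = grid_similarity(expected, observed)
    mismatch = np.argwhere(expected != observed)
    return sim < threshold, mismatch

# Toy usage: a new, unmodeled object occupies cells the plan did not expect.
expected = np.zeros((8, 8), dtype=bool)
observed = expected.copy()
observed[3:5, 3:5] = True  # unexpected occupancy from the new object
discrepant, cells = detect_discrepancy(expected, observed)
print(discrepant, cells.tolist())
```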
