Our very own Qiaosi Wang (Chelsea) and Dr. Ashok Goel are hosting a symposium on Mental Model in Human-AI Interaction on June 8th, 2023. Check out the details below:
Symposium on Mental Model in Human-AI Interaction
As we interact with increasingly advanced AI systems, different mental models are constructed and adapted to facilitate, drive, and support the success of human-AI interactions.
What kinds of mental models are there? How do we leverage them in human-AI interaction? What are some sociotechnical considerations of mental models? Join us on June 8th for an online symposium on “Mental Model in Human-AI Interaction” to explore these questions.
Mukundan just graduated from MSCS this semester and will be pursuing his PhD degree in Computer Science at Northwestern University, starting Fall 2020. Mukundan worked on the Errol project during his time at DILab. Congratulations Mukundan! We are excited to see what the future holds for you 🙂
Varsha completed her MSCS degree at Georgia Tech in 2019. Varsha was a principal developer on the Jill Watson project during her time here at DILab. Congratulations to Varsha, and we are excited to see what the future holds for you! 🙂
We have two papers accepted at Learning@Scale 2020. The first paper explores students’ perspectives on affect-sensitive technology in large-scale educational contexts, led by PhD student Qiaosi Wang in collaboration with Shan Jing, Dr. David Joyner, Dr. Lauren Wilcox, Hong Li, Dr. Thomas Ploetz, and Dr. Betsy DiSalvo. The second paper, a collaboration with Dr. David Joyner, explores social presence in online educational contexts; its co-authors include Qiaosi Wang, Suyash Thakare, Shan Jing, Dr. Ashok Goel, and Dr. Blair MacIntyre.
New paper accepted at AIED 2020, led by PhD student Sungeun An and co-authored by Robert Bates, Jen Hammock, Spencer Rugaber, Emily Weigel, and Ashok Goel. This paper explores scientific modeling using large-scale knowledge.
Our work on designing and evaluating a community-building virtual agent (Jill Watson Social Agent) was published as a CHI 2020 Late-Breaking Work. This work was led by PhD student Qiaosi Wang, with co-authors Shan Jing, Ida Camacho, David Joyner, and Ashok Goel. Congratulations to the team! This work can be viewed through the ACM Digital Library: https://dl.acm.org/doi/abs/10.1145/3334480.3382878
Tesca Fitzgerald (http://www.tescafitzgerald.com/) recently passed her dissertation defense. Her work is titled “Human-Guided Task Transfer for Interactive Robots”. Tesca is now a Postdoctoral Researcher in the Robotics Institute at Carnegie Mellon University.
Here is the abstract of Tesca’s dissertation:
Adaptability is an essential skill in human cognition, enabling us to draw from our extensive, life-long experiences with various objects and tasks in order to address novel problems. To date, robots do not have this kind of adaptability, and yet, as our expectations of robots’ interactive and assistive capacity grow, it will be increasingly important for them to adapt to unpredictable environments in the same manner that humans do. While a robot can be pre-programmed for many tasks and their variations, specifying these behaviors would require tedious effort, and still would not adequately prepare a robot for every scenario it may encounter. Rather than require more demonstration data in order to attempt generalization across these variations, we leverage continued interaction with the teacher within the context of the new target task.
This approach first requires an understanding of how task differences, interaction, and transfer are related. We define a taxonomy of transfer problems that models the relationship between task differences and information requirements for transfer. Based on this taxonomy, we analyze a particular category of transfer problems in which the target environment contains new, unfamiliar objects. We present an interactive approach that enables the robot to learn the mapping between familiar source objects and new target objects using assistance from a human teacher (provided by indicating the next object to be used at each step of the task). After a limited number of assists, our approach enables the robot to autonomously infer the objects used to complete the remainder of the task. Furthermore, we identify the effect of noisy feedback during interaction and present a confidence-guided approach to moderating the robot’s requests for assistance.
We then address a second category of transfer problems in which we replace the tool that the robot uses to manipulate other objects in the environment. For example, the robot may learn a scooping task using a spoon, and at a later time must transfer its task model to use a mug instead. We utilize interactive corrections to record the motion constraints imposed by the new tool, and then model the underlying relationship between the robot’s gripper and the new tooltip. Not only do we find that corrections are sufficient for the robot to model the new constraints afforded by the tool within the context of the corrected task, but the learned model can also be reused on other tasks that provide a similar context for that tool (e.g. in the tool surfaces used to execute the task).
Overall, this work enables a robot to address a wide variety of transfer problems without extensive demonstrations or domain-specific knowledge, and thus contributes toward a future of adaptive, collaborative robots.
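To make the confidence-guided assistance idea in the abstract concrete, here is a minimal, hypothetical sketch: the robot ranks candidate target objects by similarity to each familiar source object, and when its confidence in the best match falls below a threshold, it requests an assist from the teacher instead of acting autonomously. All names, the similarity table, and the thresholding rule here are illustrative assumptions, not the dissertation’s actual method.

```python
def similarity(source_obj: str, target_obj: str, scores: dict) -> float:
    """Look up a precomputed feature-similarity score in [0, 1].

    The `scores` table stands in for whatever learned object-mapping
    model the robot maintains (illustrative assumption).
    """
    return scores.get((source_obj, target_obj), 0.0)


def transfer_task(source_steps, target_objects, scores, threshold=0.7):
    """Map each step's source object to a target object.

    For each step, pick the most similar target object; if the
    confidence in that choice is below `threshold`, count a request
    for teacher assistance (simulated here by simply tallying it).
    Returns the chosen objects and the number of assists requested.
    """
    plan, assists = [], 0
    for step_obj in source_steps:
        # Rank candidate target objects by similarity to the source object.
        best = max(target_objects, key=lambda t: similarity(step_obj, t, scores))
        confidence = similarity(step_obj, best, scores)
        if confidence < threshold:
            # Low confidence: the robot would ask the teacher which
            # object to use next rather than act on its own guess.
            assists += 1
        plan.append(best)
    return plan, assists
```

For example, with a high-similarity spoon-to-ladle pairing and an ambiguous bowl, the robot would map the spoon autonomously but request one assist for the bowl.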