Four DILab Papers Accepted for Presentation!

Last week, four DILab team papers were accepted for presentation at the 20th International Conference on Intelligent Tutoring Systems (ITS 2024), to be held in Greece in June 2024!

Jill Watson: VTA-GPT – A Conversational Virtual Teaching Assistant

AUTHORS: Sandeep Kakar, Pratyusha Maiti, Alekhya Nandula, Gina Nguyen, Karan Taneja, Aiden Zhao, Vrinda Nandan and Ashok Goel

SAMI: An AI Actor for Fostering Social Interactions in Online Classrooms

AUTHORS: Sandeep Kakar, Rhea B, Ida Camacho, Christopher Griswold, Alex Houk, Chris Leung, Mustafa Tekman, Patrick Westervelt, Qiaosi Wang and Ashok Goel

VERA: A Constructivist Framing of Wheel Spinning: Identifying Unproductive Behaviors with Sequence Analysis

AUTHORS: John Kos, Dinesh Ayyappan and Ashok Goel

Congratulations to all the authors!

SPECIAL ISSUE: AI Magazine: NSF’s National AI Institutes

On March 19, 2024, AAAI published a Special Issue of AI Magazine on NSF’s National AI Institutes. The issue includes an introduction by DILab’s Ashok Goel and AI-ALOE’s Chaohua Ou that describes the organization of the 20 articles in the issue.

The Special Issue also includes “AI-ALOE: AI for reskilling, upskilling, and workforce development” by Ashok Goel, Chris Dede, and Chaohua Ou. The article highlights how AI-ALOE is developing models and techniques to make AI assistants usable, learnable, teachable, and scalable.

New Publication: Explanation as Question Answering Based on User Guides

Congratulations to Vrinda Nandan, Spencer Rugaber, and Ashok Goel for the publication of their chapter, “Explanation as Question Answering based on User Guides”, in Explainable Agency in Artificial Intelligence: Research and Practice.

This book focuses on a subtopic of explainable AI (XAI) called explainable agency (EA), which involves producing records of decisions made during an agent’s reasoning, summarizing its behavior in human-accessible terms, and providing answers to questions about specific choices and the reasons for them. We distinguish explainable agency from interpretable machine learning (IML), another branch of XAI that focuses on providing insight (typically, for an ML expert) concerning a learned model and its decisions. In contrast, explainable agency typically involves a broader set of AI-enabled techniques, systems, and stakeholders (e.g., end users), where the explanations provided by EA agents are best evaluated in the context of human subject studies.

The chapters of this book explore the concept of endowing intelligent agents with explainable agency, which is crucial for agents to be trusted by humans in critical domains such as finance, self-driving vehicles, and military operations. This book presents the work of researchers from a variety of perspectives and describes challenges, recent research results, lessons learned from applications, and recommendations for future research directions in EA. The historical perspectives of explainable agency and the importance of interactivity in explainable systems are also discussed. Ultimately, this book aims to contribute to the successful partnership between humans and AI systems.

Chelsea Wang: Theory of Mind in Human-AI Interaction

Congratulations to Qiaosi “Chelsea” Wang on the acceptance of her proposal for a CHI 2024 workshop on Theory of Mind in Human-AI Interaction!

The ACM CHI Conference on Human Factors in Computing Systems is the premier international conference on Human-Computer Interaction. CHI 2024 embraces the theme of Surfing the World, reflecting its focus on pushing forth the wave of cutting-edge technology and riding the tide of new developments in human-computer interaction. The conference serves as a platform for researchers, practitioners, and industry leaders to share their latest work and ideas and to foster collaboration and innovation in the field.

The Frontier of Artificial Intelligence

On November 13, 2023, the U.S. National Science Foundation (NSF) published the latest episode of its Discovery Files podcast.

Joining the podcast are Aarti Singh from the AI Institute for Societal Decision Making; Amy McGovern from the AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography; Ashok Goel from the National AI Institute for Adult Learning and Online Education; Vikram Adve from the Artificial Intelligence for Future Agricultural Resilience, Management, and Sustainability Institute; and Michael Littman, division director for Information and Intelligent Systems in NSF’s Computer and Information Science and Engineering Directorate. They discuss how these institutes are revolutionizing the frontiers of AI and how society will benefit from these innovations.

Chelsea Wang: June 8th Symposium on Mental Models in Human-AI Interaction

Our very own Qiaosi “Chelsea” Wang and Dr. Ashok Goel are hosting a symposium on Mental Models in Human-AI Interaction on June 8th, 2023. Check out the details below:

Symposium on Mental Models in Human-AI Interaction

As we interact with increasingly advanced AI systems, different mental models are constructed and adapted to facilitate, drive, and support the success of human-AI interactions.

What kinds of mental models are there? How do we leverage them in human-AI interaction? What are some sociotechnical considerations of mental models? Join us on June 8th for an online symposium on “Mental Models in Human-AI Interaction” to explore these questions with us.

Date: Thursday, June 8th, 2023

Time: 3pm to 6pm Eastern Time

Location: Zoom (register here to receive the Zoom meeting link: https://gatech.zoom.us/meeting/register/tJMocOmrqj0uGdRgxezgtqaWESXdLWa–9Mx)

Contact: Chelsea (qswang@gatech.edu)

Ashok Goel: CogSci 2022

On July 29, 2022, Ashok Goel gave a presentation at CogSci 2022, whose theme was Cognitive Diversity.

The presentation is available here: https://docs.google.com/document/d/1dMG0B71HbJLCblqLA6E9Ys2gBQgMlckb/edit?usp=sharing&ouid=101146215278313978371&rtpof=true&sd=true