DILab Seeks Research Assistants

The Design Intelligence Lab is looking for a few OMSCS students to work with us as research assistants starting in Fall 2024. DILab conducts research at the intersection of AI, Cognitive Science, and Learning Technology. At present, much of our work focuses on AI for learning and education (https://aialoe.org/).

Ongoing research projects include conversational courseware (for example, Jill Watson for interactive books), interactive videos (Ivy), AI social actors (SAMI), and personalization of learning in systems thinking (VERA), as well as research on self-explanation, machine teaching, and theory of mind. In addition, we are working on a large project called Architecture for Learning (A4L).

We have a long history of working with OMSCS students and have several in our lab at present. We love working with OMSCS students because typically they are very smart, very skilled, very motivated and very professional. We would like to work with OMSCS students who can commit to at least 10 hours of work a week for at least a year: it takes that kind of effort to make a contribution. This means that the candidate should be in the OMSCS program through the end of Summer 2025 or beyond. 

The responsibilities typically involve development of AI research software. We would prefer students who have taken at least one course in software engineering and at least one in AI as part of their OMSCS work.

The work can be done remotely. Unfortunately, there is no pay for these positions. However, you may register for credit or even do a three-semester-long research project with me.

If you are interested, please contact my executive assistant Moriah Ugi (mugi3@gatech.edu) with a CV.

DILab Papers Accepted to L@S and EDM

Several Design Intelligence Lab members recently had papers accepted to EDM (Educational Data Mining) 2024 and L@S (Learning@Scale) 2024 and their associated workshops, both conferences chaired by DILab’s David Joyner and hosted by Georgia Tech’s Global Learning Center.

Combining Cognitive and Generative AI for Self-Explanation in Interactive AI Agents

Paper at EDM Workshop on Human-Centric eXplainable AI in Education (HEXED) – Shalini Sushri, Rahul Dass, Rhea Basappa, Hong Lu & Ashok Goel

How Do Students Interact with an LLM-Powered Virtual Teaching Assistant in Different Educational Settings?

Paper at EDM Workshop on Leveraging Large Language Models for Next-Generation Educational Technologies – Pratyusha Maiti & Ashok Goel

Does Jill Watson Enhance Teacher Presence?

Poster Paper at L@S – Robert Lindgren, Sandeep Kakar, Pratyusha Maiti, Karan Taneja & Ashok Goel

Engaging Learnersourcing with an AI Social Agent in Online Learning

Paper at the Workshop on Learnersourcing: Student-Generated Content at ACM L@S 2024 – Jisu Kim & Ashok Goel

Ashok Goel in Interactive Computing’s June Newsletter

The Design Intelligence Lab’s Ashok Goel was featured in the Georgia Tech School of Interactive Computing’s June newsletter: ‘Friends’ Inspires AI Tool.

While discussing the advantages of AI personalization in education with Education Week, Goel made the point that AI feedback “might be right for, say, a neurotypical child and maybe not right for a neuroatypical child.”

Four DILab Papers Accepted for Presentation!

Last week, four DILab papers were accepted for presentation at the 20th International Conference on Intelligent Tutoring Systems, to be held in Greece in June 2024!

Jill Watson: VTA-GPT – A Conversational Virtual Teaching Assistant

AUTHORS: Sandeep Kakar, Pratyusha Maiti, Alekhya Nandula, Gina Nguyen, Karan Taneja, Aiden Zhao, Vrinda Nandan and Ashok Goel

SAMI: ABCD: An AI Actor for Fostering Social Interactions in Online Classrooms

AUTHORS: Sandeep Kakar, Rhea Basappa, Ida Camacho, Christopher Griswold, Alex Houk, Chris Leung, Mustafa Tekman, Patrick Westervelt, Qiaosi Wang and Ashok Goel

VERA: A Constructivist Framing of Wheel Spinning: Identifying Unproductive Behaviors with Sequence Analysis

AUTHORS: John Kos, Dinesh Ayyappan and Ashok Goel

Self-Explanation: Social AI Agents Too Need to Explain Themselves

AUTHORS: Rhea Basappa, Mustafa Tekman, Hong Lu, Benjamin Faught, Sandeep Kakar and Ashok Goel

Congratulations to all the authors!

SPECIAL ISSUE: AI Magazine: NSF’s National AI Institutes

On March 19, 2024, AAAI published the Special Issue of AI Magazine on NSF’s National AI Institutes. The issue includes an Introduction by DILab’s Ashok Goel and AI-ALOE’s Chaohua Ou that describes how the 20 articles in the issue are organized.

The Special Issue also includes “AI-ALOE: AI for reskilling, upskilling, and workforce development” by Ashok Goel, Chris Dede, and Chaohua Ou. The article highlights how AI-ALOE is developing models and techniques to make AI assistants usable, learnable, teachable, and scalable.

New Publication: Explanation as Question Answering Based on User Guides

Congratulations to Vrinda Nandan, Spencer Rugaber, and Ashok Goel for the publication of their chapter, “Explanation as Question Answering based on User Guides”, in Explainable Agency in Artificial Intelligence: Research and Practice.

This book focuses on a subtopic of explainable AI (XAI) called explainable agency (EA), which involves producing records of decisions made during an agent’s reasoning, summarizing its behavior in human-accessible terms, and providing answers to questions about specific choices and the reasons for them. We distinguish explainable agency from interpretable machine learning (IML), another branch of XAI that focuses on providing insight (typically, for an ML expert) concerning a learned model and its decisions. In contrast, explainable agency typically involves a broader set of AI-enabled techniques, systems, and stakeholders (e.g., end users), where the explanations provided by EA agents are best evaluated in the context of human subject studies.

The chapters of this book explore the concept of endowing intelligent agents with explainable agency, which is crucial for agents to be trusted by humans in critical domains such as finance, self-driving vehicles, and military operations. This book presents the work of researchers from a variety of perspectives and describes challenges, recent research results, lessons learned from applications, and recommendations for future research directions in EA. The historical perspectives of explainable agency and the importance of interactivity in explainable systems are also discussed. Ultimately, this book aims to contribute to the successful partnership between humans and AI systems.

Chelsea Wang: Theory of Mind in Human-AI Interaction

Congratulations to Qiaosi “Chelsea” Wang on the acceptance of her proposal for a CHI 2024 workshop on Theory of Mind in Human-AI Interaction at the ACM CHI conference!

The CHI Conference on Human Factors in Computing Systems is the premier international conference on Human-Computer Interaction. This year’s conference embraces the theme of Surfing the World, reflecting its focus on pushing forth the wave of cutting-edge technology and riding the tide of new developments in human-computer interaction. The conference serves as a platform for researchers, practitioners, and industry leaders to share their latest work and ideas and to foster collaboration and innovation in the field.

The Frontier of Artificial Intelligence

On November 13, 2023, the U.S. National Science Foundation (NSF) published the latest episode of its Discovery Files podcast.

Joining the podcast are Aarti Singh from the AI Institute for Societal Decision Making; Amy McGovern from the AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography; Ashok Goel from the National AI Institute for Adult Learning and Online Education; Vikram Adve from the Artificial Intelligence for Future Agricultural Resilience, Management, and Sustainability Institute; and Michael Littman, division director for Information and Intelligent Systems in NSF’s Computer and Information Science and Engineering Directorate. They discuss how these institutes will revolutionize the frontiers of AI and how society will benefit from these innovations.

Chelsea Wang: June 8th Symposium on Mental Model in Human-AI Interaction

Our very own Qiaosi Wang (Chelsea) and Dr. Ashok Goel are hosting a symposium on Mental Model in Human-AI Interaction on June 8th, 2023. Check out the details below:

Symposium on Mental Model in Human-AI Interaction

As we interact with increasingly advanced AI systems, different mental models are constructed and adapted to facilitate, drive, and support the success of human-AI interactions.

What kinds of mental models are there? How do we leverage them in human-AI interaction? What are some sociotechnical considerations of mental models? Join us on June 8th for an online symposium on “Mental Model in Human-AI Interaction” to explore these questions.

Date: Thursday, June 8th, 2023

Time: 3pm to 6pm Eastern Time

Location: Zoom (Register here to receive the Zoom meeting link: https://gatech.zoom.us/meeting/register/tJMocOmrqj0uGdRgxezgtqaWESXdLWa–9Mx)

Contact: Chelsea (qswang@gatech.edu)