Qiaosi (Chelsea) Wang Passes PhD Defense

On September 13, 2024, DILab's Qiaosi (Chelsea) Wang passed her PhD dissertation defense! Chelsea defended her thesis on Mutual Theory of Mind for Human-AI Communication, work she also presented at the CHI conference earlier this year. We are incredibly proud of all the effort and dedication Dr. Wang has put into her thesis and look forward to the extraordinary work she will accomplish in the future.

The Chronicle of Higher Education – Morehouse AI TAs

The Chronicle of Higher Education recently published an article on Morehouse College's AI teaching assistant – "a professor's digital mini-me". Within the article, DILab's Jill Watson is briefly compared to Morehouse's AI TA, raising the question: do students learn better with a visual representation of the professor than with just a chatbot? Food for thought, to be sure.

DILab Seeks Research Assistants

The Design Intelligence Lab is looking for a few OMSCS students to work with us as research assistants starting in Fall 2024. DILab conducts research at the intersection of AI, Cognitive Science, and Learning Technology. At present, much of our work focuses on AI for learning and education (https://aialoe.org/).

Ongoing research projects include conversational courseware (for example, Jill Watson for interactive books), interactive videos (Ivy), AI social actors (SAMI), and personalization of learning in systems thinking (VERA). Other projects address self-explanation, machine teaching, and theory of mind. In addition, we are working on a large project called Architecture for Learning (A4L).

We have a long history of working with OMSCS students and have several in our lab at present. We love working with OMSCS students because they are typically very smart, skilled, motivated, and professional. We would like to work with OMSCS students who can commit to at least 10 hours of work a week for at least a year; it takes that kind of effort to make a contribution. This means that candidates should be in the OMSCS program through the end of Summer 2025 or beyond.

The responsibilities typically involve development of AI research software. We would prefer students who have taken at least one course in software engineering and at least one in AI as part of their OMSCS work.

The work can be done remotely. Unfortunately, there is no pay for these positions. However, you may register for credit or even do a three-semester-long research project with me.

If you are interested, please contact my executive assistant Moriah Ugi (mugi3@gatech.edu) with a CV.

DILab Papers Accepted to L@S and EDM

Several Design and Intelligence Lab members recently had papers accepted to workshops associated with EDM (Educational Data Mining) 2024 and L@S (Learning@Scale) 2024, both chaired by DILab's David Joyner and hosted at Georgia Tech's Global Learning Center.

Combining Cognitive and Generative AI for Self-Explanation in Interactive AI Agents

Paper at EDM Workshop on Human-Centric eXplainable AI in Education (HEXED) – Shalini Sushri, Rahul Dass, Rhea Basappa, Hong Lu & Ashok Goel

How Do Students Interact with an LLM-Powered Virtual Teaching Assistant in Different Educational Settings?

Paper at EDM Workshop on Leveraging Large Language Models for Next-Generation Educational Technologies – Pratyusha Maiti & Ashok Goel

Does Jill Watson Enhance Teacher Presence?

Poster Paper at L@S – Robert Lindgren, Sandeep Kakar, Pratyusha Maiti, Karan Taneja & Ashok Goel

Engaging Learnersourcing with an AI Social Agent in Online Learning

Paper at the Workshop on Learnersourcing: Student-Generated Content at ACM L@S 2024 – Jisu Kim & Ashok Goel

Ashok Goel in Interactive Computing’s June Newsletter

The Design Intelligence Lab’s Ashok Goel was featured in Georgia Tech School of Interactive Computing’s June Newsletter: ‘Friends’ Inspires AI Tool.

While discussing the advantages of AI personalization in education with Education Week, Goel makes the point that AI feedback "might be right for, say, a neurotypical child and maybe not right for a neuroatypical child".

Four DILab Papers Accepted for Presentation!

Last week, four DILab team papers were accepted for presentation at the 20th International Conference on Intelligent Tutoring Systems, to be held in Greece in June 2024!

Jill Watson: VTA-GPT – A Conversational Virtual Teaching Assistant

AUTHORS: Sandeep Kakar, Pratyusha Maiti, Alekhya Nandula, Gina Nguyen, Karan Taneja, Aiden Zhao, Vrinda Nandan and Ashok Goel

SAMI: An AI Actor for Fostering Social Interactions in Online Classrooms

AUTHORS: Sandeep Kakar, Rhea Basappa, Ida Camacho, Christopher Griswold, Alex Houk, Chris Leung, Mustafa Tekman, Patrick Westervelt, Qiaosi Wang and Ashok Goel

VERA: A Constructivist Framing of Wheel Spinning: Identifying Unproductive Behaviors with Sequence Analysis

AUTHORS: John Kos, Dinesh Ayyappan and Ashok Goel

Self-Explanation: Social AI Agents Too Need to Explain Themselves

AUTHORS: Rhea Basappa, Mustafa Tekman, Hong Lu, Benjamin Faught, Sandeep Kakar and Ashok Goel

Congratulations to all the authors!

SPECIAL ISSUE: AI Magazine: NSF’s National AI Institutes

On March 19, 2024, AAAI published the Special Issue of AI Magazine on NSF's National AI Institutes. The publication includes an Introduction by DILab's Ashok Goel and AI-ALOE's Chaohua Ou, which describes how the issue's 20 articles are organized.

The Special Issue also includes "AI-ALOE: AI for reskilling, upskilling, and workforce development" by Ashok Goel, Chris Dede, and Chaohua Ou. The article highlights how AI-ALOE is developing models and techniques to make AI assistants usable, learnable, teachable, and scalable.

New Publication: Explanation as Question Answering Based on User Guides

Congratulations to Vrinda Nandan, Spencer Rugaber, and Ashok Goel for the publication of their chapter, “Explanation as Question Answering based on User Guides”, in Explainable Agency in Artificial Intelligence: Research and Practice.

This book focuses on a subtopic of explainable AI (XAI) called explainable agency (EA), which involves producing records of decisions made during an agent’s reasoning, summarizing its behavior in human-accessible terms, and providing answers to questions about specific choices and the reasons for them. We distinguish explainable agency from interpretable machine learning (IML), another branch of XAI that focuses on providing insight (typically, for an ML expert) concerning a learned model and its decisions. In contrast, explainable agency typically involves a broader set of AI-enabled techniques, systems, and stakeholders (e.g., end users), where the explanations provided by EA agents are best evaluated in the context of human subject studies.

The chapters of this book explore the concept of endowing intelligent agents with explainable agency, which is crucial for agents to be trusted by humans in critical domains such as finance, self-driving vehicles, and military operations. This book presents the work of researchers from a variety of perspectives and describes challenges, recent research results, lessons learned from applications, and recommendations for future research directions in EA. The historical perspectives of explainable agency and the importance of interactivity in explainable systems are also discussed. Ultimately, this book aims to contribute to the successful partnership between humans and AI systems.