Manuel Preston De Miranda

Manuel Preston is a graduate student pursuing a Master's in Computer Science with a focus on machine learning. He holds a Bachelor's degree in Mechanical Engineering from the University of Colorado Boulder. Currently, Manuel serves as a graduate research assistant at DILab, where he explores the integration of AI into modern applications for online students. With a background in designing and manufacturing optical systems and software for satellites, Manuel combines his technical expertise with research to advance the application of AI.

Darby Hudnall

Darby Hudnall is a graduate research assistant and student at Georgia Tech, specializing in Interactive Intelligence. She holds a Bachelor of Science in Mechatronics Engineering from UNC Asheville. Darby currently works in industry as a software development engineer in test (SDET). In her free time, she enjoys volunteering with FIRST Robotics and the Society of Women Engineers (SWE). She is especially interested in research applying AI to cognitive science. At DILab, Darby works on the Architecture for Learning (A4L) team.

Samuel Taubman

Sam Taubman is an MSCS student at Georgia Tech specializing in Computing Systems. Currently, Sam works in DILab on the SAMI project under Dr. Goel. Sam holds a B.S. in Chemistry from Appalachian State University, where he worked on drone-based air-sampling methods for oil seeps under Dr. Swarthout. Sam's research interests include using AI to improve learning and applying AI to trading. Outside of research, Sam enjoys cooking and training jiu-jitsu.

Pamin Rangsikunpum

Pamin Rangsikunpum is a graduate student in computer science specializing in machine learning, with interests in natural language processing (NLP), deep learning, and machine learning systems. He is a graduate research assistant in the Design & Intelligence Lab (DILab) at the Georgia Institute of Technology, where he works with the VERA team on the Virtual Ecological Research Assistant project.

SPECIAL ISSUE: AI Magazine: NSF’s National AI Institutes

On March 19, 2024, AAAI published the Special Issue of AI Magazine on NSF's National AI Institutes. The issue includes an introduction by DILab's Ashok Goel and AI-ALOE's Chaohua Ou that describes how the 20 articles in the issue are organized.

The Special Issue also includes "AI-ALOE: AI for reskilling, upskilling, and workforce development" by Ashok Goel, Chris Dede, and Chaohua Ou. The article highlights how AI-ALOE is developing models and techniques to make AI assistants usable, learnable, teachable, and scalable.

New Publication: Explanation as Question Answering Based on User Guides

Congratulations to Vrinda Nandan, Spencer Rugaber, and Ashok Goel on the publication of their chapter, "Explanation as Question Answering Based on User Guides," in Explainable Agency in Artificial Intelligence: Research and Practice.

This book focuses on a subtopic of explainable AI (XAI) called explainable agency (EA), which involves producing records of decisions made during an agent’s reasoning, summarizing its behavior in human-accessible terms, and providing answers to questions about specific choices and the reasons for them. We distinguish explainable agency from interpretable machine learning (IML), another branch of XAI that focuses on providing insight (typically, for an ML expert) concerning a learned model and its decisions. In contrast, explainable agency typically involves a broader set of AI-enabled techniques, systems, and stakeholders (e.g., end users), where the explanations provided by EA agents are best evaluated in the context of human subject studies.

The chapters of this book explore the concept of endowing intelligent agents with explainable agency, which is crucial for agents to be trusted by humans in critical domains such as finance, self-driving vehicles, and military operations. This book presents the work of researchers from a variety of perspectives and describes challenges, recent research results, lessons learned from applications, and recommendations for future research directions in EA. The historical perspectives of explainable agency and the importance of interactivity in explainable systems are also discussed. Ultimately, this book aims to contribute to the successful partnership between humans and AI systems.

Ashok Goel: CogSci 2022

On July 29, Ashok Goel gave a presentation at the Cognitive Diversity Conference at CogSci 2022.

XPrize has selected Georgia Tech's Veritas team as one of the 10 remaining teams in the Digital Learning Challenge

The Veritas team is organized around the Virtual Ecological Research Assistant (VERA) developed by the Design & Intelligence Laboratory in Georgia Tech's College of Computing. VERA is a virtual laboratory in which an AI research assistant enables learners to construct conceptual models of ecological phenomena and run interactive agent-based simulations of those models. It also provides access to the Smithsonian Institution's Encyclopedia of Life, allowing learners to draw on large-scale domain knowledge to explore ecological systems and perform "what if" experiments, either to explain an existing ecological system or to predict the outcomes of changes to one. In addition, VERA enables researchers to conduct A/B experiments and supports them in analyzing the resulting data.

The Veritas team comprises Georgia Tech faculty, staff, and students, including HCC Ph.D. student Sungeun An; OMSCS students Scott Bunin, Willventchy Celestine, and Andrew Hornback; computing undergraduate Stephen Buckley; Research Scientist Vrinda Nandan; Dr. Emily Weigel in the School of Biological Sciences; and Professor Ashok Goel in the School of Interactive Computing. Dr. Spencer Rugaber at Georgia Tech and Dr. Jennifer Hammock at the Smithsonian Institution serve as internal and external advisors, respectively. The XPrize Digital Learning Challenge started with around 300 teams; 10 now remain in the competition.