SPECIAL ISSUE: AI Magazine: NSF’s National AI Institutes

On March 19, 2024, AAAI published the Special Issue of AI Magazine on National AI Institutes. The publication includes an Introduction by DILab’s Ashok Goel and AI-ALOE’s Chaohua Ou that describes how the 20 articles in the issue are organized.

The Special Issue also includes “AI-ALOE: AI for reskilling, upskilling, and workforce development” by Ashok Goel, Chris Dede, and Chaohua Ou. The article highlights how AI-ALOE is developing models and techniques to make AI assistants usable, learnable, teachable, and scalable.

New Publication: Explanation as Question Answering Based on User Guides

Congratulations to Vrinda Nandan, Spencer Rugaber, and Ashok Goel for the publication of their chapter, “Explanation as Question Answering based on User Guides”, in Explainable Agency in Artificial Intelligence: Research and Practice.

This book focuses on a subtopic of explainable AI (XAI) called explainable agency (EA), which involves producing records of decisions made during an agent’s reasoning, summarizing its behavior in human-accessible terms, and providing answers to questions about specific choices and the reasons for them. We distinguish explainable agency from interpretable machine learning (IML), another branch of XAI that focuses on providing insight (typically, for an ML expert) concerning a learned model and its decisions. In contrast, explainable agency typically involves a broader set of AI-enabled techniques, systems, and stakeholders (e.g., end users), where the explanations provided by EA agents are best evaluated in the context of human subject studies.

The chapters of this book explore the concept of endowing intelligent agents with explainable agency, which is crucial for agents to be trusted by humans in critical domains such as finance, self-driving vehicles, and military operations. The book presents the work of researchers from a variety of perspectives and describes challenges, recent research results, lessons learned from applications, and recommendations for future research directions in EA. Historical perspectives on explainable agency and the importance of interactivity in explainable systems are also discussed. Ultimately, this book aims to contribute to the successful partnership between humans and AI systems.
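To make the idea of explainable agency concrete, here is a minimal sketch, written for this post and not taken from the chapter, of an agent that records its decisions during reasoning and answers “why” questions from that record. All class and method names here are hypothetical.

```python
# A minimal, hypothetical sketch of explainable agency: the agent keeps a
# record of its decisions and answers "why" questions from that record.
# Nothing here is taken from the chapter or from any DILab system.

from dataclasses import dataclass, field

@dataclass
class Decision:
    action: str           # what the agent chose to do
    reason: str           # why it chose that action
    alternatives: list    # options it considered and rejected

@dataclass
class ExplainableAgent:
    log: list = field(default_factory=list)

    def decide(self, action, reason, alternatives=()):
        """Record a decision together with its rationale."""
        self.log.append(Decision(action, reason, list(alternatives)))
        return action

    def explain(self, action):
        """Answer 'Why did you do X?' from the decision record."""
        for d in self.log:
            if d.action == action:
                rejected = ", ".join(d.alternatives) or "none"
                return (f"I chose '{d.action}' because {d.reason}; "
                        f"alternatives considered: {rejected}.")
        return f"I have no record of choosing '{action}'."

agent = ExplainableAgent()
agent.decide("brake", "an obstacle was detected ahead",
             alternatives=["swerve left", "maintain speed"])
print(agent.explain("brake"))
# -> I chose 'brake' because an obstacle was detected ahead;
#    alternatives considered: swerve left, maintain speed.
```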

Ashok Goel: CogSci 2022

On July 29, Ashok Goel gave a presentation at the CogSci 2022 Cognitive Diversity Conference.

The presentation is available here:
https://docs.google.com/document/d/1dMG0B71HbJLCblqLA6E9Ys2gBQgMlckb/edit?usp=sharing&ouid=101146215278313978371&rtpof=true&sd=true

XPrize has selected Georgia Tech’s Veritas team as one of the 10 remaining teams in the Digital Learning Challenge

https://www.xprize.org/challenge/digitallearning/competing-teams

The Veritas team is organized around the Virtual Ecological Research Assistant (VERA) developed by the Design & Intelligence Laboratory in Georgia Tech’s College of Computing. VERA is a virtual laboratory in which an AI research assistant helps learners construct conceptual models of ecological phenomena and run interactive agent-based simulations of those models. VERA also provides access to the Smithsonian Institution’s Encyclopedia of Life, allowing learners to draw on large-scale domain knowledge as they explore ecological systems and perform “what if” experiments, either to explain an existing ecological system or to predict the outcomes of changes to one. In addition, VERA enables researchers to conduct A/B experiments and supports them in analyzing the resulting data.
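To give a flavor of what such a “what if” experiment looks like, here is a minimal sketch of an agent-based predator-prey simulation; the species, rules, and parameters are illustrative assumptions and are not taken from VERA.

```python
# A minimal, hypothetical sketch of an agent-based "what if" experiment of
# the kind VERA supports; the rules and parameters below are illustrative
# and are not taken from VERA.

import random

def step(prey, predators, prey_birth=0.10, catch=0.002, pred_death=0.10):
    """Advance one time step by sampling an event for each individual agent."""
    births = sum(1 for _ in range(prey) if random.random() < prey_birth)
    eaten = sum(1 for _ in range(predators)
                if random.random() < min(1.0, catch * prey))
    offspring = sum(1 for _ in range(eaten) if random.random() < 0.5)
    deaths = sum(1 for _ in range(predators) if random.random() < pred_death)
    return max(0, prey + births - eaten), max(0, predators + offspring - deaths)

def simulate(prey=200, predators=20, steps=50, **rules):
    """Run the simulation and return the final population sizes."""
    random.seed(0)  # fixed seed so the two runs below are comparable
    for _ in range(steps):
        prey, predators = step(prey, predators, **rules)
    return prey, predators

# "What if" experiment: what happens if a drought halves the prey birth rate?
print("baseline:", simulate())
print("drought :", simulate(prey_birth=0.05))
```

Comparing the two runs follows the pattern of a “what if” experiment: change one assumption in the model, re-run the simulation, and observe the predicted outcome.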

The Veritas team comprises Georgia Tech faculty, staff, and students, including HCC Ph.D. student Sungeun An; OMSCS students Scott Bunin, Willventchy Celestine, and Andrew Hornback; computing undergraduate student Stephen Buckley; Research Scientist Vrinda Nandan; Dr. Emily Weigel in the School of Biological Sciences; and Professor Ashok Goel in the School of Interactive Computing. Dr. Spencer Rugaber at Georgia Tech and Dr. Jennifer Hammock at the Smithsonian Institution serve as internal and external advisors, respectively. The XPrize Digital Learning Challenge started with around 300 teams; 10 teams now remain in the competition.

Get Involved

Date: Mar 11, 2021

Job Description: This is a full-time post-doctoral Research Scientist I position in the School of Interactive Computing at Georgia Tech. The Research Scientist will focus on designing and developing AI techniques and tools for supporting online and blended learning, and on deploying and evaluating them at scale within Georgia Tech. We are especially interested in further developing, deploying, and analyzing extant AI technologies for intelligent tutors, conversational agents, question answering, social interactions, virtual laboratories, and virtual librarians. In particular, the Research Scientist will work with faculty and students in Georgia Tech’s Design & Intelligence Laboratory to help develop these technologies, and with faculty and students across Georgia Tech to deploy and evaluate them.
This position requires expertise in AI as well as in human learning and education. It also requires skills in software development, project management, data collection and analysis, and documentation, as well as the ability to work well with people.

Contact: Professor Ashok Goel at goel@cc.gatech.edu

PAIP-Python

Python implementations of some of the classic AI programs from Peter Norvig’s fantastic textbook “Paradigms of Artificial Intelligence Programming.”

Visit the main PAIP-Python site here.

This is meant to be a learning resource for beginning AI programmers. Although PAIP is a fantastic book, students today rarely have a background in Lisp, as many universities have replaced it with other languages in introductory programming and introductory artificial intelligence courses. My hope is that making the programs from PAIP available in a commonly taught language will give beginning AI students a useful hands-on resource.
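To give a sense of what these translations look like, here is a small, self-contained sketch in the spirit of PAIP’s ELIZA pattern matcher, rewritten in plain Python. It was written for this post, not taken from the PAIP-Python repository, and unlike the full PAIP matcher it binds a variable to a single word rather than handling multi-word segment variables.

```python
# A tiny ELIZA-style pattern matcher in the spirit of PAIP; written for
# illustration here, not taken from the PAIP-Python repository.

def match(pattern, sentence, bindings=None):
    """Match a pattern like "i feel ?x" word by word, binding ?variables."""
    bindings = {} if bindings is None else bindings
    if not pattern and not sentence:
        return bindings
    if not pattern or not sentence:
        return None
    first, rest = pattern[0], pattern[1:]
    if first.startswith("?"):                    # a variable binds one word
        return match(rest, sentence[1:], {**bindings, first: sentence[0]})
    if first == sentence[0]:                     # literals must match exactly
        return match(rest, sentence[1:], bindings)
    return None

RULES = [
    ("i feel ?x".split(), "Why do you feel {?x}?"),
    ("i want ?x".split(), "What would it mean to you to get {?x}?"),
]

def respond(text):
    """Return the response for the first rule whose pattern matches."""
    words = text.lower().split()
    for pattern, template in RULES:
        bindings = match(pattern, words)
        if bindings is not None:
            reply = template
            for var, word in bindings.items():
                reply = reply.replace("{" + var + "}", word)
            return reply
    return "Please tell me more."

print(respond("I feel happy"))  # -> Why do you feel happy?
```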

These programs were written by Daniel Connelly at Georgia Tech as an independent project supervised by Professor Ashok Goel.