Blog

DILab at UNESCO’s Digital Learning Week

The Design Intelligence Lab is thrilled to announce that one of its projects, Jill Watson, was selected for presentation at UNESCO’s Digital Learning Week, held from September 2-5, 2024 at UNESCO Headquarters in Paris, France. On September 4th, DILab member Pratyusha Maiti showcased Jill Watson during a breakout session on system-level and teacher-facing generative AI tools, moderated by Mr. Saurabh Roy, Senior Project Officer, Section for Teacher Development at UNESCO.

The presentation highlighted the innovative deployment of Jill Watson, a virtual teaching assistant powered by OpenAI’s ChatGPT (GPT-3.5 Turbo). Designed for learners in higher education, technical and vocational training, and adult online education, Jill Watson addresses key challenges such as self-directed learning and the need for a strong teaching presence in online environments. The tool integrates with Learning Management Systems (LMS) and employs retrieval-augmented generation (RAG) to provide accurate, contextually relevant responses grounded in course materials. Jill has been deployed across several courses and institutions, providing an engaging learning environment for learners of diverse demographics, fostering deeper student engagement, supporting independent learning, and enhancing the overall educational experience.
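For readers curious how such a pipeline fits together, here is a minimal sketch of the general RAG pattern described above: embed course-material chunks, retrieve the chunks most similar to a student’s question, and ground the model’s answer in that retrieved context. The embedding function, example chunks, and prompt wording are illustrative assumptions, not the actual Jill Watson implementation; a real deployment would call a hosted embedding model and chat model.

```python
# Sketch of a retrieval-augmented generation (RAG) pipeline over course materials.
# All names and the toy embedding are assumptions for illustration only.
import math
from typing import Callable, List, Tuple

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: List[str],
             embed: Callable[[str], List[float]], k: int = 3) -> List[str]:
    """Return the k course-material chunks most similar to the question."""
    q_vec = embed(question)
    scored: List[Tuple[float, str]] = [(cosine(q_vec, embed(c)), c) for c in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [c for _, c in scored[:k]]

def build_prompt(question: str, context: List[str]) -> str:
    """Ground the answer in retrieved course material, as RAG systems do."""
    return ("Answer the student's question using only the course material below. "
            "If the material does not contain the answer, say so.\n\n"
            "Course material:\n" + "\n\n".join(context) +
            "\n\nQuestion: " + question)

# Toy character-frequency embedding, standing in for a real embedding model.
def toy_embed(text: str) -> List[float]:
    return [float(text.lower().count(c)) for c in "abcdefghijklmnopqrstuvwxyz"]

chunks = ["Gradient descent updates weights in the direction of the negative loss gradient.",
          "Office hours are held on Tuesdays at 3pm in the course forum."]
question = "How does gradient descent update the weights?"
print(build_prompt(question, retrieve(question, chunks, toy_embed, k=1)))
```

The prompt produced this way would then be sent to a chat model, and the response posted back to the LMS discussion thread the question came from.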

Discussions at the Digital Learning Week explored how digital tools can align with climate-friendly educational practices, emphasizing a “twin transition” towards greener, more human-centered learning environments. Sessions also covered global AI regulations, with contrasting perspectives from the US and the EU, and the development of AI competency frameworks aimed at fostering critical thinking, inclusivity, and collaboration among learners and educators. The event also spotlighted challenges, including connectivity gaps, cultural shifts in education, and the ethical considerations surrounding AI’s use in marginalized communities. UNESCO’s ongoing efforts, like the Gateways Initiative, highlighted the global push to ensure equitable access to digital learning and sustainable educational practices.


ToMinHAI 2024: 1st Workshop on Theory of Mind in Human-AI Interaction

by Qiaosi (Chelsea) Wang (Georgia Institute of Technology, US), Sarah Walsh (Georgia Institute of Technology, US), Mei Si (Rensselaer Polytechnic Institute, US), Jeffrey O. Kephart (IBM Research AI, US), Justin Weisz (IBM Research AI, US), Ashok Goel (Georgia Institute of Technology, US)

“How does trust build between people? It is an offering and a receiving… It is the reaching out between people, laughing at the same moment. It is building a model of the other person inside yourself, placing them in the palm of your hand, rotating them and saying: Yes, I see the flaws… And it is saying: I would rather trust you than be alone.” — Naomi Alderman, The Future

In psychology, Theory of Mind (ToM) refers to people’s ability to attribute mental states such as knowledge, emotions, goals, and beliefs to themselves and others. ToM helps us understand that these mental states may differ from our own. Given the increasing popularity of large language model (LLM)-based conversational agents, this concept has become highly relevant in both the human-computer interaction and machine learning communities.

Theory of Mind plays a fundamentally important role in human social interactions, and many researchers have been working on methods to equip AI with an equivalent capability. The hope is to build highly social, intelligent AI agents that can communicate with people on their level. Simultaneously, researchers are interested in how people perceive conversational AI systems and in their tendencies to attribute mental states, emotions, and intentions to them. These two perspectives on theory of mind are forming an emerging paradigm of Mutual Theory of Mind (MToM) in human-AI interaction, in which the human and the AI each possess some level of ToM-like capabilities.

We recently held the first workshop on Theory of Mind in Human-AI Interaction at CHI 2024 to bring together different perspectives on ToM in human-AI interaction and define a unifying research agenda on the human-centered development of Mutual Theory of Mind (MToM). Our workshop focused on three core questions:

  • How to design and build a ToM-like capability for AI systems?
  • How to understand and shape people’s mental models of AI?
  • What are the consequences of building mutual theory of mind in human-AI interaction?

These questions were addressed across 15 papers presented in three sessions, as well as in 8 posters, a panel discussion, and a group activity to identify grand challenges in Theory of Mind for human-AI interaction.

Challenges, Opportunities, and Directions of Theory of Mind in HAI (Paper Session I)

Although Theory of Mind (ToM) is a well-known idea in the cognitive and social sciences, its use as a theoretical lens to study human-AI interaction is not yet well established. Human-centered AI researchers are still developing a shared understanding of foundational issues such as what precisely is a “Theory of Mind” in the context of AI systems and what difference it makes in the design of human-AI interactions. The first session comprised five papers that addressed some of the foundational issues in Theory of Mind in human-AI interaction:

Pafla et al. consider three accounts of ToM common in the cognitive and social sciences:

  • Theory Theory, where humans develop a theory of mind to understand the mental states of others,
  • Simulation Theory, where humans imagine themselves to be in the situation of the other when understanding their mental states, and
  • Perception Theory, where perception directly indicates the mental states of others without requiring inference.

They note the contradictions among these three accounts and ask for a resolution. They align their paper with Perception Theory, but to resolve the contradictions, they extend it to include “smart” perception in which the mind constructs the subjective experience of mental states based on context. The authors argue that this extended “smart” Perception Theory has four major implications: negotiation of reference between agents, emergence of ToM from social interactions rather than pre-definition, a focus on non-conceptual intentions, and flexible representations of objects. It is interesting to note that this characterization of ToM questions the notion of ground truth in AI because instead of being pre-defined, truth now emerges out of social interactions.

Wang and Goel propose a framework for the development of a mutual theory of mind within human-AI communication. This framework emphasizes the importance of both the content of each party’s models and the processes by which those models are constructed and recognized.

Regarding content, a mutual theory of mind is characterized by three elements:

  • interpretation, in which humans and AI agents each construct and revise their interpretations of one another based on feedback from the other party,
  • feedback, in which each party conveys signals that reflect its interpretation of the other, and
  • mutuality, in which humans and AI agents mutually shape each other’s interpretations through feedback.

Regarding process, the framework suggests that a mutual theory of mind develops in three stages:

  • AI’s Construction of its ToM,
  • User’s Recognition of AI’s ToM, and
  • AI’s Revision of its ToM.

The three stages of development of MToM: Construction, Recognition, and Revision. Figure reproduced from Wang & Goel (2024).

Throughout these three stages, the three content elements of MToM — interpretation, feedback, and mutuality — interact with each other to shape the communications between the human and the AI. In their paper, Wang and Goel briefly describe two empirical studies pertaining to the construction and recognition stages. They conclude by proposing a research agenda for investigating the three stages in the development of MToM.

Ackerman and Shihadeh take a provocative stance that large AI models, such as large language models (LLMs), can be viewed as “a portal to humanity’s collective mind,” reflecting humanity’s collective unconscious, including its fallacies and biases. They posit that large AI models are a manifestation of Carl Jung’s notion of the “collective unconscious,” the idea that humanity shares a collective psyche that encompasses our highest virtues and our deepest prejudices. Ackerman and Shihadeh argue that framing large AI models as representations of our collective unconscious offers us the opportunity to reflect upon the “darker aspects” of our collective society and of ourselves. By framing human-AI interaction as engaging with humanity’s collective mind, individuals might recognize that the biases seen in the AI are also found in themselves. This perspective may foster deeper emotional engagement by encouraging individuals to view the computer not as a machine or a human but as a vessel for humanity’s collective unconscious.

In her paper, Street asserts that if LLMs possess a ToM, or if they acquire one, the potential impact could be profound, especially in regard to the alignment problem — how we design and deploy AI systems that behave in accordance with human values. She suggests that at the level of individual users, LLMs with ToM could support human-AI interaction in three ways:

  • Goal specification, by taking the potentially ambiguous goals of humans and defining them in a way that an AI system can achieve them,
  • Conversational adaptation, by tailoring what an AI system says and how it says it on the basis of the inferred mental states of its human interlocutors, and
  • Empathy and anthropomorphism, by facilitating a deeper understanding of users and providing more empathetic responses to them.

Looking beyond individual interactions, LLMs with ToM may facilitate group interactions as well through:

  • Collective alignment, by aligning LLM outputs with social values,
  • Fostering cooperation and competition among humans within the group, and
  • Assisting groups with moral reasoning and collective decision making.

Street suggests that research agendas for LLMs with a ToM be inclusive of interactions at both the individual and group levels.

Finally, Weisz et al. present three design fictions that probe the potential consequences of operationalizing an MToM between human users and one or more AI agents. The first design fiction explores a utopian vision of an operationalized MToM in which an AI agent is capable of learning about a user and predicting their behavior. This story highlights the beneficial outcomes that MToM may bring to workers within an organization and how MToM might shape the future of work:

  • helping us identify and focus on the tasks we truly enjoy,
  • providing a buffer from coworkers to improve our ability to achieve and maintain flow,
  • proactively filling in knowledge gaps,
  • improving social connectedness, and
  • helping us focus on our higher-level work goals.

The second and third design fictions investigate dystopian visions of operationalizing MToM. In the second fiction, a human interacts with a collection of bots, where each bot constructs its own user model of the human based on the purpose of the particular bot. This story considers cases in which bots with different user models make unclear transitions and exchange incomplete information about the human and their needs. In the third fiction, the AI’s model of the human user is so good that the human comes to completely rely on the AI agent and then applies the AI to a domain with which it is unfamiliar. Weisz et al. conclude their paper with a discussion of several research issues the above design fictions raise, including the need for predictive models of human users and the importance of explanations for helping users calibrate their trust with AI agents.

Theory of Mind in Human-AI Collaboration (Paper Session II)

An agent is an effective collaborator if, through its interaction with humans, it improves the speed with which human objectives are attained or the degree to which they are realized. The papers in this session addressed how an AI agent could infer a person’s intent or mental state and leverage that understanding in a helpful way. These papers offered techniques for agents to infer human intents through a combination of observed actions, utterances, and physical manipulations. They also examined methods for agents to communicate their belief states about humans and examined how that affects the human’s performance on a task. Finally, one paper offered practical heuristics and algorithms for improving the feasibility of real-time collaboration given computational constraints.

Although the papers are listed above in the order of presentation, we begin our synopsis with the work of Shi et al., which focused on inferring intent by observing a series of actions in an environment. They demonstrated an approach that combines the ability of a deep neural network (DNN) to handle complex action spaces with the ability of Bayesian methods to represent and cope with uncertainty. As shown below, they trained a DNN to take a sequence of human actions and produce a probability distribution over possible next actions. They then fed those distributions to a separate Bayesian model to generate a probability distribution over human intentions.

Figure reproduced from Shi et al. (2024)

Shi et al. tested their technique on two data sets and found that, in most cases, the true intent was identified as the most likely one, especially as the length of the observed action sequence was increased.
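The two-stage idea, in which a learned model scores observed actions and a Bayesian model turns those observations into a posterior over intentions, can be illustrated with a small sketch. The likelihood table below is a hand-written stand-in for the paper’s trained DNN, and the intentions and actions are invented for illustration.

```python
# Toy Bayesian intent inference in the spirit of Shi et al.'s two-stage approach.
# The likelihood table is an assumption standing in for a trained neural network.

# P(action | intention), hypothetical values for two intentions.
likelihood = {
    "make_coffee":   {"grab_mug": 0.5, "open_fridge": 0.1, "boil_water": 0.4},
    "make_sandwich": {"grab_mug": 0.05, "open_fridge": 0.6, "grab_bread": 0.35},
}

def update_posterior(prior, observed_actions, eps=1e-6):
    """Sequentially apply Bayes' rule over an observed action sequence."""
    posterior = dict(prior)
    for action in observed_actions:
        for intent in posterior:
            posterior[intent] *= likelihood[intent].get(action, eps)
        total = sum(posterior.values())
        posterior = {i: p / total for i, p in posterior.items()}
    return posterior

prior = {"make_coffee": 0.5, "make_sandwich": 0.5}
print(update_posterior(prior, ["open_fridge"]))
print(update_posterior(prior, ["open_fridge", "grab_bread"]))
```

Longer observation sequences sharpen the posterior, which mirrors the paper’s finding that the true intent becomes the most likely one as more actions are observed.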

Zhi-Xuan et al. explored the idea of predicting human intent even further by considering both actions and linguistic utterances. Their technique, called Cooperative Language-Guided Inverse Plan Search (CLIPS), models humans as cooperative planners who communicate joint plans through language that may be somewhat ambiguous in nature. The authors evaluated their technique in a “Doors, Keys & Gems” environment in which a human and an AI agent (depicted as a robot) move through a maze to collect gems. The human is able to delegate tasks to the robot, such as fetching the differently-colored keys to open the doors.

In the example below, when the agent only observes the movement of the human’s figure (depicted with arrows within the tiles), it might infer that the human is moving toward the blue door, concluding that it should fetch the blue key. However, when the human says “Can you pass me the red key?,” the agent is able to combine this instruction with its observation of the human’s movement and infer that the human’s plan is to open the red door.

Figure reproduced from Zhi-Xuan et al. (2024)

The authors demonstrated through ablation studies that combining actions with utterances is far more accurate in predicting the human’s plans than using either modality alone. Interestingly, CLIPS even outperformed human observers who attempted to infer a player’s intent from their actions and utterances.
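The advantage of combining modalities can be sketched as a simple Bayesian fusion of two evidence sources. The probability values below are illustrative assumptions rather than CLIPS’s planner-based and language-based likelihoods, but they show how ambiguous movement plus a clear utterance yields a confident joint inference, as in the example above.

```python
# Toy fusion of action evidence and utterance evidence into a posterior over goals.
# All probability values are illustrative assumptions.

goals = ["open_red_door", "open_blue_door"]
prior = {g: 1.0 / len(goals) for g in goals}

# P(observed movement | goal): movement alone is ambiguous, slightly favoring blue.
p_actions = {"open_red_door": 0.4, "open_blue_door": 0.6}

# P("Can you pass me the red key?" | goal): the utterance strongly favors red.
p_utterance = {"open_red_door": 0.9, "open_blue_door": 0.1}

def fuse(prior, *likelihoods):
    """Multiply the prior by each likelihood and renormalize (Bayes' rule)."""
    post = dict(prior)
    for lik in likelihoods:
        for g in post:
            post[g] *= lik[g]
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

print(fuse(prior, p_actions))               # ambiguous from actions alone
print(fuse(prior, p_actions, p_utterance))  # the utterance resolves the ambiguity
```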

AI agents can also proactively initiate communications to improve human-agent collaboration. Ying and Gajos ran a study using a simulated grocery shopping scenario where participants were asked to shop for ingredients to make a specific recipe, and an AI agent observed the ingredients selected by the participant and inferred what recipe the participant was attempting to make. Their results showed that when the agent conveyed its belief about the participant’s goal and how confident it was in that belief, participants were more likely to accept the agent’s ingredient recommendations. Thus, when the agent was transparent about its beliefs, participants were able to calibrate their trust in its recommendations. The authors examined two different ways for the agent to communicate its beliefs: a “Show” approach in which the assistant conveyed its belief using words (e.g. “I’m certain that you are shopping for chicken soup”) and a “Tell” approach in which the assistant conveyed its belief using numbers (e.g. “Chicken soup, confidence 80%”). The authors reported that both forms led to improved trust calibration, although the “Tell” approach also reduced the amount of time to complete the shopping task.

Another way to convey one’s mental state is through the physical manipulation of objects. Sidji et al. conducted studies of people engaging in Hanabi, a cooperative game involving partial information and restricted communication. The authors used eye-tracking footage of people playing Hanabi and found that players externalized their intentions and beliefs by rotating, reordering, and reconfiguring the cards held in their hand. Thus, physical orientation created a side-channel to convey one’s beliefs to other players (and to one’s self, as a memory aid). This work raises an interesting prospect for multimodal agents (especially robots that have a physical body) to observe and interpret non-verbal human actions as an additional mode of communication.

An important practical question is whether techniques used to infer and act upon human intent can be applied in real time in complex real-world environments. Schröder and Kopp pointed out that approaches such as LLMs and Bayesian Theory of Mind (BToM) are too slow to be viable in online settings. Instead, they proposed an approach that entails action-driven and resource-sensitive inference of a human’s mental state. In this approach, an AI agent has access to a variety of BToM models of varied cost and fidelity, and strategically switches amongst the models based on computational availability and time sensitivity. They also focus the models to consider only those potential actions that are most likely to be relevant on the basis of spatial proximity or shared artifacts. To test these ideas, the authors are developing an agent architecture that supports human-agent collaboration in a multi-player video game called “Overcooked!”, in which each player controls a chef and attempts to collaboratively prepare meals in response to incoming orders.
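A minimal sketch of this resource-sensitive strategy appears below, assuming the agent keeps a small library of mental-state inference models with rough cost and fidelity estimates and picks the best one that fits the current time budget. The model names, costs, and fidelities are illustrative, not taken from the paper.

```python
# Sketch of switching among Theory of Mind models based on a time budget.
# Model names, costs, and fidelity scores are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ToMModel:
    name: str
    cost_ms: float                         # estimated inference time
    fidelity: float                        # rough accuracy score
    infer: Callable[[Dict], Dict]          # observation -> inferred mental state

def choose_model(models: List[ToMModel], budget_ms: float) -> ToMModel:
    """Pick the highest-fidelity model whose estimated cost fits the budget."""
    affordable = [m for m in models if m.cost_ms <= budget_ms]
    if not affordable:                     # fall back to the cheapest model
        return min(models, key=lambda m: m.cost_ms)
    return max(affordable, key=lambda m: m.fidelity)

models = [
    ToMModel("heuristic",  1,   0.5, lambda obs: {"goal": "nearest task"}),
    ToMModel("btom_small", 50,  0.7, lambda obs: {"goal": "inferred (coarse)"}),
    ToMModel("btom_full",  500, 0.9, lambda obs: {"goal": "inferred (detailed)"}),
]

print(choose_model(models, budget_ms=80).name)    # -> btom_small
print(choose_model(models, budget_ms=1000).name)  # -> btom_full
```

In the paper’s framing, the candidate action space would also be pruned by spatial proximity or shared artifacts before the chosen model runs.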

Theory of Mind for Socially Intelligent AI (Paper Session III)

Theory of Mind (ToM) refers to attributing mental states such as intentions, goals, emotions, and beliefs to oneself and others. This capability is essential for understanding and predicting behavior in social interactions. In the context of AI, ToM enables machines to comprehend, predict, and respond to the mental states of humans and other AI agents, thereby enhancing their social intelligence. The five papers in Session III collectively explored applications of ToM in human-AI interaction. They examined how AI systems can be designed to better understand and respond to human mental states, thereby improving communication, collaboration, and personalized support during learning, life transitions, and everyday activities. Additionally, the papers touched upon the broader implications of artificial minds and the role of ToM in shaping interactions between children and social interactive agents.

Doll and Si delve into the significance of rapport in human-AI interactions, emphasizing the integration of Theory of Mind (ToM) to improve these interactions. The paper argues that AI systems equipped with ToM not only foster better communication but also enhance collaborative efforts. Through a comprehensive review of both human-human and human-AI relationship studies, the authors highlight how empathetic and adaptive responses from AI can establish a deeper connection and trust with users. Rapport is crucial not only in personal and professional domains but also in educational settings, where AI’s understanding of human emotions and intentions plays a pivotal role in facilitating effective interactions and achieving shared goals in mixed human-AI teams.

Wang, Li, Zhou, and Goel explore how Theory of Mind (ToM) can be used to personalize AI assistance during significant life transitions such as career changes or retirement. The paper examines the integration of ToM capabilities in AI to understand and support individuals’ emotional and psychological needs during these periods. It highlights the expansive potential of AI to infer mental states and align its operations with users’ needs, offering more natural and intuitive interactions. Despite this potential, the paper stresses the fragmentation in current applications and the necessity for a comprehensive exploration to guide AI development that genuinely understands and responds to the intricate web of human mental states.

Asthana and Collins-Thompson present a literature review focused on applying Theory of Mind (ToM) to educational settings, with a specific emphasis on generative AI. They discuss how AI can be designed to better understand and adapt to students’ learning styles and mental states, thereby improving educational outcomes. Their review emphasizes the potential of generative AI to effectively diagnose and address students’ knowledge gaps. Enhanced ToM capabilities in AI could significantly improve personalized learning experiences, offering dynamic and responsive educational support tailored to individual students’ cognitive and emotional needs. The paper also highlights future opportunities for developing AI tools that can perform complex educational tasks with greater autonomy and adaptability.

The presentation “Weaving a Theory of Artificial Minds” by Bharadwaj and Dubé expands on the concept of a Theory of Artificial Minds (ToAM), drawing from extensive literature on children’s interactions with AI and robots. Studies consistently show that children anthropomorphize AI systems by attributing human-like emotions, intentions, and personalities to them, especially when these systems have human-like voices and interactive features. Younger children tend to view AI systems as “alive” or possessing life-like qualities, though this perception diminishes as they grow older and their understanding of biological life becomes more sophisticated. According to both Media Equation Theory and the Computers as Social Actors (CASA) paradigm, people, including children, treat computers and AI as social entities and respond to them as they would to humans. Many studies have observed children showing emotional and social engagement with AI that exhibits social cues. Further, some studies suggest that children perceive AI systems as a distinct ontological category, neither fully human nor fully machine, indicating that engagement levels vary based on how children categorize the AI entities they interact with. These studies highlight the importance of Theory of Mind (ToM), as children with more advanced ToM skills are better able to navigate and utilize AI systems effectively. Based on their literature review, Bharadwaj and Dubé identify the need for a dedicated theoretical model of how humans understand AI systems. Their proposed Theory of Artificial Minds (ToAM) framework seeks to integrate insights across various fields to explain how children and adults perceive and interact with AI and guide the development of AI systems that are more attuned to human social norms and expectations.

Finally, Arbelo et al. examine young children’s interactions with Social Interactive Agents (SIAs) in educational settings, including Alexa, MIKO, and a conversational virtual assistant they developed called Puntal. Utilizing a qualitative case study approach, the researchers observed how these technologies influenced the learning and engagement of early childhood education students from two schools in Tenerife, Spain. They found that Puntal was particularly effective due to its versatile functionality, including an automatic translation feature that facilitated inclusive communication in multicultural classrooms. Their study highlights the significant potential of SIAs to enhance educational experiences by providing interactive and tailored educational support.

Alexa, Puntal and MIKO. Figure reproduced from Arbelo et al. (2024).

Poster Session

The poster session consisted of preliminary research efforts that spanned all of the topics of the workshop. Six posters were presented in this session, with authors presenting their work in 90-second lightning talks.

Rogers and Scott presented their ideas on how predictive brain theory explains how humans and AI agents may establish a mutual theory of mind. Predictive brain theory posits that the brain aims to minimize the surprises it encounters. When a person (or AI agent) makes an error in predicting the behavior of another entity, they may either update their model of the world or take an action so that the world better fits their model. They offered a small case study showing how the two possibilities may manifest in a human-AI interaction when a predictive error is made. In the first case, a ToM-enabled model may ask the user follow-up questions to better understand their beliefs and goals. In the second, the AI agent asserts that its model of the user is accurate, which comes across as arrogant and undermines the user’s own understanding of themselves.

Otenen discussed the critical role of developing Mutual Theory of Mind in memory-aiding technologies. She outlined three design directions:

  1. The need to understand a human’s theory of mind of AI, as it influences the content and length of the human-AI interaction;
  2. The need to design human-AI interactions for people of varying levels of ToM to make AI more accessible and equitable; and
  3. The need for understanding theory of mind in AI systems and how such systems can dynamically adapt to a user’s emotional needs.

Wester et al. identified how self-presentation styles impact responses from an LLM. They examined several ways a human could formulate a request for help with a homework problem, with the LLM providing differing levels of detail to each request.

Several posters focused on ToM in human-AI collaboration. Hirschmann et al. explored the potential of an AI ToM capability to enhance human-robot collaboration in a cooperative game scenario. They examined how the complexity of AI decision-making and the minimization of ambiguity impacted collaboration. They found that agents that used simpler decision-making strategies but sought to minimize ambiguity matched or exceeded agents that used more complex decision-making logic. These results highlight the importance of AI agents making their behaviors clear and unambiguous to human collaborators.

Tsirtsis et al. examined how people assign responsibility to a human vs. an AI agent in a semi-autonomous driving environment. They proposed a model of responsibility that considers the expectedness of an agent’s action and the consequences of alternative actions.

Narayanan and Feigh identified factors that influence shared mental models in human-AI teams, with a focus on contexts in which decisions are made in a chain-of-command fashion. Through several illustrative examples of search and rescue missions conducted by human-AI teams, they emphasize the importance of studying the influence of decision hierarchies on the elicitation, development, and maintenance of shared mental models to achieve effective and optimal human-AI team performance.

Invited Panel on Theory of Mind in Human-AI Interaction

After the paper and poster sessions that showcased theoretical and empirical work on Theory of Mind in Human-AI interaction, we had a wonderful panel that brought everything together and sought to answer one key question:

What is the role of Theory of Mind in Human-AI interaction?

Our panel featured researchers from academia and industry:

  • Yvonne Rogers, Professor of Interaction Design at University College London,
  • Tanya Kraljic, Staff UX Researcher at Google Research, and
  • Justin Weisz, our fellow workshop organizer and Research Manager and Senior Research Scientist at IBM Research.

The panel was moderated by Qiaosi (Chelsea) Wang, the lead organizer of our workshop.

During the discussion, the panelists highlighted the various roles that Theory of Mind could play within human-AI interaction. Yvonne raised the idea of using Theory of Mind to help inform the design of human-AI interaction. Decades of research in psychology have examined the role that Theory of Mind plays in human-human interactions. Yvonne challenged us to think about how this body of work could be extended to shape the design of human-AI interactions. Justin and Tanya echoed this point and suggested Theory of Mind as a means to an end rather than the end itself. In addition, they argued that the concept of “Theory of Mind” manifests differently across different academic disciplines. For example, HCI researchers often talk about “mental models,” whereas AI & machine learning researchers talk about “user modeling.” The panelists reflected on whether these differences in language use were getting in the way of establishing common ground, and they also advocated for focusing more on how the concept of “Theory of Mind” — no matter how it is labeled — can be used to enhance human-AI interactions.

The next topic the panelists discussed was how Theory of Mind could be used to inform the design of AI tools and products in human-AI communication. Justin drew on his experiences as a father of young children: he has an understanding of the words his children do and do not yet have in their vocabulary, and he uses it to introduce new words they don’t yet know. He argued that AI systems equipped with a ToM can have similar models of an individual’s knowledge, enabling them to communicate with users at an individual level. For example, a software engineer could ask a ToM-equipped AI assistant to explain a piece of source code, and that assistant would be able to produce an explanation at their level of comprehension. On a similar note, Tanya then talked about how equipping AI with ToM could help people interact more successfully with AI assistants. She pointed out that given the current design of AI assistants with opaque working mechanisms, the responsibility is put on users to figure out what the AI assistants can do. Often, when an assistant does not provide good-quality responses, some of the blame lies in how the user prompted it. Tanya suggested that ToM could be a powerful lens for enabling AI systems to explain their working mechanisms in ways tailored to individual users, helping people interact with them more successfully.

The final topic discussed by the panelists was the role of ToM in designing responsible and human-centered AI systems. Tanya pointed out that many AI technologies claim to have Theory of Mind capability when they actually do not. This type of claim only confuses people and leads to very high expectations of AI technology. Tanya suggested that it is part of our responsibility to be transparent about AI systems’ true capabilities. Yvonne and Justin also highlighted that limiting AI’s ToM capabilities in certain dimensions or its application contexts could be helpful in designing responsible AI systems. For example, Yvonne pointed out that we should question when ToM capabilities are or are not appropriate and useful. She also highlighted how the purpose of ToM can go beyond personalization, such as how a ToM-equipped AI could help people reflect and think about themselves and their own behaviors. Justin pointed out that the contexts in which we design ToM-enabled AI systems matter: in cooperative scenarios, a ToM-equipped AI may be beneficial to human-AI collaboration, but in competitive scenarios, a ToM-equipped AI may be able to take unfair advantage of people.

Our panel discussion highlighted how we are at an exciting and unpredictable time in history regarding the rapid development of AI technologies. As HCI researchers, it is imperative that we continue striving toward designing responsible and human-centered ways of incorporating Theory of Mind into the design of human-AI interactions.

Grand Challenges in Theory of Mind for Human-AI Interaction

We conducted a group activity in the afternoon to identify and address grand challenges in Mutual Theory of Mind. Our goal was to identify ambitious, but achievable, challenges that could be used to demonstrate or evaluate MToM. This activity engaged our workshop attendees by focusing their discussion on important technical, societal, and ethical challenges that accompany the development of Mutual Theory of Mind.

The first group activity focused on brainstorming challenges facing the field of Mutual Theory of Mind in human-AI interaction research. Small working groups generated a myriad of ideas, which they then clustered and summarized to the larger group. Many of these challenges were phrased as questions, such as:

  • What are the ethical implications of AI systems having ToM?
  • How should AI ToM be communicated to human users?

Some proposals focused on identifying specific development challenges:

  • Non-anthropomorphic ToM challenge: Can AI have a different type of theory of mind from us?
  • The Deception Challenge: Create an AI that predicts or detects human deception (a benchmark, a goal)

Many philosophical and ethical questions were raised in this session, including:

What are the ethical implications of non-human systems inferring human mental states (in a way that is different from human-human ToM inferences)? How might it impinge on human rights (e.g. freedom of thought, autonomy)?

In the second group activity, teams were asked to ideate on how researchers might approach one of the challenges identified by their team. Participants came up with a variety of approaches that coalesced into four areas: techniques, tools, measures, and theories.

The group activities led our workshop attendees to ponder some of the deeper philosophical questions within MToM in human-AI interaction. They also helped attendees identify some of the most impactful research directions and challenges in domains where MToM is relevant.

We hope these ideas will provide inspiration and direction to human-centered AI researchers examining issues around mutual theory of mind!


The Return of Jill Watson

Georgia Tech’s Design Intelligence Laboratory and NSF’s National AI Institute for Adult Learning and Online Education have developed a new version of the virtual teaching assistant named Jill Watson that uses OpenAI’s ChatGPT, performs better than OpenAI’s Assistant service, enhances teaching and social presence, and correlates with improvements in student grades. As far as we know, this is the first time a chatbot has been shown to improve teaching presence in online education for adult learners.