Explainable AI for Daily Scenarios from End-Users’ Perspective: Non-Use, Concerns, and Ideal Design

Abstract

Research centering humans in explainable artificial intelligence (XAI) has primarily focused on AI model development and high-stakes scenarios. However, as AI becomes increasingly integrated into everyday applications, often in opaque ways, the need for explainability tailored to end-users has grown more urgent. To address this gap, we explore end-users’ perspectives on embedding XAI into everyday AI application scenarios. Our findings reveal that end-users do not naturally accept XAI in their daily lives. When users do seek explanations, they envision XAI designs that promote contextualized understanding, empower them to adopt and adapt to AI systems, and consider the values of multiple stakeholders. We further discuss supporting users’ agency in XAI non-use and alternatives to XAI for managing ambiguity in AI interactions. Additionally, we provide design implications for XAI at personal and societal levels, including understanding users through a computational rationality lens, adaptive design that co-evolves with users, and advancing the “society-in-the-loop” vision with everyday XAI.
