Abstract
The rapid proliferation of Explainable Artificial Intelligence (XAI) has led to numerous evaluation frameworks intended to guide and optimize XAI design. However, these frameworks often emphasize the technical properties of XAI artifacts while overlooking the nuanced perceptions and values of end-users. Recognizing that XAI affects society and individuals in non-neutral ways, this study adopts a human-centered approach and systematically examines how recommended XAI properties affect the general public in everyday scenarios, through a formative study with 87 end-users. The findings reveal that comprehensibility is the most valued XAI property, whereas frequently advocated properties such as contrastivity may have net negative effects. These results highlight the need for a goal-driven, reverse-engineering approach that integrates human values into XAI design to ensure positive user outcomes. This paper bridges the gap between XAI design and its impact on end-users, offering practical guidance for user-centered XAI development in everyday contexts.