Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval
Explainable AI (XAI) is currently a vibrant research topic. However, the absence of ground-truth explanations makes it difficult to evaluate XAI systems such as Explainable Search. We present an Explainable Search system with a focus on evaluating the XAI aspect of Trustworthiness along with the retrieval performance. We present SIMFIC 2.0 (Similarity in Fiction), an enhanced version of a recent explainable search system [1]. The system retrieves books similar to a selected book in a query-by-example setting. The motivation is to explain the notion of similarity in fiction books. We extract hand-crafted interpretable features for fiction books and provide global explanations by fitting a linear regression and local explanations based on similarity measures. The Trustworthiness facet is evaluated using user studies, while the ranking performance is compared by analysis of user clicks. Eye tracking is used to investigate user attention to the explanation elements when interacting with the interface. Initial experiments show statistically significant results on the Trustworthiness of the system, paving the way for interesting research directions that are being investigated. CCS CONCEPTS • Computing methodologies → Artificial intelligence; • Information systems → Evaluation of retrieval results.
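As a hedged sketch of the approach this abstract describes (not the authors' actual implementation; the feature names and data below are hypothetical), global explanations can come from the coefficients of a linear fit over interpretable features, and local explanations from per-feature closeness between a query book and a result:

```python
import numpy as np

# Hypothetical hand-crafted, interpretable features per book (illustrative
# names only, not the features actually used in SIMFIC 2.0)
feature_names = ["sentiment", "pacing", "readability", "dialogue_ratio"]
rng = np.random.default_rng(0)
X = rng.random((50, len(feature_names)))  # feature matrix for 50 books
y = rng.random(50)                        # target similarity scores

# Global explanation: least-squares linear fit; the coefficients indicate
# how strongly each feature drives the overall notion of similarity
A = np.c_[X, np.ones(len(X))]             # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
global_weights = dict(zip(feature_names, coef[: len(feature_names)]))

# Local explanation: per-feature closeness between a query book and a result
query, result = X[0], X[1]
local_similarity = {
    name: 1.0 - abs(q - r)                # 1.0 = identical on this feature
    for name, q, r in zip(feature_names, query, result)
}
print(sorted(local_similarity, key=local_similarity.get, reverse=True))
```

The sorted feature list then serves as the ranked evidence shown to the user.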
2021
Nowadays, search engines, social media and news aggregators are the preferred services for news access. Aggregation is mostly based on artificial intelligence technologies, raising a new challenge: trust has been ranked as the most important factor for the media business. This paper reports the findings of a study evaluating how manipulations of interface design and of the information provided in the context of eXplainable Artificial Intelligence (XAI) influence user perception of news content aggregators. In an experimental online study, various layouts and scenarios were developed, implemented and tested with 266 participants. Measures of trust, understanding and preference were recorded. Results showed no influence of the factors on trust. However, the data indicate that the layout, for example the implicit integration of the media source through layout structure, has a significant effect on the perceived importance of citing the source of a media item. Moreover, the a...
Communications in Computer and Information Science, 2021
Text similarity has significant applications in many real-world problems. Text similarity estimation using NLP techniques can be leveraged to automate a variety of tasks that are relevant in business and social contexts. The outcomes given by AI-powered automated systems provide guidance for humans taking decisions. However, since the AI-powered system is a "black box", for a human to trust its outcome and to take the right decision or action based on it, there needs to be an interface between the human and the machine which can explain the reason for the outcome; that interface is what we call "Explainable AI". In this paper, we make a twofold attempt: 1) to build a state-of-the-art Text Similarity Scoring System that matches two texts based on semantic similarity, and 2) to build an Explanation Generation Methodology that produces a human-interpretable explanation for the text similarity match score.
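A minimal toy sketch of the idea of pairing a similarity score with evidence for it (this is not the paper's semantic system; it uses plain term-count cosine similarity, and the shared-term "explanation" is purely illustrative):

```python
from collections import Counter
import math

def similarity_with_explanation(text_a: str, text_b: str):
    """Score two texts by cosine similarity over term counts and return
    the shared terms that drove the score (a toy explanation)."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    score = dot / norm if norm else 0.0
    # Rank shared terms by their contribution to the dot product
    evidence = sorted(shared, key=lambda t: a[t] * b[t], reverse=True)
    return score, evidence

score, why = similarity_with_explanation(
    "the cat sat on the mat", "the cat lay on the rug")
print(score, why)
```

A semantic system would replace the term counts with embeddings, but the score-plus-evidence interface stays the same.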
Int Journal of Human Computer Studies, 2021
Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches that trace human-interpretable decision processes from algorithms have been explored. Yet, little is known about algorithmic explainability from a human-factors perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm, and examines them in relation to trust by testing how they affect users' perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate user trust, whereas causability, i.e., the extent to which users can understand the explanations, affords them emotional confidence. Causability lends justification for what should be explained and how, as it determines the relative importance of the properties of explainability. The results have implications for the inclusion of causability and explanatory cues in AI systems, which help to increase trust and help users to assess the quality of explanations. Causable explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems.
arXiv (Cornell University), 2018
The question addressed in this paper is: if we present to a user an AI system that explains how it works, how do we know whether the explanation works and the user has achieved a pragmatic understanding of the AI? In other words, how do we know that an explainable AI system (XAI) is any good? Our focus is on the key concepts of measurement. We discuss specific methods for evaluating: (1) the goodness of explanations, (2) whether users are satisfied by explanations, (3) how well users understand the AI systems, (4) how curiosity motivates the search for explanations, (5) whether the user's trust and reliance on the AI are appropriate, and finally, (6) how the human-XAI work system performs. The recommendations we present derive from our integration of extensive research literatures and our own psychometric evaluations.
arXiv (Cornell University), 2023
EXplainable Artificial Intelligence (XAI) aims to help users grasp the reasoning behind the predictions of an Artificial Intelligence (AI) system. Many XAI approaches have emerged in recent years. Consequently, a subfield devoted to the evaluation of XAI methods has gained considerable attention, with the aim of determining which methods provide the best explanation under various approaches and criteria. However, the literature lacks a comparison of the evaluation metrics themselves that one can use to evaluate XAI methods. This work aims to fill this gap by comparing 14 different metrics applied to nine state-of-the-art XAI methods and three dummy methods (e.g., random saliency maps) used as references. Experimental results show which of these metrics produce highly correlated results, indicating potential redundancy. We also demonstrate the significant impact of varying the baseline hyperparameter on the evaluation metric values. Finally, we use dummy methods to assess the reliability of the metrics in terms of ranking, pointing out their limitations.
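A hedged sketch of the kind of redundancy check described here (the method names and scores below are invented for illustration; the paper's actual metrics and setup differ): if two evaluation metrics rank the same set of XAI methods almost identically, their rank correlation will be high and one of them may be redundant.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical scores of four XAI methods under three evaluation metrics
methods = ["GradCAM", "LIME", "SHAP", "random_baseline"]
scores = {
    "faithfulness": np.array([0.81, 0.74, 0.77, 0.21]),
    "sensitivity":  np.array([0.79, 0.70, 0.75, 0.25]),
    "complexity":   np.array([0.30, 0.65, 0.55, 0.90]),
}

# Pairwise Spearman rank correlation: values near +/-1 flag possible redundancy
names = list(scores)
for i, m1 in enumerate(names):
    for m2 in names[i + 1:]:
        rho, _ = spearmanr(scores[m1], scores[m2])
        print(f"{m1} vs {m2}: rho = {rho:+.2f}")
```

A dummy method such as `random_baseline` should land at the bottom of every metric's ranking; a metric that fails to rank it last is itself suspect.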
Data
As the performance and complexity of machine learning models have grown significantly over the last years, there has been an increasing need to develop methodologies to describe their behaviour. Such a need has mainly arisen due to the widespread use of black-box models, i.e., high-performing models whose internal logic is challenging to describe and understand. Therefore, the machine learning and AI field is facing a new challenge: making models more explainable through appropriate techniques. The final goal of an explainability method is to faithfully describe the behaviour of a (black-box) model to users who can get a better understanding of its logic, thus increasing the trust and acceptance of the system. Unfortunately, state-of-the-art explainability approaches may not be enough to guarantee the full understandability of explanations from a human perspective. For this reason, human-in-the-loop methods have been widely employed to enhance and/or evaluate explanations of machine...
Proceedings of the 12th International Conference on Agents and Artificial Intelligence, 2020
Communication between robots/agents and humans is a challenge, since humans are typically not capable of understanding the agent's state of mind. To overcome this challenge, this paper relies on recent advances in the domain of eXplainable Artificial Intelligence (XAI) to trace the decisions of the agents, improve the human's understanding of the agents' behavior, and hence improve efficiency and user satisfaction. In particular, we propose a Human-Agent EXplainability Architecture (HAEXA) to model human-agent explainability. HAEXA filters the explanations provided by the agents to the human user to reduce the user's cognitive load. To evaluate HAEXA, a human-computer interaction experiment is conducted, where participants watch an agent-based simulation of aerial package delivery and fill in a questionnaire that collects their responses. The questionnaire is built according to XAI metrics as established in the literature. The significance of the results is verified using Mann-Whitney U tests. The results show that the explanations increase the understandability of the simulation for human users. However, too many details in the explanations overwhelm them; hence, in many scenarios, it is preferable to filter the explanations.
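For questionnaire data like this, a Mann-Whitney U test compares two independent groups of ordinal ratings without assuming normality. A minimal sketch with invented Likert-scale responses (the group labels and numbers are hypothetical, not the study's data):

```python
from scipy.stats import mannwhitneyu

# Hypothetical 5-point Likert "understandability" ratings from two groups:
# participants who saw explanations vs. a control group without them
with_explanations = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4]
without_explanations = [3, 2, 3, 4, 2, 3, 3, 2, 4, 3]

# Two-sided Mann-Whitney U test: do the rating distributions differ?
stat, p_value = mannwhitneyu(with_explanations, without_explanations,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
```

A small p-value (conventionally below 0.05) indicates the two groups' ratings are unlikely to come from the same distribution.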
2020
Explainability has been an important goal since the early days of Artificial Intelligence. Several approaches for producing explanations have been developed. However, many of these approaches were tightly coupled with the capabilities of the artificial intelligence systems at the time. With the proliferation of AI-enabled systems in sometimes critical settings, there is a need for them to be explainable to end-users and decision-makers. We present a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains. Additionally, borrowing from the strengths of past approaches and identifying gaps needed to make explanations user- and context-focused, we propose new definitions for explanations and explainable knowledge-enabled systems.
OAlib, 2022
In the contemporary era, artificial intelligence (AI) has introduced transformative advancements with significant implications for society. Nevertheless, these advancements come with challenges, notably those associated with opacity, vulnerability, and interpretability. The integration of AI systems into various aspects of human life has become increasingly pervasive. Consequently, there is a growing need to prioritize the development of trustworthy and explainable artificial intelligence (XAI) as a paramount concern within the field. The main purpose of this paper is to explore the paramount importance of XAI, clarify its multifaceted meanings, and outline a series of guiding principles essential for the development of XAI. These principles simultaneously act as overarching objectives, directing the course towards ensuring transparency, accountability, and reliability in AI systems. Additionally, the paper presents two novel strategies to actualize XAI by narrowing the gap between AI's potential and human understanding. By addressing the intricate issues associated with XAI, this study adds to the continuing dialogue on how one might tap into the complete potential of AI technology, ensuring its responsible and ethical implementation in an ever-evolving digital environment.
2021
In this paper, we present the design and implementation of a knowledge graph-based recommender system for research paper suggestion, along with two explainable interfaces which provide different types of explanations to the users interacting with the recommender. Our work, developed within the academic context of the Georg Eckert Institute for International Textbook Research, aims to assess the effectiveness of the explanations among the researchers of the institute and to understand which characteristics of the interfaces themselves are perceived as most interpretable, leading to increased trust and confidence in the recommender system and its credibility. We evaluated our work through a user study performed among different experts covering several research fields. All participants were asked to take part in an online survey, and a focus group answered some targeted interviews. This last qualitative evaluation aims to better understand the interaction patterns within the t...
Proceedings of the 31st ACM International Conference on Information & Knowledge Management
We present X-Vision, an explainable AI (XAI) driven image retrieval system based on a re-ranking approach to support non-expert users. We generate textual explanations such as "This image is similar to the query image in color by Y%, shape by Z%" along with visual explanations that compare image features. Besides the XAI goal of making AI systems transparent, we address the semantic gap between the user's perception and the model's ranking, which arises in content-based image retrieval (CBIR). We attempt to explain the notion of similarity in images in a query-by-example scenario, starting with relatively simple features such as color, texture, objects, and background-foreground segments, and moving to semantic representations learned from hidden layers of deep networks. The base retrieval model compares the query vector with other image feature vectors to create rankings. This result list is transferred to a semantic feature space that allows rule-based re-rankings. The core contribution of this work is a re-ranking algorithm for generating explanations. Our re-ranking improves retrieval performance (MAP) when compared with a base ranker, a random baseline, and recent CBIR baseline rankers on PASCAL VOC data. In an eye-tracker-based user study we evaluate XAI-focused aspects of user trust and find that the explanations supported users in the search process and in understanding the notion of similarity. CCS CONCEPTS • Information systems → Similarity measures.
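A toy sketch of turning per-feature similarity scores into textual explanations of the form quoted above (the formatting and feature values are invented; this is not X-Vision's actual explanation generator):

```python
def explain_match(feature_sims: dict) -> str:
    """Render per-feature similarity scores (0..1) as a textual
    explanation, listing the strongest features first (toy formatting)."""
    parts = [f"{name} by {sim:.0%}"
             for name, sim in sorted(feature_sims.items(),
                                     key=lambda kv: -kv[1])]
    return "This image is similar to the query image in " + ", ".join(parts)

# Hypothetical feature similarities between a query and a result image
print(explain_match({"color": 0.82, "shape": 0.64, "texture": 0.47}))
```

Sorting the features by score means the explanation leads with the strongest evidence for the match.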
Journal of digital art & humanities, 2023
Explainable Artificial Intelligence (XAI) has emerged as a critical field in AI research, addressing the lack of transparency and interpretability in complex AI models. This conceptual review explores the significance of XAI in promoting trust and transparency in AI systems. The paper analyzes existing literature on XAI, identifies patterns and gaps, and presents a coherent conceptual framework. Various XAI techniques, such as saliency maps, attention mechanisms, rule-based explanations, and model-agnostic approaches, are discussed to enhance interpretability. The paper highlights the challenges posed by black-box AI models, explores the role of XAI in enhancing trust and transparency, and examines the ethical considerations and responsible deployment of XAI. By promoting transparency and interpretability, this review aims to build trust, encourage accountable AI systems, and contribute to the ongoing discourse on XAI.
2021
We present an explainable document search system (ExDocS), based on a re-ranking approach, that uses textual and visual explanations to explain document rankings to non-expert users. ExDocS attempts to answer questions such as “Why is document X ranked at Y for a given query?”, “How do we compare multiple documents to understand their relative rankings?”. The contribution of this work is on re-ranking methods based on various interpretable facets of evidence such as term statistics, contextual words, and citation-based popularity. Contribution from the user interface perspective consists of providing intuitive accessible explanations such as: “document X is at rank Y because of matches found like Z” along with visual elements designed to compare the evidence and thereby explain the rankings. The quality of our re-ranking approach is evaluated on benchmark data sets in an ad-hoc retrieval setting. Due to the absence of ground truth of explanations, we evaluate the aspects of interpre...
2020
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations to consider trustworthiness, comprehensibility, explicit provenance, and context-awareness. In this chapter, we leverage our survey of explanation literature in Artificial Intelligence and closely related fields and use these past efforts to generate a set of explanation types that we feel reflect the expanded needs of explanation for today's artificial intelligence applications. We define each type and provide an example question that would motivate the need for this style of explanation. We believe this set of explanation types will help future system ...
2020
The rapid and pervasive development of methods from Artificial Intelligence (AI) affects our everyday life. Their application improves the user experience of many daily tasks. Despite the enhancements provided, such approaches have a substantial limitation in the shortfall of people's trust connected with their lack of explainability. In natural language understanding (NLU) and processing (NLP), a fundamental objective is to support human interactions by making sense of language for communication. Such methods try to comprehend and reproduce the self-evident processes of human communication, whether in receiving speech signals or in extracting relevant information from a text. Furthermore, the pervasiveness of AI methods in the workplace and in free time demands sustainable and verified support of users' trust, as a natural condition for their acceptance. The objective of this work is to introduce a framework for the calculation and selection of understandable text features. Such features can increase the confidence placed in adopted NLP solutions. This work outlines the Text Feature Framework and its text features, based on statistical information from a general text corpus. A showcase experiment uses those features to verify them on the concept recognition task. The results show their capability to explain a model and its predictions. The resulting concept recognition models are competitive with other methods in the literature, and the approach has the distinct advantage of being able to externalize the supporting evidence for a choice of concept identification.
2021
Trust between humans and artificial intelligence (AI) is an issue with implications in many fields of human-computer interaction. A current issue with artificial intelligence is the lack of transparency in its decision making, and the literature shows that increasing transparency increases trust. Explainable artificial intelligence has the ability to increase the transparency of AI, which could potentially increase trust for humans. This paper uses the task of predicting Yelp review star ratings, with assistance from an explainable and a non-explainable artificial intelligence, to see if trust is increased with increased transparency. Results show that for these tasks, the explainable artificial intelligence provided a significant increase in trust as a measure of influence.
Proceedings of the 16th ACM Conference on Recommender Systems
The goal of this tutorial is to present the RecSys community with recent advances on explainable recommender systems with knowledge graphs. We will first introduce conceptual foundations, by surveying the state of the art and describing real-world examples of how knowledge graphs are being integrated into the recommendation pipeline, also for the purpose of providing explanations. This tutorial will continue with a systematic presentation of algorithmic solutions to model, integrate, train, and assess a recommender system with knowledge graphs, with particular attention to the explainability perspective. A practical part will then provide attendees with concrete implementations of recommender systems with knowledge graphs, leveraging open-source tools and public datasets; in this part, tutorial participants will be engaged in the design of explanations accompanying the recommendations and in articulating their impact. We conclude the tutorial by analyzing emerging open issues and future directions. Website: https://explainablerecsys.github.io/recsys2022/. CCS CONCEPTS • Information systems → Recommender systems; • Applied computing → Law, social and behavioral sciences; • Computing methodologies → Semantic networks.
Journal on Multimodal User Interfaces
While the research area of artificial intelligence benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility, especially for end-users. In this paper, we explore the effects of incorporating virtual agents into explainable artificial intelligence (XAI) designs on the perceived trust of end-users. For this purpose, we conducted a user study based on a simple speech recognition system for keyword classification. As a result of this experiment, we found that the integration of virtual agents leads to increased user trust in the XAI system. Furthermore, we found that the user’s trust significantly depends on the modalities that are used within the user-agent interface design. The results of our study show a linear trend where the visual presence of an agent combined with a voice output resulted in greater trust than the output of text or the voice output alone. Additionally, we an...
2021
Explainability is a key requirement for text classification in many application domains ranging from sentiment analysis to medical diagnosis or legal reviews. Existing methods often rely on “attention” mechanisms for explaining classification results by estimating the relative importance of input units. However, recent studies have shown that such mechanisms tend to mis-identify irrelevant input units in their explanation. In this work, we propose a hybrid human-AI approach that incorporates human rationales into attention-based text classification models to improve the explainability of classification results. Specifically, we ask workers to provide rationales for their annotation by selecting relevant pieces of text. We introduce MARTA, a Bayesian framework that jointly learns an attention-based model and the reliability of workers while injecting human rationales into model training. We derive a principled optimization algorithm based on variational inference with efficient updat...