Evaluating explainability methods intended for multiple stakeholders.
Martin, Kyle; Liret, Anne; Wiratunga, Nirmalie; Owusu, Gilbert; Kern, Mathias
Authors
Kyle Martin
Anne Liret
Professor Nirmalie Wiratunga n.wiratunga@rgu.ac.uk
Gilbert Owusu
Mathias Kern
Abstract
Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanation needs of different groups using a single system. In this paper we present an explainability framework formed of a catalogue of explanation methods, designed to integrate with a range of projects within a telecommunications organisation. Explainability methods are split into low-level and high-level explanations, offering increasing degrees of contextual support. We motivate this framework using the specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agent staff. We also present and investigate two metrics designed to model the quality of explanations: Meet-In-The-Middle (MITM) and Trust-Your-Neighbours (TYN). Our analysis of these metrics offers new insights into the use of similarity knowledge for the evaluation of explanations.
Citation
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. [2021]. Evaluating explainability methods intended for multiple stakeholders. KI - Künstliche Intelligenz [online], Online First. Available from: https://doi.org/10.1007/s13218-020-00702-6
Journal Article Type | Article |
---|---|
Acceptance Date | Dec 31, 2020 |
Online Publication Date | Feb 7, 2021 |
Deposit Date | Jan 7, 2021 |
Publicly Available Date | Feb 7, 2021 |
Journal | KI - Künstliche Intelligenz |
Print ISSN | 0933-1875 |
Electronic ISSN | 1610-1987 |
Publisher | Springer Verlag |
Peer Reviewed | Peer Reviewed |
DOI | https://doi.org/10.1007/s13218-020-00702-6 |
Keywords | Machine learning; Similarity modeling; Explainability; Information retrieval |
Public URL | https://rgu-repository.worktribe.com/output/1085000 |
Files
MARTIN 2021 Evaluating explainability
(2.1 MB)
PDF
Publisher Licence URL
https://creativecommons.org/licenses/by/4.0/
You might also like
Assessing the clinicians’ pathway to embed artificial intelligence for assisted diagnostics of fracture detection.
(2020)
Conference Proceeding
Locality sensitive batch selection for triplet networks.
(2020)
Conference Proceeding
Preface: case-based reasoning and deep learning.
(2020)
Conference Proceeding
Human activity recognition with deep metric learners.
(2020)
Conference Proceeding
Developing a catalogue of explainability methods to support expert and non-expert users.
(2019)
Conference Proceeding