Evaluating explainability methods intended for multiple stakeholders.

Authors
Martin, Kyle (k.martin3@rgu.ac.uk); Liret, Anne; Wiratunga, Nirmalie (n.wiratunga@rgu.ac.uk); Owusu, Gilbert; Kern, Mathias
Abstract
Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This presents a challenge to organisations looking to satisfy the explanation needs of different groups using a single system. In this paper we present an explainability framework formed of a catalogue of explanation methods and designed to integrate with a range of projects within a telecommunications organisation. Explainability methods are split into low-level and high-level explanations, offering increasing levels of contextual support. We motivate this framework using the specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agents. We also present and investigate two metrics designed to model the quality of explanations: Meet-In-The-Middle (MITM) and Trust-Your-Neighbours (TYN). Our analysis of these metrics offers new insights into the use of similarity knowledge for the evaluation of explanations.
Citation
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2021. Evaluating explainability methods intended for multiple stakeholders. KI - Künstliche Intelligenz [online], 35(3-4), pages 397-411. Available from: https://doi.org/10.1007/s13218-020-00702-6
Journal Article Type | Article
---|---
Acceptance Date | Dec 31, 2020
Online Publication Date | Feb 7, 2021
Publication Date | Nov 30, 2021
Deposit Date | Jan 7, 2021
Publicly Available Date | Mar 29, 2024
Journal | KI - Künstliche Intelligenz
Print ISSN | 0933-1875
Electronic ISSN | 1610-1987
Publisher | Springer
Peer Reviewed | Peer Reviewed
Volume | 35
Issue | 3-4
Pages | 397-411
DOI | https://doi.org/10.1007/s13218-020-00702-6
Keywords | Machine learning; Similarity modeling; Explainability; Information retrieval
Public URL | https://rgu-repository.worktribe.com/output/1085000
Files
MARTIN 2021 Evaluating explainability (VOR)
(2.1 MB)
PDF
Publisher Licence URL
https://creativecommons.org/licenses/by/4.0/
You might also like
iSee: intelligent sharing of explanation experience of users for users.
(2023)
Conference Proceeding
iSee: demonstration video. [video recording]
(2023)
Digital Artefact
Adapting semantic similarity methods for case-based reasoning in the Cloud.
(2022)
Conference Proceeding
How close is too close? Role of feature attributions in discovering counterfactual explanations.
(2022)
Conference Proceeding
A case-based approach for content planning in data-to-text generation.
(2022)
Conference Proceeding