
Evaluating explainability methods intended for multiple stakeholders: an explainability framework for subject experts and non-experts in telecommunications.

Martin, Kyle; Liret, Anne; Wiratunga, Nirmalie; Owusu, Gilbert; Kern, Mathias

Authors

Kyle Martin

Anne Liret

Nirmalie Wiratunga

Gilbert Owusu

Mathias Kern



Abstract

Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanation needs of different groups using a single system. In this paper we present an explainability framework formed of a catalogue of explanation methods and designed to integrate with a range of projects within a telecommunications organisation. Explainability methods are split into low-level explanations and high-level explanations, which offer increasing levels of contextual support. We motivate this framework using the specific case study of explaining the conclusions of field network engineering experts to non-technical planning staff, and evaluate our results using feedback from two distinct user groups: domain-expert telecommunication engineers and non-expert desk agent staff. We also present and investigate two metrics designed to model the quality of explanations: Meet-In-The-Middle (MITM) and Trust-Your-Neighbours (TYN). Our analysis of these metrics offers new insights into the use of similarity knowledge for the evaluation of explanations.

Citation

MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. [2020]. Evaluating explainability methods intended for multiple stakeholders: an explainability framework for subject experts and non-experts in telecommunications. KI - Künstliche Intelligenz [online], (accepted).

Journal Article Type: Article
Acceptance Date: Dec 30, 2020
Deposit Date: Jan 7, 2021
Journal: KI - Künstliche Intelligenz
Print ISSN: 0933-1875
Electronic ISSN: 1610-1987
Publisher: Springer Verlag
Peer Reviewed: Peer Reviewed
Keywords: Machine learning; Similarity modeling; Explainability; Information retrieval
Public URL: https://rgu-repository.worktribe.com/output/1085000

This file is under embargo due to copyright reasons.

Contact publications@rgu.ac.uk to request a copy for personal use.



