Developing a catalogue of explainability methods to support expert and non-expert users.
Martin, Kyle; Liret, Anne; Wiratunga, Nirmalie; Owusu, Gilbert; Kern, Mathias
Professor Nirmalie Wiratunga email@example.com
Organisations face growing legal requirements and ethical responsibilities to ensure that decisions made by their intelligent systems are explainable. However, provisioning of an explanation is often application dependent, causing an extended design phase and delayed deployment. In this paper we present an explainability framework formed of a catalogue of explanation methods, allowing integration into a range of projects within a telecommunications organisation. These methods are split into low-level explanations, high-level explanations and co-created explanations. We motivate and evaluate this framework using the specific case study of explaining the conclusions of field engineering experts to non-technical planning staff. Feedback from an iterative co-creation process and a qualitative evaluation indicates that this is a valuable development tool for use in future company projects.
|Start Date|Dec 17, 2019|
|Publication Date|Nov 30, 2019|
|Series Title|Lecture notes in computer science|
|Book Title|Artificial intelligence XXXVI|
|Institution Citation|MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2019. Developing a catalogue of explainability methods to support expert and non-expert users. In Bramer, M. and Petridis, M. (eds.) Artificial intelligence XXXVI: proceedings of the 39th British Computer Society's Specialist Group on Artificial Intelligence (SGAI) International conference on artificial intelligence (AI 2019), 17-19 December 2019, Cambridge, UK. Lecture notes in computer science, 11927. Cham: Springer [online], pages 309-324. Available from: https://doi.org/10.1007/978-3-030-34885-4_24|
|Keywords|Machine learning; Similarity modeling; Explainability; Information retrieval|
You might also like
Evaluating the transferability of personalised exercise recognition models.
Learning to recognise exercises for the self-management of low back pain.
A knowledge-light approach to personalised and open-ended human activity recognition.
Human activity recognition with deep metric learners.