Explainability through transparency and user control: a case-based recommender for engineering workers.
Martin, Kyle; Liret, Anne; Wiratunga, Nirmalie; Owusu, Gilbert; Kern, Mathias
Within the service-providing industries, field engineers can struggle to access tasks suited to their individual skills and experience. A recommender system has the potential to improve access to information while on site. However, the smooth adoption of such a system is hindered by the challenge of exposing human-understandable evidence of the machine's reasoning. With that in mind, this paper introduces an explainable recommender system to facilitate transparent retrieval of task information for field engineers in the context of service delivery. The presented software adheres to the five goals of an explainable intelligent system and incorporates elements of both Case-Based Reasoning and heuristic techniques to develop a recommendation ranking of tasks. In addition, we evaluate methods of building justifiable representations for similarity-based retrieval on a classification task developed from engineers' notes. Our conclusion highlights the trade-off between performance and explainability.
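The abstract does not specify the retrieval mechanism, but the kind of similarity-based ranking it describes can be illustrated with a minimal sketch. The profile text, task descriptions, and the bag-of-words/cosine representation below are all hypothetical assumptions for illustration, not the paper's actual method:

```python
import math
from collections import Counter

def vectorise(text):
    # Hypothetical representation: a simple bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_tasks(engineer_profile, tasks):
    # Rank candidate tasks by similarity to an engineer's skill profile;
    # returns (task_id, score) pairs, most similar first.
    query = vectorise(engineer_profile)
    scored = [(tid, cosine(query, vectorise(desc))) for tid, desc in tasks]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Illustrative (invented) task descriptions.
tasks = [
    ("T1", "replace copper line at street cabinet"),
    ("T2", "configure fibre optic splitter installation"),
    ("T3", "copper line fault diagnosis and repair"),
]
ranking = rank_tasks("copper line repair experience", tasks)
print(ranking[0][0])  # most similar task: T3
```

Because each score is a transparent overlap between the engineer's profile and a task description, the matched terms themselves can serve as a simple human-readable justification for the ranking, which is the kind of transparency the paper argues for.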
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2018. Explainability through transparency and user control: a case-based recommender for engineering workers. In Minor, M. (ed.) Workshop proceedings for the 26th International conference on case-based reasoning (ICCBR 2018), 9-12 July 2018, Stockholm, Sweden. Stockholm: ICCBR [online], pages 22-31. Available from: http://iccbr18.com/wp-content/uploads/ICCBR-2018-V3.pdf#page=22
|Presentation Conference Type||Conference Paper (unpublished)|
|Conference Name||26th International conference on case-based reasoning (ICCBR 2018)|
|Conference Location||Stockholm, Sweden|
|Start Date||Jul 9, 2018|
|End Date||Jul 12, 2018|
|Deposit Date||Feb 4, 2019|
|Publicly Available Date||Feb 4, 2019|
|Keywords||Case based reasoning; Recommender systems; Explainable AI; Information retrieval; Machine learning|
You might also like
Clinical dialogue transcription error correction using Seq2Seq models.
Adapting semantic similarity methods for case-based reasoning in the Cloud.
How close is too close? Role of feature attributions in discovering counterfactual explanations.
A case-based approach for content planning in data-to-text generation.
MIRATAR: a virtual caregiver for active and healthy ageing.