Research Repository

DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of the 33rd IEEE (Institute of Electrical and Electronics Engineers) International conference on tools with artificial intelligence 2021 (ICTAI 2021), 1-3 November 2021, Washington, USA [virtual conference]. Piscataway: IEEE [online], pages 1466-1473. Available from: https://doi.org/10.1109/ICTAI52525.2021.00233

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies tha...
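
The abstract is truncated above, but the title indicates a neighbourhood-based approach. Purely as a hedged illustration (not the DisCERN algorithm itself), the sketch below shows one generic way a counterfactual explainer can adapt feature values from a nearest unlike neighbour, changing the most relevant features first until the predicted outcome flips; the `predict` callable, the `feature_order` ranking, and the Euclidean distance metric are all assumptions of this sketch, not details taken from the paper.

```python
import numpy as np

def nun_counterfactual(x, X, y, predict, desired_class, feature_order):
    """Hypothetical sketch: counterfactual via a nearest unlike neighbour (NUN).

    Copies feature values from the nearest training instance with the
    desired outcome into x, most relevant feature first, stopping as
    soon as the model's prediction flips.
    """
    unlike = X[y == desired_class]  # training instances with the desired outcome
    nun = unlike[np.argmin(np.linalg.norm(unlike - x, axis=1))]  # nearest unlike neighbour

    counterfactual = x.copy()
    for f in feature_order:  # e.g. features ranked by a relevance explainer
        counterfactual[f] = nun[f]  # adapt one feature value from the NUN
        if predict(counterfactual) == desired_class:
            return counterfactual  # outcome flipped: minimal change found
    return counterfactual  # falls back to the full NUN
```

The title suggests that feature relevance drawn from neighbourhoods is the paper's central contribution; in this sketch any ranking, relevance-based or otherwise, can be passed as `feature_order`.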

Counterfactual explanations for student outcome prediction with Moodle footprints. (2021)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021 [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 1, pages 1-8. Available from: http://ceur-ws.org/Vol-2894/short1.pdf

Counterfactual explanations focus on “actionable knowledge” to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose a counterfactual explainer needs to be able to reason with simila...

Evaluating explainability methods intended for multiple stakeholders. (2021)
Journal Article
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2021. Evaluating explainability methods intended for multiple stakeholders. KI - Künstliche Intelligenz [online], 35(3-4), pages 397-411. Available from: https://doi.org/10.1007/s13218-020-00702-6

Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanat...