Research Repository

Dr Kyle Martin


How close is too close? Role of feature attributions in discovering counterfactual explanations. (2022)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., PALIHAWADANA, C., CORSAR, D. and MARTIN, K. 2022. How close is too close? Role of feature attributions in discovering counterfactual explanations. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 33-47. Available from: https://doi.org/10.1007/978-3-031-14923-8_3

Counterfactual explanations describe how an outcome can be changed to a more desirable one. In XAI, counterfactuals are "actionable" explanations that help users to understand how model decisions can be changed by adapting features of an input. A cas...

Computer vision and machine learning for medical image analysis: recent advances, challenges, and way forward. (2022)
Journal Article
ELYAN, E., VUTTIPITTAYAMONGKOL, P., JOHNSTON, P., MARTIN, K., MCPHERSON, K., MORENO-GARCIA, C.F., JAYNE, C. and SARKER, M.M.K. 2022. Computer vision and machine learning for medical image analysis: recent advances, challenges, and way forward. Artificial intelligence surgery [online], 2, pages 24-25. Available from: https://doi.org/10.20517/ais.2021.15

The recent development in the areas of deep learning and deep convolutional neural networks has significantly progressed and advanced the field of computer vision (CV) and image analysis and understanding. Complex tasks such as classifying and segmen...

DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of 33rd IEEE (Institute of Electrical and Electronics Engineers) International conference on tools with artificial intelligence 2021 (ICTAI 2021), 1-3 November 2021, Washington, USA [virtual conference]. Piscataway: IEEE [online], pages 1466-1473. Available from: https://doi.org/10.1109/ICTAI52525.2021.00233

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies tha...
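As a rough illustration of the nearest-unlike-neighbour (NUN) style of counterfactual discovery this abstract describes, the sketch below copies feature values from the NUN in order of feature relevance until the predicted class flips. The toy classifier, dataset, and relevance ordering are illustrative assumptions, not the DisCERN implementation.

```python
# Toy sketch of NUN-based counterfactual discovery (illustrative assumptions
# throughout; not the authors' code). Features: (income, debt).

def predict(x):
    # toy "model": approve (1) if income - debt >= 20, else reject (0)
    income, debt = x
    return 1 if income - debt >= 20 else 0

def nearest_unlike_neighbour(query, dataset):
    # nearest training case whose predicted class differs from the query's
    target = predict(query)
    unlike = [c for c in dataset if predict(c) != target]
    return min(unlike, key=lambda c: sum((a - b) ** 2 for a, b in zip(query, c)))

def counterfactual(query, dataset, relevance_order):
    # copy feature values from the NUN, most relevant feature first,
    # stopping as soon as the predicted class flips
    nun = nearest_unlike_neighbour(query, dataset)
    cf = list(query)
    for i in relevance_order:
        cf[i] = nun[i]
        if predict(cf) != predict(query):
            break
    return cf

cases = [(50, 10), (40, 25), (30, 5), (25, 20)]
query = (35, 30)                                   # rejected: 35 - 30 < 20
cf = counterfactual(query, cases, relevance_order=[1, 0])
# only the most relevant feature (debt) is adapted, giving a sparse,
# "actionable" change that flips the outcome
```

Stopping at the first class flip keeps the counterfactual sparse, which is one sense in which such explanations are "actionable" for end-users.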

Actionable feature discovery in counterfactuals using feature relevance explainers. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Actionable feature discovery in counterfactuals using feature relevance explainers. In Borck, H., Eisenstadt, V., Sánchez-Ruiz, A. and Floyd, M. (eds.) ICCBR 2021 workshop proceedings (ICCBR-WS 2021): workshop proceedings for the 29th International conference on case-based reasoning (ICCBR 2021), 13-16 September 2021, Salamanca, Spain [virtual conference]. CEUR-WS proceedings, 3017. Aachen: CEUR-WS [online], pages 63-74. Available from: http://ceur-ws.org/Vol-3017/101.pdf

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to be able to reason with si...

Counterfactual explanations for student outcome prediction with Moodle footprints. (2021)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 1, pages 1-8. Available from: http://ceur-ws.org/Vol-2894/short1.pdf

Counterfactual explanations focus on "actionable knowledge" to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose a counterfactual explainer needs to be able to reason with simila...

Non-deterministic solvers and explainable AI through trajectory mining. (2021)
Conference Proceeding
FYVIE, M., MCCALL, J.A.W. and CHRISTIE, L.A. 2021. Non-deterministic solvers and explainable AI through trajectory mining. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 4, pages 75-78. Available from: http://ceur-ws.org/Vol-2894/poster2.pdf

Traditional methods of creating explanations from complex systems involving the use of AI have resulted in a wide variety of tools available to users to generate explanations regarding algorithm and network designs. This however has traditionally bee...

Similarity and explanation for dynamic telecommunication engineer support. (2021)
Thesis
MARTIN, K. 2021. Similarity and explanation for dynamic telecommunication engineer support. Robert Gordon University, PhD thesis. Hosted on OpenAIR [online]. Available from: https://doi.org/10.48526/rgu-wt-1447160

Understanding similarity between different examples is a crucial aspect of Case-Based Reasoning (CBR) systems, but learning representations optimised for similarity comparisons can be difficult. CBR systems typically rely on separate algorithms to le...

Evaluating explainability methods intended for multiple stakeholders. (2021)
Journal Article
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2021. Evaluating explainability methods intended for multiple stakeholders. KI - Künstliche Intelligenz [online], 35(3-4), pages 397-411. Available from: https://doi.org/10.1007/s13218-020-00702-6

Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanat...

Assessing the clinicians’ pathway to embed artificial intelligence for assisted diagnostics of fracture detection. (2020)
Conference Proceeding
MORENO-GARCÍA, C.F., DANG, T., MARTIN, K., PATEL, M., THOMPSON, A., LEISHMAN, L. and WIRATUNGA, N. 2020. Assessing the clinicians’ pathway to embed artificial intelligence for assisted diagnostics of fracture detection. In Bach, K., Bunescu, R., Marling, C. and Wiratunga, N. (eds.) Knowledge discovery in healthcare data 2020: proceedings of the 5th Knowledge discovery in healthcare data international workshop 2020 (KDH 2020), co-located with 24th European Artificial intelligence conference (ECAI 2020), 29-30 August 2020, [virtual conference]. CEUR workshop proceedings, 2675. Aachen: CEUR-WS [online], pages 63-70. Available from: http://ceur-ws.org/Vol-2675/paper10.pdf

Fracture detection has been a long-standing paradigm in the medical imaging community. Many algorithms and systems have been presented to accurately detect and classify images in terms of the presence and absence of fractures in different parts of the...

Locality sensitive batch selection for triplet networks. (2020)
Conference Proceeding
MARTIN, K., WIRATUNGA, N. and SANI, S. 2020. Locality sensitive batch selection for triplet networks. In Proceedings of the 2020 Institute of Electrical and Electronics Engineers (IEEE) International joint conference on neural networks (IEEE IJCNN 2020), part of the 2020 IEEE World congress on computational intelligence (IEEE WCCI 2020) and co-located with the 2020 IEEE congress on evolutionary computation (IEEE CEC 2020) and the 2020 IEEE International fuzzy systems conference (FUZZ-IEEE 2020), 19-24 July 2020, [virtual conference]. Piscataway: IEEE [online], article ID 9207538. Available from: https://doi.org/10.1109/IJCNN48605.2020.9207538

Triplet networks are deep metric learners which learn to optimise a feature space using similarity knowledge gained from training on triplets of data simultaneously. The architecture relies on the triplet loss function to optimise its weights based u...
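The triplet loss this abstract refers to can be sketched as follows; this is an illustrative toy implementation on plain tuples rather than the paper's network code, and the margin value and example vectors are assumptions.

```python
# Sketch of the standard triplet loss used by triplet networks (illustrative,
# not the paper's implementation): the anchor is pulled towards the positive
# and pushed away from the negative by at least a margin.

def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def triplet_loss(anchor, positive, negative, margin=1.0):
    # L = max(d(a, p) - d(a, n) + margin, 0)
    return max(euclidean(anchor, positive) - euclidean(anchor, negative) + margin, 0.0)

# a well-separated triplet incurs zero loss; a "hard" triplet, where the
# negative sits almost as close as the positive, does not
easy = triplet_loss((0.0, 0.0), (0.1, 0.0), (3.0, 0.0))   # clears the margin -> 0.0
hard = triplet_loss((0.0, 0.0), (1.0, 0.0), (1.2, 0.0))   # inside the margin -> positive loss
```

In a real metric learner the distances are computed between embeddings produced by a shared network, and this loss is minimised over batches of triplets; batch selection strategies (as in the paper's title) control which triplets the network sees.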