Research Repository

All Outputs (32)

iSee: a case-based reasoning platform for the design of explanation experiences. (2024)
Journal Article
CARO-MARTÍNEZ, M., RECIO-GARCÍA, J.A., DÍAZ-AGUDO, B., DARIAS, J.M., WIRATUNGA, N., MARTIN, K., WIJEKOON, A., NKISI-ORJI, I., CORSAR, D., PRADEEP, P., BRIDGE, D. and LIRET, A. 2024. iSee: a case-based reasoning platform for the design of explanation experiences. Knowledge-based systems [online], 302, article number 112305. Available from: https://doi.org/10.1016/j.knosys.2024.112305

Explainable Artificial Intelligence (XAI) is an emerging field within Artificial Intelligence (AI) that has provided many methods that enable humans to understand and interpret the outcomes of AI systems. However, deciding on the best explanation app...

iSee: demonstration video. [video recording] (2023)
Digital Artefact
WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D. and MARTIN, K. 2023. iSee: demonstration video. [video recording]. New York: ACM [online]. Available from: https://dl.acm.org/doi/10.1145/3581754.3584137#sec-supp

This output presents a demonstration of the iSee platform. iSee is an ongoing project aimed at improving the user experience of AI by harnessing experiences and best practices in Explainable AI. To this end, iSee brings together research and developm...

Escaping traditional outreach: digital escape rooms to engage potential students. (2022)
Presentation / Conference Contribution
MARTIN, K., WRIGHT, R. and ZARB, M. 2022. Escaping traditional outreach: digital escape rooms to engage potential students. Presented at the 2022 RGU annual learning and teaching conference (RGU LTC 2022): enhancing for impact, 21 October 2022, Aberdeen, UK.

Outreach has been linked to many advantages, including improving access for typically disadvantaged students and raising their completion rates. The "Access To" programme is an outreach initiative for under-represented groups in secondary education....

Clinical dialogue transcription error correction using Seq2Seq models. (2022)
Preprint / Working Paper
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2022. Clinical dialogue transcription error correction using Seq2Seq models. arXiv [online]. Available from: https://doi.org/10.48550/arXiv.2205.13572

Good communication is critical to good healthcare. Clinical dialogue is a conversation between health practitioners and their patients, with the explicit goal of obtaining and sharing medical information. This information contributes to medical decis...
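
As a rough illustration of the approach named in the title, the sketch below runs a generic sequence-to-sequence model over a noisy clinical sentence; the checkpoint and prompt are placeholder assumptions, not the fine-tuned models used in the paper.

```python
# Minimal sketch of seq2seq transcription error correction, assuming a generic
# Hugging Face text2text model; the checkpoint and prompt are illustrative
# placeholders, not the paper's fine-tuned clinical models.
from transformers import pipeline

corrector = pipeline("text2text-generation", model="google/flan-t5-small")

# Hypothetical automatic speech recognition output containing errors.
asr_output = "the patient reports pain in the left nee after nee replacement"
prompt = f"Correct the transcription errors in this clinical sentence: {asr_output}"

result = corrector(prompt, max_new_tokens=64)
print(result[0]["generated_text"])
```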

Computer vision and machine learning for medical image analysis: recent advances, challenges, and way forward. (2022)
Journal Article
ELYAN, E., VUTTIPITTAYAMONGKOL, P., JOHNSTON, P., MARTIN, K., MCPHERSON, K., MORENO-GARCIA, C.F., JAYNE, C. and SARKER, M.M.K. 2022. Computer vision and machine learning for medical image analysis: recent advances, challenges, and way forward. Artificial intelligence surgery [online], 2, pages 24-25. Available from: https://doi.org/10.20517/ais.2021.15

The recent development in the areas of deep learning and deep convolutional neural networks has significantly progressed and advanced the field of computer vision (CV) and image analysis and understanding. Complex tasks such as classifying and segmen...

Similarity and explanation for dynamic telecommunication engineer support. (2021)
Thesis
MARTIN, K. 2021. Similarity and explanation for dynamic telecommunication engineer support. Robert Gordon University, PhD thesis. Hosted on OpenAIR [online]. Available from: https://doi.org/10.48526/rgu-wt-1447160

Understanding similarity between different examples is a crucial aspect of Case-Based Reasoning (CBR) systems, but learning representations optimised for similarity comparisons can be difficult. CBR systems typically rely on separate algorithms to le...
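
To illustrate the retrieval step that such similarity representations serve, here is a minimal sketch of case retrieval by cosine similarity; the feature vectors and metric are illustrative assumptions, not the learned similarity measures developed in the thesis.

```python
# Minimal sketch of similarity-based case retrieval, the core operation of a
# CBR system; the case base and cosine metric are illustrative placeholders.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical case base: each row is a feature vector for a past task.
case_base = np.array([
    [0.9, 0.1, 0.3],
    [0.2, 0.8, 0.5],
    [0.4, 0.4, 0.9],
])

query = np.array([0.85, 0.15, 0.35])

# Rank cases by similarity to the query and reuse the most similar one.
scores = [cosine_similarity(query, case) for case in case_base]
best = int(np.argmax(scores))
print(f"Most similar case: {best} (score={scores[best]:.3f})")
```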

Evaluating explainability methods intended for multiple stakeholders. (2021)
Journal Article
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2021. Evaluating explainability methods intended for multiple stakeholders. KI - Künstliche Intelligenz [online], 35(3-4), pages 397-411. Available from: https://doi.org/10.1007/s13218-020-00702-6

Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanat...

CBR driven interactive explainable AI.
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., MARTIN, K., CORSAR, D., NKISI-ORJI, I., PALIHAWADANA, C., BRIDGE, D., PRADEEP, P., AGUDO, B.D. and CARO-MARTÍNEZ, M. 2023. CBR driven interactive explainable AI. In MASSIE, S. and CHAKRABORTI, S. (eds.) 2023. Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 169-184. Available from: https://doi.org/10.1007/978-3-031-40177-0_11

Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requi...

CBR-RAG: case-based reasoning for retrieval augmented generation in LLMs for legal question answering.
Presentation / Conference Contribution
WIRATUNGA, N., ABEYRATNE, R., JAYAWARDENA, L., MARTIN, K., MASSIE, S., NKISI-ORJI, I., WEERASINGHE, R., LIRET, A. and FLEISCH, B. 2024. CBR-RAG: case-based reasoning for retrieval augmented generation in LLMs for legal question answering. In Recio-Garcia, J.A., Orozco-del-Castillo, M.G. and Bridge, D. (eds.) Case-based reasoning research and development: proceedings of the 32nd International conference on case-based reasoning 2024 (ICCBR 2024), 1-4 July 2024, Merida, Mexico. Lecture notes in computer science, 14775. Cham: Springer [online], pages 445-460. Available from: https://doi.org/10.1007/978-3-031-63646-2_29

Retrieval-Augmented Generation (RAG) enhances Large Language Model (LLM) output by providing prior knowledge as context to input. This is beneficial for knowledge-intensive and expert reliant tasks, including legal question-answering, which require e...
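
A minimal sketch of the retrieve-then-generate pattern the title refers to, assuming a toy TF-IDF retriever over a hypothetical case base of legal question-answer pairs; the paper's learned embeddings and LLM call are replaced by placeholders.

```python
# Minimal sketch of CBR-style retrieval feeding a RAG prompt; the case base,
# retriever and prompt format are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical case base of past legal question-answer pairs.
case_base = [
    ("What is consideration in contract law?",
     "Consideration is something of value exchanged between the parties."),
    ("When is a contract void for mistake?",
     "A contract may be void where both parties share a fundamental mistake."),
]

questions = [q for q, _ in case_base]
vectoriser = TfidfVectorizer().fit(questions)

def retrieve(query: str) -> tuple[str, str]:
    """Return the most similar past question/answer pair (the retrieved case)."""
    sims = cosine_similarity(vectoriser.transform([query]),
                             vectoriser.transform(questions))[0]
    return case_base[sims.argmax()]

query = "What does consideration mean in a contract?"
q, a = retrieve(query)

# The retrieved case is injected as context for the LLM (the LLM call is omitted).
prompt = f"Context: Q: {q} A: {a}\n\nAnswer the question: {query}"
print(prompt)
```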

Clinical dialogue transcription error correction using Seq2Seq models.
Presentation / Conference Contribution
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2022. Clinical dialogue transcription error correction using Seq2Seq models. In Shaban-Nejad, A., Michalowski, M. and Bianco, S. (eds.) Multimodal AI in healthcare: a paradigm shift in health intelligence; selected papers from the 6th International workshop on health intelligence (W3PHIAI-22), co-located with the 34th AAAI (Association for the Advancement of Artificial Intelligence) Innovative applications of artificial intelligence (IAAI-22), 28 February - 1 March 2022, [virtual event]. Studies in computational intelligence, 1060. Cham: Springer [online], pages 41-57. Available from: https://doi.org/10.1007/978-3-031-14771-5_4

Good communication is critical to good healthcare. Clinical dialogue is a conversation between health practitioners and their patients, with the explicit goal of obtaining and sharing medical information. This information contributes to medical decis...

How close is too close? Role of feature attributions in discovering counterfactual explanations.
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., PALIHAWADANA, C., CORSAR, D. and MARTIN, K. 2022. How close is too close? Role of feature attributions in discovering counterfactual explanations. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 33-47. Available from: https://doi.org/10.1007/978-3-031-14923-8_3

Counterfactual explanations describe how an outcome can be changed to a more desirable one. In XAI, counterfactuals are "actionable" explanations that help users to understand how model decisions can be changed by adapting features of an input. A cas...
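
As a rough illustration of a case-based counterfactual, the sketch below finds the nearest unlike neighbour (the closest case with a different outcome) and reports the feature changes it implies; the data and distance metric are illustrative assumptions only.

```python
# Minimal sketch of a case-based counterfactual: the nearest unlike neighbour
# shows which input features would need to change to flip the decision.
import numpy as np

# Hypothetical labelled case base: feature vectors and binary outcomes.
X = np.array([[0.2, 0.9], [0.8, 0.3], [0.7, 0.2]])
y = np.array([0, 1, 1])

query, query_label = np.array([0.3, 0.8]), 0

# Restrict to cases with a different outcome and pick the closest one.
unlike = np.where(y != query_label)[0]
dists = np.linalg.norm(X[unlike] - query, axis=1)
counterfactual = X[unlike[dists.argmin()]]

# The feature-wise difference indicates what to adapt to change the outcome.
print("Counterfactual:", counterfactual)
print("Required change per feature:", counterfactual - query)
```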

Empowering inquiry-based learning in short courses for professional students.
Presentation / Conference Contribution
MARTIN, K., ZARB, M., MCDERMOTT, R. and YOUNG, T. 2023. Empowering inquiry-based learning in short courses for professional students. In Chova, L.G., Martínez, C.G. and Lees, J. (eds.) Proceedings of the 17th International technology, education and development conference 2023 (INTED 2023), 6-8 March 2023, Valencia, Spain. Valencia: IATED [online], pages 5404-5409. Available from: https://doi.org/10.21125/inted.2023.1407

This paper presents the pedagogic underpinning for the development of an online postgraduate short course educating participants on multi-modal data science, specifically within the context of the digital health industry. The growing digital health s...

iSee: intelligent sharing of explanation experience of users for users.
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D. and MARTIN, K. 2023. iSee: intelligent sharing of explanation experience of users for users. In IUI '23 companion: companion proceedings of the 28th Intelligent user interfaces international conference 2023 (IUI 2023), 27-31 March 2023, Sydney, Australia. New York: ACM [online], pages 79-82. Available from: https://doi.org/10.1145/3581754.3584137

The right to obtain an explanation of the decision reached by an Artificial Intelligence (AI) model is now an EU regulation. Different stakeholders of an AI system (e.g. managers, developers, auditors, etc.) may have different background knowledge, c...

Machine learning for risk stratification of diabetic foot ulcers using biomarkers.
Presentation / Conference Contribution
MARTIN, K., UPADHYAY, A., WIJEKOON, A., WIRATUNGA, N. and MASSIE, S. 2023. Machine learning for risk stratification of diabetic foot ulcers using biomarkers. In Mikyška, J., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J. and Sloot, P.M. (eds.) Computational science: proceedings of the 23rd International conference on computational science 2023 (ICCS 2023): computing at the cutting edge of science, 3-5 July 2023, Prague, Czech Republic: [virtual event]. Lecture notes in computer science, 14075. Cham: Springer [online], part III, pages 153-161. Available from: https://doi.org/10.1007/978-3-031-36024-4_11

Development of a Diabetic Foot Ulcer (DFU) causes a sharp decline in a patient's health and quality of life. The process of risk stratification is crucial for informing the care that a patient should receive to help manage their Diabetes before an ul...
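
A minimal sketch of risk stratification framed as supervised classification over biomarker features; the synthetic data and model choice are illustrative assumptions, not the dataset or pipeline reported in the paper.

```python
# Minimal sketch of risk stratification as supervised classification; the
# synthetic biomarker matrix and random forest are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical biomarker features and binary risk labels (1 = high risk).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", model.score(X_test, y_test))
```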

iSee: intelligent sharing of explanation experiences.
Presentation / Conference Contribution
MARTIN, K., WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D., DÍAZ-AGUDO, B., RECIO-GARCÍA, J.A., CARO-MARTÍNEZ, M., BRIDGE, D., PRADEEP, P., LIRET, A. and FLEISCH, B. 2022. iSee: intelligent sharing of explanation experiences. In Reuss, P. and Schönborn, J. (eds.) Workshop proceedings of the 30th International conference on case-based reasoning (ICCBR-WS 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 231-232. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_83.pdf

The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, thus requiring different kinds of ex...

Proceedings of the 2021 SICSA explainable artificial intelligence workshop (SICSA XAI 2021).
Presentation / Conference Contribution
MARTIN, K., WIRATUNGA, N. and WIJEKOON, A. (eds.) 2021. Proceedings of the 2021 SICSA explainable artificial intelligence workshop (SICSA XAI 2021), 1 June 2021, Aberdeen, UK. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online]. Available from: https://ceur-ws.org/Vol-2894/

The SICSA Workshop 2021 was designed to present a forum for the dissemination of ideas on domains relating to the explainability of Artificial Intelligence and Machine Learning methods. The event was organised into several themed sessions: Session 1...

Evaluating a pass/fail grading model in first year undergraduate computing.
Presentation / Conference Contribution
ZARB, M., MCDERMOTT, R., MARTIN, K., YOUNG, T. and MCGOWAN, J. 2023. Evaluating a pass/fail grading model in first year undergraduate computing. In Proceedings of the 2023 IEEE (Institute of Electrical and Electronics Engineers) Frontiers in education conference (FIE 2023), 18-21 October 2023, College Station, TX, USA. Piscataway: IEEE [online], article 10343276. Available from: https://doi.org/10.1109/FIE58773.2023.10343276

This Innovative Practice Full Paper investigates the implications of implementing a Pass/Fail marking scheme within the undergraduate curriculum, specifically across first year computing modules in a Scottish Higher Education Institution. The motivat...

Clinical dialogue transcription error correction with self-supervision.
Presentation / Conference Contribution
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2023. Clinical dialogue transcription error correction with self-supervision. In Bramer, M. and Stahl, F. (eds.) Artificial intelligence XL: proceedings of the 43rd SGAI international conference on artificial intelligence (AI-2023), 12-14 December 2023, Cambridge, UK. Lecture notes in computer science, 14381. Cham: Springer [online], pages 33-46. Available from: https://doi.org/10.1007/978-3-031-47994-6_3

A clinical dialogue is a conversation between a clinician and a patient to share medical information, which is critical in clinical decision-making. The reliance on manual note-taking is highly inefficient and leads to transcription errors when digit...

Building personalised XAI experiences through iSee: a case-based reasoning-driven platform.
Presentation / Conference Contribution
CARO-MARTÍNEZ, M., LIRET, A., DÍAZ-AGUDO, B., RECIO-GARCÍA, J.A., DARIAS, J., WIRATUNGA, N., WIJEKOON, A., MARTIN, K., NKISI-ORJI, I., CORSAR, D., PALIHAWADANA, C., PIRIE, C., BRIDGE, D., PRADEEP, P. and FLEISCH, B. 2024. Building personalised XAI experiences through iSee: a case-based reasoning-driven platform. In Longo, L., Liu, W. and Montavon, G. (eds.) xAI-2024: LB/D/DC: joint proceedings of the xAI 2024 late-breaking work, demos and doctoral consortium, co-located with the 2nd World conference on eXplainable artificial intelligence (xAI 2024), 17-19 July 2024, Valletta, Malta. Aachen: CEUR-WS [online], 3793, pages 313-320. Available from: https://ceur-ws.org/Vol-3793/paper_40.pdf

Nowadays, eXplainable Artificial Intelligence (XAI) is well-known as an important field in Computer Science due to the necessity of understanding the increasing complexity of Artificial Intelligence (AI) systems or algorithms. This is the reason why...

Actionable feature discovery in counterfactuals using feature relevance explainers.
Presentation / Conference Contribution
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Actionable feature discovery in counterfactuals using feature relevance explainers. In Borck, H., Eisenstadt, V., Sánchez-Ruiz, A. and Floyd, M. (eds.) Workshop proceedings of the 29th International conference on case-based reasoning (ICCBR-WS 2021), 13-16 September 2021, [virtual event]. CEUR workshop proceedings, 3017. Aachen: CEUR-WS [online], pages 63-74. Available from: http://ceur-ws.org/Vol-3017/101.pdf

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to be able to reason with si...
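
As a rough illustration, the sketch below uses feature relevance scores to filter which feature changes a counterfactual should propose; the relevance values stand in for the output of a feature relevance explainer and are illustrative assumptions only.

```python
# Minimal sketch of filtering counterfactual feature changes by relevance; the
# relevance scores are placeholders for a feature relevance explainer's output.
import numpy as np

query = np.array([0.3, 0.8, 0.5])
counterfactual = np.array([0.7, 0.75, 0.1])

# Hypothetical relevance of each feature to the model's decision.
relevance = np.array([0.6, 0.05, 0.35])

delta = counterfactual - query

# Keep only changes to features the explainer deems relevant enough to act on.
actionable = np.where(relevance > 0.1, delta, 0.0)
print("Proposed actionable changes:", actionable)
```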