Research Repository

Towards feasible counterfactual explanations: a taxonomy guided template-based NLG method. (2023)
Conference Proceeding
SALIMI, P., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2023. Towards feasible counterfactual explanations: a taxonomy guided template-based NLG method. In Gal, K., Nowé, A., Nalepa, G.J., Fairstein, R. and Rădulescu, R. (eds.) ECAI 2023: proceedings of the 26th European conference on artificial intelligence (ECAI 2023), 30 September - 4 October 2023, Kraków, Poland. Frontiers in artificial intelligence and applications, 372. Amsterdam: IOS Press [online], pages 2057-2064. Available from: https://doi.org/10.3233/FAIA230499

Counterfactual Explanations (cf-XAI) describe the smallest changes in feature values necessary to change an outcome from one class to another. However, many cf-XAI methods neglect the feasibility of those changes. In this paper, we introduce a novel...

CBR driven interactive explainable AI. (2023)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., MARTIN, K., CORSAR, D., NKISI-ORJI, I., PALIHAWADANA, C., BRIDGE, D., PRADEEP, P., AGUDO, B.D. and CARO-MARTÍNEZ, M. 2023. CBR driven interactive explainable AI. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 169-184. Available from: https://doi.org/10.1007/978-3-031-40177-0_11

Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requi...

Failure-driven transformational case reuse of explanation strategies in CloodCBR. (2023)
Conference Proceeding
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and CORSAR, D. 2023. Failure-driven transformational case reuse of explanation strategies in CloodCBR. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 279-293. Available from: https://doi.org/10.1007/978-3-031-40177-0_18

In this paper, we propose a novel approach to improve problem-solving efficiency through the reuse of case solutions. Specifically, we introduce the concept of failure-driven transformational case reuse of explanation strategies, which involves trans...

A user-centred evaluation of DisCERN: discovering counterfactuals for code vulnerability detection and correction. (2023)
Journal Article
WIJEKOON, A. and WIRATUNGA, N. 2023. A user-centred evaluation of DisCERN: discovering counterfactuals for code vulnerability detection and correction. Knowledge-based systems [online], 278, article 110830. Available from: https://doi.org/10.1016/j.knosys.2023.110830

Counterfactual explanations highlight actionable knowledge which helps to understand how a machine learning model outcome could be altered to a more favourable outcome. Understanding actionable corrections in source code analysis can be critical to p...

AGREE: a feature attribution aggregation framework to address explainer disagreements with alignment metrics. (2023)
Conference Proceeding
PIRIE, C., WIRATUNGA, N., WIJEKOON, A. and MORENO-GARCIA, C.F. 2023. AGREE: a feature attribution aggregation framework to address explainer disagreements with alignment metrics. In Malburg, L. and Verma, D. (eds.) Proceedings of the 31st International conference on case-based reasoning workshops (ICCBR-WS 2023), co-located with the 31st International conference on case-based reasoning (ICCBR 2023), 17 July 2023, Aberdeen, UK. CEUR workshop proceedings, 3438. Aachen: CEUR-WS [online], pages 184-199. Available from: https://ceur-ws.org/Vol-3438/paper_14.pdf

As deep learning models become increasingly complex, practitioners are relying more on post hoc explanation methods to understand the decisions of black-box learners. However, there is growing concern about the reliability of feature attribution expl...

Machine learning for risk stratification of diabetic foot ulcers using biomarkers. (2023)
Conference Proceeding
MARTIN, K., UPADHYAY, A., WIJEKOON, A., WIRATUNGA, N. and MASSIE, S. [2023]. Machine learning for risk stratification of diabetic foot ulcers using biomarkers. To be presented at the 2023 International conference on computational science (ICCS 2023): computing at the cutting edge of science, 3-5 July 2023, Prague, Czech Republic: [virtual event].

Development of a Diabetic Foot Ulcer (DFU) causes a sharp decline in a patient's health and quality of life. The process of risk stratification is crucial for informing the care that a patient should receive to help manage their diabetes before an ul...

Introducing Clood CBR: a cloud based CBR framework. (2023)
Conference Proceeding
PALIHAWADANA, C., NKISI-ORJI, I., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2022. Introducing Clood CBR: a cloud based CBR framework. In Reuss, P. and Schönborn, J. (eds.) ICCBR-WS 2022: proceedings of the 30th International conference on case-based reasoning workshops 2022 (ICCBR-WS 2022), co-located with the 30th International conference on case-based reasoning 2022 (ICCBR 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 233-234. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_108.pdf

CBR applications have been deployed in a wide range of sectors, from pharmaceuticals to defence and aerospace, IoT and transportation, and even poetry and music generation. However, a majority of applications have been built using monolithi...

iSee: intelligent sharing of explanation experiences. (2023)
Conference Proceeding
MARTIN, K., WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D., DÍAZ-AGUDO, B., RECIO-GARCÍA, J.A., CARO-MARTÍNEZ, M., BRIDGE, D., PRADEEP, P., LIRET, A. and FLEISCH, B. 2022. iSee: intelligent sharing of explanation experiences. In Reuss, P. and Schönborn, J. (eds.) ICCBR-WS 2022: proceedings of the 30th International conference on case-based reasoning workshops 2022 (ICCBR-WS 2022), co-located with the 30th International conference on case-based reasoning 2022 (ICCBR 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 231-232. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_83.pdf

The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, thus requiring different kinds of ex...

iSee: intelligent sharing of explanation experience of users for users. (2023)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D. and MARTIN, K. 2023. iSee: intelligent sharing of explanation experience of users for users. In IUI '23 companion: companion proceedings of the 28th Intelligent user interfaces international conference 2023 (IUI 2023), 27-31 March 2023, Sydney, Australia. New York: ACM [online], pages 79-82. Available from: https://doi.org/10.1145/3581754.3584137

The right to obtain an explanation of the decision reached by an Artificial Intelligence (AI) model is now an EU regulation. Different stakeholders of an AI system (e.g. managers, developers, auditors, etc.) may have different background knowledge, c...

Clinical dialogue transcription error correction using Seq2Seq models. (2022)
Conference Proceeding
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2022. Clinical dialogue transcription error correction using Seq2Seq models. In Shaban-Nejad, A., Michalowski, M. and Bianco, S. (eds.) Multimodal AI in healthcare: a paradigm shift in health intelligence; selected papers from the 6th International workshop on health intelligence (W3PHIAI-22), co-located with the 34th AAAI (Association for the Advancement of Artificial Intelligence) Innovative applications of artificial intelligence (IAAI-22), 28 February - 1 March 2022, [virtual event]. Studies in computational intelligence, 1060. Cham: Springer [online], pages 41-57. Available from: https://doi.org/10.1007/978-3-031-14771-5_4

Good communication is critical to good healthcare. Clinical dialogue is a conversation between health practitioners and their patients, with the explicit goal of obtaining and sharing medical information. This information contributes to medical decis...

Adapting semantic similarity methods for case-based reasoning in the Cloud. (2022)
Conference Proceeding
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2022. Adapting semantic similarity methods for case-based reasoning in the Cloud. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 125-139. Available from: https://doi.org/10.1007/978-3-031-14923-8_9

CLOOD is a cloud-based CBR framework based on a microservices architecture, which facilitates the design and deployment of case-based reasoning applications of various sizes. This paper presents advances to the similarity module of CLOOD through the...

DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of 33rd IEEE (Institute of Electrical and Electronics Engineers) International conference on tools with artificial intelligence 2021 (ICTAI 2021), 1-3 November 2021, Washington, USA [virtual conference]. Piscataway: IEEE [online], pages 1466-1473. Available from: https://doi.org/10.1109/ICTAI52525.2021.00233

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies tha...

FedSim: similarity guided model aggregation for federated learning. (2021)
Journal Article
PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and KALUTARAGE, H. 2022. FedSim: similarity guided model aggregation for federated learning. Neurocomputing [online], 483: distributed machine learning, optimization and applications, pages 432-445. Available from: https://doi.org/10.1016/j.neucom.2021.08.141

Federated Learning (FL) is a distributed machine learning approach in which clients contribute to learning a global model in a privacy-preserving manner. Effective aggregation of client models is essential to create a generalised global model. To what...

Autonomous CPSoS for cognitive large manufacturing industries. (2021)
Conference Proceeding
SANTOFIMIA, M.J., VILLANUEVA, F.J., CABA, J., FERNANDEZ-BERMEJO, J., DEL TORO, X., WIRATUNGA, N., TRAPERO, J.R., RUBIO, A., SALVADORI, C. and LOPEZ, J.C. 2021. Autonomous CPSoS for cognitive large manufacturing industries. In Proceedings of 47th Institute of Electrical and Electronics Engineers (IEEE) Industrial Electronics Society annual conference 2021 (IECON 2021), 13-16 October 2021, [virtual conference]. Piscataway: IEEE [online], article 9589159. Available from: https://doi.org/10.1109/IECON48115.2021.9589159

The general aim of a cognitive Cyber Physical System of Systems (CPSoS) is to provide managed access to data in a smart fashion such that sensing and actuation capabilities are connected. Whilst there is significant funding and research devoted to th...

Effectiveness of app-delivered, tailored self-management support for adults with lower back pain-related disability: a selfBACK randomized clinical trial. [Dataset] (2021)
Dataset
SANDAL, L.F., BACH, K., ØVERÅS, C.K., WIRATUNGA, N., COOPER, K., et al. 2021. Effectiveness of app-delivered, tailored self-management support for adults with lower back pain-related disability: a selfBACK randomized clinical trial. [Dataset]. JAMA internal medicine [online], 181(10), pages 1288-1296. Available from: https://jamanetwork.com/journals/jamainternalmedicine/fullarticle/2782459#supplemental-tab

SELFBACK is an evidence-based decision support system that supports self-management of nonspecific low back pain. Specifically, SELFBACK provides the user with evidence-based advice on physical activity level, strength/flexibility exercises, and educ...

Effectiveness of app-delivered, tailored self-management support for adults with lower back pain-related disability: a selfBACK randomized clinical trial. (2021)
Journal Article
SANDAL, L.F., BACH, K., ØVERÅS, C.K., WIRATUNGA, N., COOPER, K., et al. 2021. Effectiveness of app-delivered, tailored self-management support for adults with lower back pain-related disability: a selfBACK randomized clinical trial. JAMA internal medicine [online], 181(10), pages 1288-1296. Available from: https://doi.org/10.1001/jamainternmed.2021.4097

Importance: Lower back pain (LBP) is a prevalent and challenging condition in primary care. The effectiveness of an individually tailored self-management support tool delivered via a smartphone app has not been rigorously tested. Objective: To invest...

Counterfactual explanations for student outcome prediction with Moodle footprints. (2021)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 1, pages 1-8. Available from: http://ceur-ws.org/Vol-2894/short1.pdf

Counterfactual explanations focus on "actionable knowledge" to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose a counterfactual explainer needs to be able to reason with simila...

Personalised exercise recognition towards improved self-management of musculoskeletal disorders. (2021)
Thesis
WIJEKOON, A. 2021. Personalised exercise recognition towards improved self-management of musculoskeletal disorders. Robert Gordon University, PhD thesis. Hosted on OpenAIR [online]. Available from: https://doi.org/10.48526/rgu-wt-1358224

Musculoskeletal Disorders (MSD) have been the primary contributor to the global disease burden, with increased years lived with disability. Such chronic conditions require self-management, typically in the form of maintaining an active lifestyle whil...

Evaluating explainability methods intended for multiple stakeholders. (2021)
Journal Article
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2021. Evaluating explainability methods intended for multiple stakeholders. KI - Künstliche Intelligenz [online], 35(3-4), pages 397-411. Available from: https://doi.org/10.1007/s13218-020-00702-6

Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanat...

Personalised meta-learning for human activity recognition with few-data. (2020)
Conference Proceeding
WIJEKOON, A. and WIRATUNGA, N. 2020. Personalised meta-learning for human activity recognition with few-data. In Bramer, M. and Ellis, R. (eds.) Artificial intelligence XXXVII: proceedings of 40th British Computer Society's Specialist Group on Artificial Intelligence (SGAI) Artificial intelligence international conference 2020 (AI-2020), 15-17 December 2020, [virtual conference]. Lecture notes in artificial intelligence, 12498. Cham: Springer [online], pages 79-93. Available from: https://doi.org/10.1007/978-3-030-63799-6_6

State-of-the-art methods of Human Activity Recognition (HAR) rely on a considerable amount of labelled data to train deep architectures. This becomes prohibitive when tasked with creating models that are sensitive to personal nuances in human movement...