
Research Repository


Outputs (29)

Extended results for: enhancing abstract screening classification in evidence-based medicine: incorporating domain knowledge into pre-trained models. (2024)
Presentation / Conference Contribution
OFORI-BOATENG, R., ACEVES-MARTINS, M., WIRATUNGA, N. and MORENO-GARCIA, C.F. 2024. Extended results for: enhancing abstract screening classification in evidence-based medicine: incorporating domain knowledge into pre-trained models. In Martin, K., Salimi, P. and Wijayasekara, V. (eds.). Proceedings of the 2024 SICSA (Scottish Informatics and Computer Science Alliance) REALLM (Reasoning, explanation and applications of large language models) workshop (SICSA REALLM workshop 2024), 17 October 2024, Aberdeen, UK. CEUR workshop proceedings, 3822. Aachen: CEUR-WS [online], pages 11-18. Available from: https://ceur-ws.org/Vol-3822/short1.pdf

Evidence-based medicine (EBM) is a foundational element in medical research, playing a crucial role in shaping healthcare policies and clinical decision-making. However, the rigorous processes required for EBM, particularly during the abstract screen...

Enhancing abstract screening classification in evidence-based medicine: incorporating domain knowledge into pre-trained models. (2024)
Presentation / Conference Contribution
OFORI-BOATENG, R., ACEVES-MARTINS, M., WIRATUNGA, N. and MORENO-GARCIA, C.F. 2024. Enhancing abstract screening classification in evidence-based medicine: incorporating domain knowledge into pre-trained models. In Finkelstein, J., Moskovitch, R. and Parimbelli, E. (eds.) Proceedings of the 22nd Artificial intelligence in medicine international conference 2024 (AIME 2024), 9-12 July 2024, Salt Lake City, UT, USA. Lecture notes in computer science, 14844. Cham: Springer [online], part I, pages 261-272. Available from: https://doi.org/10.1007/978-3-031-66538-7_26

Evidence-based medicine (EBM) represents a cornerstone in medical research, guiding policy and decision-making. However, the robust steps involved in EBM, particularly in the abstract screening stage, present significant challenges to researchers. Nu...

Using artificial intelligence methods for systematic review in health sciences: a systematic review. (2022)
Journal Article
BLAIZOT, A., VEETTIL, S.K., SAIDOUNG, P., MORENO-GARCIA, C.F., WIRATUNGA, N., ACEVES-MARTINS, M., LAI, N.M. and CHAIYAKUNAPRUK, N. 2022. Using artificial intelligence methods for systematic review in health sciences: a systematic review. Research synthesis methods [online], 13(3), pages 353-362. Available from: https://doi.org/10.1002/jrsm.1553

The exponential increase in published articles makes a thorough and expedient review of literature increasingly challenging. This review delineated automated tools and platforms that employ artificial intelligence (AI) approaches and evaluated the re...

Similarity and explanation for dynamic telecommunication engineer support. (2021)
Thesis
MARTIN, K. 2021. Similarity and explanation for dynamic telecommunication engineer support. Robert Gordon University, PhD thesis. Hosted on OpenAIR [online]. Available from: https://doi.org/10.48526/rgu-wt-1447160

Understanding similarity between different examples is a crucial aspect of Case-Based Reasoning (CBR) systems, but learning representations optimised for similarity comparisons can be difficult. CBR systems typically rely on separate algorithms to le...

Evaluating explainability methods intended for multiple stakeholders. (2021)
Journal Article
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2021. Evaluating explainability methods intended for multiple stakeholders. KI - Künstliche Intelligenz [online], 35(3-4), pages 397-411. Available from: https://doi.org/10.1007/s13218-020-00702-6

Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanat...

DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. (2021)
Presentation / Conference Contribution
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of 33rd IEEE (Institute of Electrical and Electronics Engineers) International conference on tools with artificial intelligence 2021 (ICTAI 2021), 1-3 November 2021, Washington, USA [virtual conference]. Piscataway: IEEE [online], pages 1466-1473. Available from: https://doi.org/10.1109/ICTAI52525.2021.00233

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies tha...

Assessing the clinicians’ pathway to embed artificial intelligence for assisted diagnostics of fracture detection. (2020)
Presentation / Conference Contribution
MORENO-GARCÍA, C.F., DANG, T., MARTIN, K., PATEL, M., THOMPSON, A., LEISHMAN, L. and WIRATUNGA, N. 2020. Assessing the clinicians’ pathway to embed artificial intelligence for assisted diagnostics of fracture detection. In Bach, K., Bunescu, R., Marling, C. and Wiratunga, N. (eds.) Knowledge discovery in healthcare data 2020: proceedings of the 5th Knowledge discovery in healthcare data international workshop 2020 (KDH 2020), co-located with 24th European Artificial intelligence conference (ECAI 2020), 29-30 August 2020, [virtual conference]. CEUR workshop proceedings, 2675. Aachen: CEUR-WS [online], pages 63-70. Available from: http://ceur-ws.org/Vol-2675/paper10.pdf

Fracture detection has been a long-standing paradigm in the medical imaging community. Many algorithms and systems have been presented to accurately detect and classify images in terms of the presence and absence of fractures in different parts of the...

Failure-driven transformational case reuse of explanation strategies in CloodCBR. (2023)
Presentation / Conference Contribution
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and CORSAR, D. 2023. Failure-driven transformational case reuse of explanation strategies in CloodCBR. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 279-293. Available from: https://doi.org/10.1007/978-3-031-40177-0_18

In this paper, we propose a novel approach to improve problem-solving efficiency through the reuse of case solutions. Specifically, we introduce the concept of failure-driven transformational case reuse of explanation strategies, which involves trans...

iSee: intelligent sharing of explanation experiences. (2022)
Presentation / Conference Contribution
MARTIN, K., WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D., DÍAZ-AGUDO, B., RECIO-GARCÍA, J.A., CARO-MARTÍNEZ, M., BRIDGE, D., PRADEEP, P., LIRET, A. and FLEISCH, B. 2022. iSee: intelligent sharing of explanation experiences. In Reuss, P. and Schönborn, J. (eds.) Workshop proceedings of the 30th International conference on case-based reasoning (ICCBR-WS 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 231-232. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_83.pdf

The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, thus requiring different kinds of ex...

Introducing Clood CBR: a cloud based CBR framework. (2022)
Presentation / Conference Contribution
PALIHAWADANA, C., NKISI-ORJI, I., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2022. Introducing Clood CBR: a cloud based CBR framework. In Reuss, P. and Schönborn, J. (eds.) Workshop proceedings of the 30th International conference on case-based reasoning (ICCBR-WS 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 233-234. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_108.pdf

CBR applications have been deployed in a wide range of sectors, from pharmaceuticals to defence and aerospace, IoT and transportation, and poetry and music generation, for example. However, a majority of applications have been built using monolithi...

Machine learning for risk stratification of diabetic foot ulcers using biomarkers. (2023)
Presentation / Conference Contribution
MARTIN, K., UPADHYAY, A., WIJEKOON, A., WIRATUNGA, N. and MASSIE, S. 2023. Machine learning for risk stratification of diabetic foot ulcers using biomarkers. In Mikyška, J., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M. (eds) Computational science: proceedings of the 23rd International conference on computational science 2023 (ICCS 2023): computing at the cutting edge of science (ICCS 2023), 3-5 July 2023, Prague, Czech Republic: [virtual event]. Lecture notes in computer science, 14075. Cham: Springer [online], part III, pages 153-161. Available from: https://doi.org/10.1007/978-3-031-36024-4_11

Development of a Diabetic Foot Ulcer (DFU) causes a sharp decline in a patient's health and quality of life. The process of risk stratification is crucial for informing the care that a patient should receive to help manage their Diabetes before an ul...

iSee: intelligent sharing of explanation experience of users for users. (2023)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D. and MARTIN, K. 2023. iSee: intelligent sharing of explanation experience of users for users. In IUI '23 companion: companion proceedings of the 28th Intelligent user interfaces international conference 2023 (IUI 2023), 27-31 March 2023, Sydney, Australia. New York: ACM [online], pages 79-82. Available from: https://doi.org/10.1145/3581754.3584137

The right to obtain an explanation of the decision reached by an Artificial Intelligence (AI) model is now an EU regulation. Different stakeholders of an AI system (e.g. managers, developers, auditors, etc.) may have different background knowledge, c...

A case-based approach for content planning in data-to-text generation. (2022)
Presentation / Conference Contribution
UPADHYAY, A. and MASSIE, S. 2022. A case-based approach for content planning in data-to-text generation. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 380-394. Available from: https://doi.org/10.1007/978-3-031-14923-8_25

The problem of Data-to-Text Generation (D2T) is usually solved using a modular approach by breaking the generation process into some variant of planning and realisation phases. Traditional methods have been very good at producing high quality texts b...

How close is too close? Role of feature attributions in discovering counterfactual explanations. (2022)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., PALIHAWADANA, C., CORSAR, D. and MARTIN, K. 2022. How close is too close? Role of feature attributions in discovering counterfactual explanations. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 33-47. Available from: https://doi.org/10.1007/978-3-031-14923-8_3

Counterfactual explanations describe how an outcome can be changed to a more desirable one. In XAI, counterfactuals are "actionable" explanations that help users to understand how model decisions can be changed by adapting features of an input. A cas...

Clinical dialogue transcription error correction using Seq2Seq models. (2022)
Presentation / Conference Contribution
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2022. Clinical dialogue transcription error correction using Seq2Seq models. In Shaban-Nejad, A., Michalowski, M. and Bianco, S. (eds.) Multimodal AI in healthcare: a paradigm shift in health intelligence; selected papers from the 6th International workshop on health intelligence (W3PHIAI-22), co-located with the 34th AAAI (Association for the Advancement of Artificial Intelligence) Innovative applications of artificial intelligence (IAAI-22), 28 February - 1 March 2022, [virtual event]. Studies in computational intelligence, 1060. Cham: Springer [online], pages 41-57. Available from: https://doi.org/10.1007/978-3-031-14771-5_4

Good communication is critical to good healthcare. Clinical dialogue is a conversation between health practitioners and their patients, with the explicit goal of obtaining and sharing medical information. This information contributes to medical decis...

CBR driven interactive explainable AI. (2023)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., MARTIN, K., CORSAR, D., NKISI-ORJI, I., PALIHAWADANA, C., BRIDGE, D., PRADEEP, P., AGUDO, B.D. and CARO-MARTÍNEZ, M. 2023. CBR driven interactive explainable AI. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 169-184. Available from: https://doi.org/10.1007/978-3-031-40177-0_11

Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requi...

Actionable feature discovery in counterfactuals using feature relevance explainers. (2021)
Presentation / Conference Contribution
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Actionable feature discovery in counterfactuals using feature relevance explainers. In Borck, H., Eisenstadt, V., Sánchez-Ruiz, A. and Floyd, M. (eds.) Workshop proceedings of the 29th International conference on case-based reasoning (ICCBR-WS 2021), 13-16 September 2021, [virtual event]. CEUR workshop proceedings, 3017. Aachen: CEUR-WS [online], pages 63-74. Available from: http://ceur-ws.org/Vol-3017/101.pdf

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to be able to reason with si...

Matching networks for personalised human activity recognition. (2018)
Presentation / Conference Contribution
SANI, S., WIRATUNGA, N., MASSIE, S. and COOPER, K. 2018. Matching networks for personalised human activity recognition. In Bichindaritz, I., Guttmann, C., Herrero, P., Koch, F., Koster, A., Lenz, R., López Ibáñez, B., Marling, C., Martin, C., Montagna, S., Montani, S., Reichert, M., Riaño, D., Schumacher, M.I., ten Teije, A. and Wiratunga, N. (eds.) Proceedings of the 1st Joint workshop on artificial intelligence in health, organized as part of the Federated AI meeting (FAIM 2018), co-located with the 17th International conference on autonomous agents and multiagent systems (AAMAS 2018), the 35th International conference on machine learning (ICML 2018), the 27th International joint conference on artificial intelligence (IJCAI 2018), and the 26th International conference on case-based reasoning (ICCBR 2018), 13-19 July 2018, Stockholm, Sweden. CEUR workshop proceedings, 2142. Aachen: CEUR-WS [online], pages 61-64. Available from: http://ceur-ws.org/Vol-2142/short4.pdf

Human Activity Recognition (HAR) has many important applications in health care which include management of chronic conditions and patient rehabilitation. An important consideration when training HAR models is whether to use training data from a gene...

Counterfactual explanations for student outcome prediction with Moodle footprints. (2021)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 1, pages 1-8. Available from: http://ceur-ws.org/Vol-2894/short1.pdf

Counterfactual explanations focus on “actionable knowledge” to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose a counterfactual explainer needs to be able to reason with simila...

AGREE: a feature attribution aggregation framework to address explainer disagreements with alignment metrics. (2023)
Presentation / Conference Contribution
PIRIE, C., WIRATUNGA, N., WIJEKOON, A. and MORENO-GARCIA, C.F. 2023. AGREE: a feature attribution aggregation framework to address explainer disagreements with alignment metrics. In Malburg, L. and Verma, D. (eds.) Workshop proceedings of the 31st International conference on case-based reasoning (ICCBR-WS 2023), 17 July 2023, Aberdeen, UK. CEUR workshop proceedings, 3438. Aachen: CEUR-WS [online], pages 184-199. Available from: https://ceur-ws.org/Vol-3438/paper_14.pdf

As deep learning models become increasingly complex, practitioners are relying more on post hoc explanation methods to understand the decisions of black-box learners. However, there is growing concern about the reliability of feature attribution expl...