Professor Nirmalie Wiratunga's Outputs (79)

Context driven multi-query resolution using LLM-RAG to support the revision of explainability needs. (2025)
Presentation / Conference Contribution
JAYAWARDENA, L., LIRET, A., WIRATUNGA, N., NKISI-ORJI, I. and FLEISCH, B. [2025]. Context driven multi-query resolution using LLM-RAG to support the revision of explainability needs. In Proceedings of the 33rd International conference on case-based reasoning (ICCBR 2025), 30 June - 3 July 2025, Biarritz, France. Lecture notes in computer science, [volume to be confirmed]. Cham: Springer [online], (accepted).

The revision step in the Case-Based Reasoning (CBR) cycle ensures that cases are adaptable and that updates can be integrated meaningfully based on evaluation metrics. However, the effectiveness of this step heavily depends on how new knowledge is ac...

Understanding disagreement between humans and machines in XAI: robustness, fidelity, and region-based explanations in automatic neonatal pain assessment. (2025)
Presentation / Conference Contribution
PIRIE, C., FERREIRA, L.A., COUTRIN, G.A.S., CARLINI, L.P., MORENO-GARCÍA, C.F., BARROS, M.C.M., GUINSBURG, R., THOMAZ, C.E., NOBRE, R. and WIRATUNGA, N. [2025]. Understanding disagreement between humans and machines in XAI: robustness, fidelity, and region-based explanations in automatic neonatal pain assessment. To be presented at the 3rd World conference on eXplainable artificial intelligence 2025, 9-11 July 2025, Istanbul, Turkey.

Artificial Intelligence (AI) offers a promising approach to automating neonatal pain assessment, improving consistency and objectivity in clinical decision-making. However, differences between how humans and AI models perceive and explain pain-relate...

AlignLLM: alignment-based evaluation using ensemble of LLMs-as-judges for Q&A. (2025)
Presentation / Conference Contribution
ABEYRATNE, R., WIRATUNGA, N., MARTIN, K., NKISI-ORJI, I. and JAYAWARDENA, L. [2025]. AlignLLM: alignment-based evaluation using ensemble of LLMs-as-judges for Q&A. In Case-based reasoning research and development: proceedings of the 33rd International conference on case-based reasoning 2025 (ICCBR 2025), 30 June - 3 July 2025, Biarritz, France. Lecture notes in computer science (LNCS), TBC. Cham: Springer [online], (forthcoming).

Evaluating responses generated by large language models (LLMs) is challenging in the absence of ground-truth knowledge, particularly in specialised domains such as law. Increasingly, LLMs themselves are used to evaluate the responses they generate; h...

SCaLe-QA: Sri Lankan case law embeddings for legal QA. (2024)
Presentation / Conference Contribution
JAYAWARDENA, L., WIRATUNGA, N., ABEYRATNE, R., MARTIN, K., NKISI-ORJI, I. and WEERASINGHE, R. 2024. SCaLe-QA: Sri Lankan case law embeddings for legal QA. In Martin, K., Salimi, P. and Wijayasekara, V. (eds.) 2024. SICSA REALLM workshop 2024: proceedings of the SICSA (Scottish Informatics and Computer Science Alliance) REALLM (Reasoning, explanation and applications of large language models) workshop (SICSA REALLM workshop 2024), 17 October 2024, Aberdeen, UK. CEUR workshop proceedings, 3822. Aachen: CEUR-WS [online], pages 47-55. Available from: https://ceur-ws.org/Vol-3822/short6.pdf

SCaLe-QA is a foundational system developed for Sri Lankan Legal Question Answering (LQA) by leveraging domain-specific embeddings derived from Supreme Court cases. The system is tailored to capture the unique linguistic and structural characteristic...

Towards improving open-box hallucination detection in large language models (LLMs). (2024)
Presentation / Conference Contribution
SURESH, M., ALJUNDI, R., NKISI-ORJI, I. and WIRATUNGA, N. 2024. Towards improving open-box hallucination detection in large language models (LLMs). In Martin, K., Salimi, P. and Wijayasekara, V. (eds.) 2024. SICSA REALLM workshop 2024: proceedings of the SICSA (Scottish Informatics and Computer Science Alliance) REALLM (Reasoning, explanation and applications of large language models) workshop (SICSA REALLM workshop 2024), 17 October 2024, Aberdeen, UK. CEUR workshop proceedings, 3822. Aachen: CEUR-WS [online], pages 1-10. Available from: https://ceur-ws.org/Vol-3822/paper1.pdf

Due to the increasing availability of Large Language Models (LLMs) through both proprietary and open-sourced releases of models, the adoption of LLMs across applications has drastically increased, making them commonplace in day-to-day lives. Yet, the...

Dual-task dialogue understanding. (2024)
Presentation / Conference Contribution
ANWAR, S., WIRATUNGA, N. and SNAITH, M. 2024. Dual-task dialogue understanding. In Martin, K., Salimi, P. and Wijayasekara, V. (eds.) 2024. SICSA REALLM workshop 2024: proceedings of the SICSA (Scottish Informatics and Computer Science Alliance) REALLM (Reasoning, explanation and applications of large language models) workshop (SICSA REALLM workshop 2024), 17 October 2024, Aberdeen, UK. CEUR workshop proceedings, 3822. Aachen: CEUR-WS [online], pages 40-46. Available from: https://ceur-ws.org/Vol-3822/short5.pdf

In dialogue systems, utterances do not occur in isolation. One conversation might involve interactions between several speakers. It's crucial to determine the intentions behind utterances in multi-party conversations when more than two interlocutors...

Extended results for: enhancing abstract screening classification in evidence-based medicine: incorporating domain knowledge into pre-trained models. (2024)
Presentation / Conference Contribution
OFORI-BOATENG, R., ACEVES-MARTINS, M., WIRATUNGA, N. and MORENO-GARCIA, C.F. 2024. Extended results for: enhancing abstract screening classification in evidence-based medicine: incorporating domain knowledge into pre-trained models. In Martin, K., Salimi, P. and Wijayasekara, V. (eds.). Proceedings of the 2024 SICSA (Scottish Informatics and Computer Science Alliance) REALLM (Reasoning, explanation and applications of large language models) workshop (SICSA REALLM workshop 2024), 17 October 2024, Aberdeen, UK. CEUR workshop proceedings, 3822. Aachen: CEUR-WS [online], pages 11-18. Available from: https://ceur-ws.org/Vol-3822/short1.pdf

Evidence-based medicine (EBM) is a foundational element in medical research, playing a crucial role in shaping healthcare policies and clinical decision-making. However, the rigorous processes required for EBM, particularly during the abstract screen...

iSee: advancing multi-shot explainable AI using case-based recommendations. (2024)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., CORSAR, D., MARTIN, K., NKISI-ORJI, I., PALIHAWADANA, C., CARO-MARTÍNEZ, M., DÍAZ-AGUDO, B., BRIDGE, D. and LIRET, A. 2024. iSee: advancing multi-shot explainable AI using case-based recommendations. In Endriss, U., Melo, F.S., Bach, K., et al. (eds.) ECAI 2024: proceedings of the 27th European conference on artificial intelligence, co-located with the 13th conference on Prestigious applications of intelligent systems (PAIS 2024), 19–24 October 2024, Santiago de Compostela, Spain. Frontiers in artificial intelligence and applications, 392. Amsterdam: IOS Press [online], pages 4626-4633. Available from: https://doi.org/10.3233/FAIA241057

Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Recent findings suggest that a single explainer may not meet the diverse needs of multiple users in an AI system; indeed, even individual u...

Building personalised XAI experiences through iSee: a case-based reasoning-driven platform. (2024)
Presentation / Conference Contribution
CARO-MARTÍNEZ, M., LIRET, A., DÍAZ-AGUDO, B., RECIO-GARCÍA, J.A., DARIAS, J., WIRATUNGA, N., WIJEKOON, A., MARTIN, K., NKISI-ORJI, I., CORSAR, D., PALIHAWADANA, C., PIRIE, C., BRIDGE, D., PRADEEP, P. and FLEISCH, B. 2024. Building personalised XAI experiences through iSee: a case-based reasoning-driven platform. In Longo, L., Liu, W. and Montavon, G. (eds.) xAI-2024: LB/D/DC: joint proceedings of the xAI 2024 late-breaking work, demos and doctoral consortium, co-located with the 2nd World conference on eXplainable artificial intelligence (xAI 2024), 17-19 July 2024, Valletta, Malta. Aachen: CEUR-WS [online], 3793, pages 313-320. Available from: https://ceur-ws.org/Vol-3793/paper_40.pdf

Nowadays, eXplainable Artificial Intelligence (XAI) is well-known as an important field in Computer Science due to the necessity of understanding the increasing complexity of Artificial Intelligence (AI) systems or algorithms. This is the reason why...

Enhancing abstract screening classification in evidence-based medicine: incorporating domain knowledge into pre-trained models. (2024)
Presentation / Conference Contribution
OFORI-BOATENG, R., ACEVES-MARTINS, M., WIRATUNGA, N. and MORENO-GARCIA, C.F. 2024. Enhancing abstract screening classification in evidence-based medicine: incorporating domain knowledge into pre-trained models. In Finkelstein, J., Moskovitch, R. and Parimbelli, E. (eds.) Proceedings of the 22nd Artificial intelligence in medicine international conference 2024 (AIME 2024), 9-12 July 2024, Salt Lake City, UT, USA. Lecture notes in computer science, 14844. Cham: Springer [online], part I, pages 261-272. Available from: https://doi.org/10.1007/978-3-031-66538-7_26

Evidence-based medicine (EBM) represents a cornerstone in medical research, guiding policy and decision-making. However, the robust steps involved in EBM, particularly in the abstract screening stage, present significant challenges to researchers. Nu...

A zero-shot monolingual dual stage information retrieval system for Spanish biomedical systematic literature reviews. (2024)
Presentation / Conference Contribution
OFORI-BOATENG, R., ACEVES-MARTINS, M., WIRATUNGA, N. and MORENO-GARCIA, C. 2024. A zero-shot monolingual dual stage information retrieval system for Spanish biomedical systematic literature reviews. In Duh, K., Gomez, H. and Bethard, S. (eds.) Proceedings of the 2024 North American Chapter of the Association for Computational Linguistics conference (NAACL 2024): human language technologies, 16-21 June 2024, Mexico City, Mexico. Stroudsburg, PA: ACL [online], volume 1: long papers, pages 3725-3736. Available from: https://doi.org/10.18653/v1/2024.naacl-long.206

Systematic Reviews (SRs) are foundational in healthcare for synthesising evidence to inform clinical practices. Traditionally skewed towards English-language databases, SRs often exclude significant research in other languages, leading to potential b...

CBR-RAG: case-based reasoning for retrieval augmented generation in LLMs for legal question answering. (2024)
Presentation / Conference Contribution
WIRATUNGA, N., ABEYRATNE, R., JAYAWARDENA, L., MARTIN, K., MASSIE, S., NKISI-ORJI, I., WEERASINGHE, R., LIRET, A. and FLEISCH, B. 2024. CBR-RAG: case-based reasoning for retrieval augmented generation in LLMs for legal question answering. In Recio-Garcia, J.A., Orozco-del-Castillo, M.G. and Bridge, D. (eds.) Case-based reasoning research and development: proceedings of the 32nd International conference on case-based reasoning 2024 (ICCBR 2024), 1-4 July 2024, Merida, Mexico. Lecture notes in computer science, 14775. Cham: Springer [online], pages 445-460. Available from: https://doi.org/10.1007/978-3-031-63646-2_29

Retrieval-Augmented Generation (RAG) enhances Large Language Model (LLM) output by providing prior knowledge as context to input. This is beneficial for knowledge-intensive and expert reliant tasks, including legal question-answering, which require e...

Mitigating gradient inversion attacks in federated learning with frequency transformation. (2024)
Presentation / Conference Contribution
PALIHAWADANA, C., WIRATUNGA, N., KALUTARAGE, H. and WIJEKOON, A. 2024. Mitigating gradient inversion attacks in federated learning with frequency transformation. In Katsikas, S. et al. (eds.) Computer security: revised selected papers from the proceedings of the International workshops of the 28th European symposium on research in computer security (ESORICS 2023 International Workshops), 25-29 September 2023, The Hague, Netherlands. Lecture notes in computer science, 14399. Cham: Springer [online], part II, pages 750-760. Available from: https://doi.org/10.1007/978-3-031-54129-2_44

Centralised machine learning approaches have raised concerns regarding the privacy of client data. To address this issue, privacy-preserving techniques such as Federated Learning (FL) have emerged, where only updated gradients are communicated instea...

Clinical dialogue transcription error correction with self-supervision. (2023)
Presentation / Conference Contribution
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2023. Clinical dialogue transcription error correction with self-supervision. In Bramer, M. and Stahl, F. (eds.) Artificial intelligence XL: proceedings of the 43rd SGAI international conference on artificial intelligence (AI-2023), 12-14 December 2023, Cambridge, UK. Lecture notes in computer science, 14381. Cham: Springer [online], pages 33-46. Available from: https://doi.org/10.1007/978-3-031-47994-6_3

A clinical dialogue is a conversation between a clinician and a patient to share medical information, which is critical in clinical decision-making. The reliance on manual note-taking is highly inefficient and leads to transcription errors when digit...

Towards feasible counterfactual explanations: a taxonomy guided template-based NLG method. (2023)
Presentation / Conference Contribution
SALIMI, P., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2023. Towards feasible counterfactual explanations: a taxonomy guided template-based NLG method. In Gal, K., Nowé, A., Nalepa, G.J., Fairstein, R. and Rădulescu, R. (eds.) ECAI 2023: proceedings of the 26th European conference on artificial intelligence (ECAI 2023), 30 September - 4 October 2023, Kraków, Poland. Frontiers in artificial intelligence and applications, 372. Amsterdam: IOS Press [online], pages 2057-2064. Available from: https://doi.org/10.3233/FAIA230499

Counterfactual Explanations (cf-XAI) describe the smallest changes in feature values necessary to change an outcome from one class to another. However, many cf-XAI methods neglect the feasibility of those changes. In this paper, we introduce a novel...

Proceedings of the 6th International workshop on knowledge discovery from healthcare data (KDH@IJCAI 2023). (2023)
Presentation / Conference Contribution
IBRAHIM, Z., WU, H. and WIRATUNGA, N. (eds.) 2023. Proceedings of the 6th International workshop on knowledge discovery from healthcare data (KDH@IJCAI 2023), co-located with the 32nd International joint conference on artificial intelligence (IJCAI 2023), 20 August 2023, Macao, China. CEUR workshop proceedings, 3479. Aachen: CEUR-WS [online]. Available from: https://ceur-ws.org/Vol-3479/

This workshop is centred around novel AI methodologies that aim to solve some of the grand challenges associated with medical data. Held in conjunction with the International Joint Conference on Artificial Intelligence (IJCAI 2023), this year's works...

Evaluation of attention-based LSTM and Bi-LSTM networks for abstract text classification in systematic literature review automation. (2023)
Presentation / Conference Contribution
OFORI-BOATENG, R., ACEVES-MARTINS, M., JAYNE, C., WIRATUNGA, N. and MORENO-GARCIA, C.F. 2023. Evaluation of attention-based LSTM and Bi-LSTM networks for abstract text classification in systematic literature review automation. Procedia computer science [online], 222: selected papers from the 2023 International Neural Network Society workshop on deep learning innovations and applications (INNS DLIA 2023), co-located with the 2023 International joint conference on neural networks (IJCNN), 18-23 June 2023, Gold Coast, Australia, pages 114-126. Available from: https://doi.org/10.1016/j.procs.2023.08.149

Systematic Review (SR) presents the highest form of evidence in research for decision and policy-making. Nonetheless, the structured steps involved in carrying out SRs make it demanding for reviewers. Many studies have projected the abstract screenin...

CBR driven interactive explainable AI. (2023)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., MARTIN, K., CORSAR, D., NKISI-ORJI, I., PALIHAWADANA, C., BRIDGE, D., PRADEEP, P., AGUDO, B.D. and CARO-MARTÍNEZ, M. 2023. CBR driven interactive explainable AI. In Massie, S. and Chakraborti, S. (eds.) 2023. Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 169-184. Available from: https://doi.org/10.1007/978-3-031-40177-0_11

Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requi...

Failure-driven transformational case reuse of explanation strategies in CloodCBR. (2023)
Presentation / Conference Contribution
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and CORSAR, D. 2023. Failure-driven transformational case reuse of explanation strategies in CloodCBR. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 279-293. Available from: https://doi.org/10.1007/978-3-031-40177-0_18

In this paper, we propose a novel approach to improve problem-solving efficiency through the reuse of case solutions. Specifically, we introduce the concept of failure-driven transformational case reuse of explanation strategies, which involves trans...

AGREE: a feature attribution aggregation framework to address explainer disagreements with alignment metrics. (2023)
Presentation / Conference Contribution
PIRIE, C., WIRATUNGA, N., WIJEKOON, A. and MORENO-GARCIA, C.F. 2023. AGREE: a feature attribution aggregation framework to address explainer disagreements with alignment metrics. In Malburg, L. and Verma, D. (eds.) Workshop proceedings of the 31st International conference on case-based reasoning (ICCBR-WS 2023), 17 July 2023, Aberdeen, UK. CEUR workshop proceedings, 3438. Aachen: CEUR-WS [online], pages 184-199. Available from: https://ceur-ws.org/Vol-3438/paper_14.pdf

As deep learning models become increasingly complex, practitioners are relying more on post hoc explanation methods to understand the decisions of black-box learners. However, there is growing concern about the reliability of feature attribution expl...