Professor Nirmalie Wiratunga n.wiratunga@rgu.ac.uk
Associate Dean for Research
Dr Kyle Martin k.martin3@rgu.ac.uk
Lecturer
Dr Anjana Wijekoon
Dr David Corsar d.corsar1@rgu.ac.uk
Senior Lecturer
iSee: advancing multi-shot explainable AI using case-based recommendations. (2024)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., CORSAR, D., MARTIN, K., NKISI-ORJI, I., PALIHAWADANA, C., CARO-MARTÍNEZ, M., DÍAZ-AGUDO, B., BRIDGE, D. and LIRET, A. 2024. iSee: advancing multi-shot explainable AI using case-based recommendations. In Endriss, U., Melo, F.S., Bach, K., et al. (eds.) ECAI 2024: proceedings of the 27th European conference on artificial intelligence, co-located with the 13th conference on Prestigious applications of intelligent systems (PAIS 2024), 19–24 October 2024, Santiago de Compostela, Spain. Frontiers in artificial intelligence and applications, 392. Amsterdam: IOS Press [online], pages 4626-4633. Available from: https://doi.org/10.3233/FAIA241057
Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Recent findings suggest that a single explainer may not meet the diverse needs of multiple users in an AI system; indeed, even individual u...
Building personalised XAI experiences through iSee: a case-based reasoning-driven platform. (2024)
Presentation / Conference Contribution
CARO-MARTÍNEZ, M., LIRET, A., DÍAZ-AGUDO, B., RECIO-GARCÍA, J.A., DARIAS, J., WIRATUNGA, N., WIJEKOON, A., MARTIN, K., NKISI-ORJI, I., CORSAR, D., PALIHAWADANA, C., PIRIE, C., BRIDGE, D., PRADEEP, P. and FLEISCH, B. 2024. Building personalised XAI experiences through iSee: a case-based reasoning-driven platform. In Longo, L., Liu, W. and Montavon, G. (eds.) xAI-2024: LB/D/DC: joint proceedings of the xAI 2024 late-breaking work, demos and doctoral consortium, co-located with the 2nd World conference on eXplainable artificial intelligence (xAI 2024), 17-19 July 2024, Valletta, Malta. Aachen: CEUR-WS [online], 3793, pages 313-320. Available from: https://ceur-ws.org/Vol-3793/paper_40.pdf
Nowadays, eXplainable Artificial Intelligence (XAI) is well-known as an important field in Computer Science due to the necessity of understanding the increasing complexity of Artificial Intelligence (AI) systems or algorithms. This is the reason why...
iSee: a case-based reasoning platform for the design of explanation experiences. (2024)
Journal Article
CARO-MARTÍNEZ, M., RECIO-GARCÍA, J.A., DÍAZ-AGUDO, B., DARIAS, J.M., WIRATUNGA, N., MARTIN, K., WIJEKOON, A., NKISI-ORJI, I., CORSAR, D., PRADEEP, P., BRIDGE, D. and LIRET, A. 2024. iSee: a case-based reasoning platform for the design of explanation experiences. Knowledge-based systems [online], 302, article number 112305. Available from: https://doi.org/10.1016/j.knosys.2024.112305
Explainable Artificial Intelligence (XAI) is an emerging field within Artificial Intelligence (AI) that has provided many methods that enable humans to understand and interpret the outcomes of AI systems. However, deciding on the best explanation app...
A practical exploration of the convergence of case-based reasoning and explainable artificial intelligence. (2024)
Journal Article
PRADEEP, P., CARO-MARTÍNEZ, M. and WIJEKOON, A. 2024. A practical exploration of the convergence of case-based reasoning and explainable artificial intelligence. Expert systems with applications [online], 255(part D), article number 124733. Available from: https://doi.org/10.1016/j.eswa.2024.124733
As Artificial Intelligence (AI) systems become increasingly complex, ensuring their decisions are transparent and understandable to users has become paramount. This paper explores the integration of Case-Based Reasoning (CBR) with Explainable Artific...
Towards feasible counterfactual explanations: a taxonomy guided template-based NLG method. (2023)
Presentation / Conference Contribution
SALIMI, P., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2023. Towards feasible counterfactual explanations: a taxonomy guided template-based NLG method. In Gal, K., Nowé, A., Nalepa, G.J., Fairstein, R. and Rădulescu, R. (eds.) ECAI 2023: proceedings of the 26th European conference on artificial intelligence (ECAI 2023), 30 September - 4 October 2023, Kraków, Poland. Frontiers in artificial intelligence and applications, 372. Amsterdam: IOS Press [online], pages 2057-2064. Available from: https://doi.org/10.3233/FAIA230499
Counterfactual Explanations (cf-XAI) describe the smallest changes in feature values necessary to change an outcome from one class to another. However, many cf-XAI methods neglect the feasibility of those changes. In this paper, we introduce a novel...
CBR driven interactive explainable AI. (2023)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., MARTIN, K., CORSAR, D., NKISI-ORJI, I., PALIHAWADANA, C., BRIDGE, D., PRADEEP, P., DÍAZ-AGUDO, B. and CARO-MARTÍNEZ, M. 2023. CBR driven interactive explainable AI. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 169-184. Available from: https://doi.org/10.1007/978-3-031-40177-0_11
Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requi...
Failure-driven transformational case reuse of explanation strategies in CloodCBR. (2023)
Presentation / Conference Contribution
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and CORSAR, D. 2023. Failure-driven transformational case reuse of explanation strategies in CloodCBR. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 279-293. Available from: https://doi.org/10.1007/978-3-031-40177-0_18
In this paper, we propose a novel approach to improve problem-solving efficiency through the reuse of case solutions. Specifically, we introduce the concept of failure-driven transformational case reuse of explanation strategies, which involves trans...
A user-centred evaluation of DisCERN: discovering counterfactuals for code vulnerability detection and correction. (2023)
Journal Article
WIJEKOON, A. and WIRATUNGA, N. 2023. A user-centred evaluation of DisCERN: discovering counterfactuals for code vulnerability detection and correction. Knowledge-based systems [online], 278, article number 110830. Available from: https://doi.org/10.1016/j.knosys.2023.110830
Counterfactual explanations highlight actionable knowledge which helps to understand how a machine learning model outcome could be altered to a more favourable outcome. Understanding actionable corrections in source code analysis can be critical to p...
The current and future role of visual question answering in eXplainable artificial intelligence. (2023)
Presentation / Conference Contribution
CARO-MARTINEZ, M., WIJEKOON, A., DIAZ-AGUDO, B. and RECIO-GARCIA, J.A. 2023. The current and future role of visual question answering in eXplainable artificial intelligence. In Malburg, L. and Verma, D. (eds.) Workshop proceedings of the 31st International conference on case-based reasoning (ICCBR-WS 2023), 17 July 2023, Aberdeen, UK. CEUR workshop proceedings, 3438. Aachen: CEUR-WS [online], pages 172-183. Available from: https://ceur-ws.org/Vol-3438/paper_13.pdf
Over the last few years, we have seen how the interest of the computer science research community in eXplainable Artificial Intelligence has grown by leaps and bounds. The reason behind this rise is the use of Artificial Intelligence in many daily li...
Conceptual modelling of explanation experiences through the iSeeonto ontology. (2023)
Presentation / Conference Contribution
CARO-MARTÍNEZ, M., WIJEKOON, A., RECIO-GARCÍA, J.A., CORSAR, D. and NKISI-ORJI, I. 2022. Conceptual modelling of explanation experiences through the iSeeonto ontology. In Reuss, P. and Schönborn, J. (eds.) Workshop proceedings of the 30th International conference on case-based reasoning (ICCBR-WS 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 117-128. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_86.pdf
Explainable Artificial Intelligence is a big research field required in many situations where we need to understand Artificial Intelligence behaviour. However, each explanation need is unique which makes it difficult to apply explanation techniques a...