Research Repository

Failure-driven transformational case reuse of explanation strategies in CloodCBR. (2023)
Presentation / Conference
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and CORSAR, D. 2023. Failure-driven transformational case reuse of explanation strategies in CloodCBR. In Proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR-2023): CBR in a data-driven world, 17-20 July 2023, Aberdeen, UK. Aberdeen: ICCBR [online], paper 69. Available from: https://delegate.iccbr2023.org/res/paper_69.pdf

In this paper, we propose a novel approach to improve problem-solving efficiency through the reuse of case solutions. Specifically, we introduce the concept of failure-driven transformational case reuse of explanation strategies, which involves trans...

CBR driven interactive explainable AI. (2023)
Presentation / Conference
WIJEKOON, A., WIRATUNGA, N., MARTIN, K., CORSAR, D., NKISI-ORJI, I., PALIHAWADANA, C., BRIDGE, D., PRADEEP, P., AGUDO, B.D. and CARO-MARTÍNEZ, M. [2023]. CBR driven interactive explainable AI. In Proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023): CBR in a data-driven world, 17-20 July 2023, Aberdeen, UK. Aberdeen: ICCBR [online], paper 70. Available from: https://delegate.iccbr2023.org/res/paper_70.pdf

Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requi...

Towards feasible counterfactual explanations: a taxonomy guided template-based NLG method. (2023)
Conference Proceeding
SALIMI, P., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2023. Towards feasible counterfactual explanations: a taxonomy guided template-based NLG method. To be presented at the 26th European conference on artificial intelligence 2023 (ECAI-2023), 30 September - 5 October 2023, Kraków, Poland.

Counterfactual Explanations (cf-XAI) describe the smallest changes in feature values necessary to change an outcome from one class to another. However, presently many cf-XAI methods neglect the feasibility of those changes. In this paper, we introduc...
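As a rough, hypothetical illustration of template-based NLG for counterfactuals (the paper's taxonomy, templates and feasibility checks are not reproduced here), a fixed sentence template can be filled from the feature changes a counterfactual proposes:

```python
# Illustrative sketch only: turn the feature changes proposed by a counterfactual
# into a sentence via a fixed template. Feature names, wording and the
# "actionable" set are hypothetical examples, not the paper's taxonomy or templates.

def verbalise_counterfactual(original, counterfactual, actionable_features):
    """Render the differences between two feature dictionaries as text."""
    changes = []
    for feature, new_value in counterfactual.items():
        old_value = original[feature]
        if new_value != old_value:
            note = "" if feature in actionable_features else " (may not be feasible to change)"
            changes.append(f"change {feature} from {old_value} to {new_value}{note}")
    if not changes:
        return "No changes are needed; the outcome is already the desired one."
    return "To obtain the desired outcome, " + "; ".join(changes) + "."

original = {"income": 30000, "age": 45, "savings": 500}
counterfactual = {"income": 38000, "age": 45, "savings": 2000}
print(verbalise_counterfactual(original, counterfactual, actionable_features={"income", "savings"}))
```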

Introducing Clood CBR: a cloud based CBR framework. (2023)
Conference Proceeding
PALIHAWADANA, C., NKISI-ORJI, I., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2022. Introducing Clood CBR: a cloud based CBR framework. In Reuss, P. and Schönborn, J. (eds.) ICCBR-WS 2022: proceedings of the 30th International conference on Case-based reasoning workshops 2022 (ICCBR-WS 2022) co-located with the 30th International conference on Case-based reasoning 2022 (ICCBR 2022), 12-15 September 2022, Nancy, France. Aachen: CEUR workshop proceedings [online], 3389, pages 233-234. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_108.pdf

CBR applications have been deployed in a wide range of sectors: from pharmaceuticals, defence and aerospace, to IoT and transportation, to poetry and music generation. However, a majority of applications have been built using monolithi...

iSee: intelligent sharing of explanation experiences. (2023)
Conference Proceeding
MARTIN, K., WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D., DÍAZ-AGUDO, B., RECIO-GARCÍA, J.A., CARO-MARTÍNEZ, M., BRIDGE, D., PRADEEP, P., LIRET, A. and FLEISCH, B. 2022. iSee: intelligent sharing of explanation experiences. In Reuss, P. and Schönborn, J. (eds.) ICCBR-WS 2022: proceedings of the 30th International conference on Case-based reasoning workshops 2022 (ICCBR-WS 2022) co-located with the 30th International conference on Case-based reasoning 2022 (ICCBR 2022), 12-15 September 2022, Nancy, France. Aachen: CEUR workshop proceedings [online], 3389, pages 231-232. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_83.pdf

The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, thus requiring different kinds of ex...

Conceptual modelling of explanation experiences through the iSeeonto ontology. (2023)
Conference Proceeding
CARO-MARTÍNEZ, M., WIJEKOON, A., RECIO-GARCÍA, J.A., CORSAR, D. and NKISI-ORJI, I. 2022. Conceptual modelling of explanation experiences through the iSeeonto ontology. In Reuss, P. and Schönborn, J. (eds.) ICCBR-WS 2022: proceedings of the 30th International conference on Case-based reasoning workshops 2022 (ICCBR-WS 2022) co-located with the 30th International conference on Case-based reasoning 2022 (ICCBR 2022), 12-15 September 2022, Nancy, France. Aachen: CEUR workshop proceedings [online], 3389, pages 117-128. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_86.pdf

Explainable Artificial Intelligence is a large research field, required in many situations where we need to understand the behaviour of Artificial Intelligence systems. However, each explanation need is unique, which makes it difficult to apply explanation techniques a...

iSee: demonstration video. [video recording] (2023)
Digital Artefact
WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D. and MARTIN, K. 2023. iSee: demonstration video. [video recording]. New York: ACM [online]. Available from: https://dl.acm.org/doi/10.1145/3581754.3584137#sec-supp

This output presents a demonstration of the iSee platform. iSee is an ongoing project aimed at improving the user experience of AI by harnessing experiences and best practices in Explainable AI. To this end, iSee brings together research and developm...

iSee: intelligent sharing of explanation experience of users for users. (2023)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D. and MARTIN, K. 2023. iSee: intelligent sharing of explanation experience of users for users. In IUI '23 companion: companion proceedings of the 28th Intelligent user interfaces international conference 2023 (IUI 2023), 27-31 March 2023, Sydney, Australia. New York: ACM [online], pages 79-82. Available from: https://doi.org/10.1145/3581754.3584137

The right to obtain an explanation of the decision reached by an Artificial Intelligence (AI) model is now an EU regulation. Different stakeholders of an AI system (e.g. managers, developers, auditors, etc.) may have different background knowledge, c...

Job assignment problem and traveling salesman problem: a linked optimisation problem. (2022)
Conference Proceeding
OGUNSEMI, A., MCCALL, J., KERN, M., LACROIX, B., CORSAR, D. and OWUSU, G. 2022. Job assignment problem and traveling salesman problem: a linked optimisation problem. In Bramer, M. and Stahl, F. (eds.) Artificial intelligence XXXIX: proceedings of the 42nd SGAI (Specialist Group on Artificial Intelligence) Artificial intelligence international conference 2022 (AI 2022), 13-15 December 2022, Cambridge, UK. Lecture notes in computer science (LNCS), 13652. Cham: Springer [online], pages 19-33. Available from: https://doi.org/10.1007/978-3-031-21441-7_2

Linked decision-making in service management systems has attracted strong adoption of optimisation algorithms. However, most of these algorithms do not incorporate the complexity associated with interacting decision-making systems. This paper, theref...
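As a loose sketch of a linked formulation (not the paper's problem instances or algorithms), the snippet below first solves a job-assignment stage with the Hungarian algorithm from SciPy and then routes each agent's assigned jobs with a nearest-neighbour tour, so the second problem is solved over the output of the first; all data is randomly generated.

```python
# Loose illustration of a linked optimisation pipeline (not the paper's
# formulation): assign jobs to agents, then route each agent's jobs.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_jobs, n_agents = 6, 2
job_xy = rng.random((n_jobs, 2))       # random job locations
agent_xy = rng.random((n_agents, 2))   # random agent depots

# Stage 1: assignment. Each agent gets an equal number of "slots"; the cost of
# placing job j in a slot is the distance from that slot's agent depot to job j.
slots = np.arange(n_jobs) % n_agents
cost = np.linalg.norm(job_xy[:, None, :] - agent_xy[slots][None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)
assignment = {a: [j for j, c in zip(rows, cols) if slots[c] == a] for a in range(n_agents)}

# Stage 2: routing. Order each agent's jobs with a nearest-neighbour heuristic,
# i.e. a cheap TSP approximation applied to the stage-1 output.
def nearest_neighbour_tour(start_xy, jobs):
    tour, remaining, current = [], list(jobs), start_xy
    while remaining:
        nxt = min(remaining, key=lambda j: np.linalg.norm(job_xy[j] - current))
        tour.append(nxt)
        current = job_xy[nxt]
        remaining.remove(nxt)
    return tour

for agent, jobs in assignment.items():
    print(f"agent {agent}: visits jobs {nearest_neighbour_tour(agent_xy[agent], jobs)}")
```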

Clinical dialogue transcription error correction using Seq2Seq models. (2022)
Conference Proceeding
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2022. Clinical dialogue transcription error correction using Seq2Seq models. In Shaban-Nejad, A., Michalowski, M. and Bianco, S. (eds.) Multimodal AI in healthcare: a paradigm shift in health intelligence; selected papers from the 6th International workshop on health intelligence (W3PHIAI-22), co-located with the 34th AAAI (Association for the Advancement of Artificial Intelligence) Innovative applications of artificial intelligence (IAAI-22), 28 February - 1 March 2022, [virtual event]. Studies in computational intelligence, 1060. Cham: Springer [online], pages 41-57. Available from: https://doi.org/10.1007/978-3-031-14771-5_4

Good communication is critical to good healthcare. Clinical dialogue is a conversation between health practitioners and their patients, with the explicit goal of obtaining and sharing medical information. This information contributes to medical decis...
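For context only, a generic Seq2Seq correction step (not the authors' fine-tuned models or clinical data; the checkpoint path below is a placeholder) could be run with the Hugging Face transformers library as follows:

```python
# Generic sketch of Seq2Seq inference for transcription error correction.
# The checkpoint name is a placeholder: in practice a model such as T5/BART
# would first be fine-tuned on (noisy transcript, corrected transcript) pairs.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "path/to/fine-tuned-seq2seq-corrector"  # hypothetical checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def correct(transcript: str) -> str:
    """Map a noisy ASR transcript to a corrected version using beam search."""
    inputs = tokenizer(transcript, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs, num_beams=4, max_length=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(correct("patient complains of chest pane and short of breadth"))
```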

Adapting semantic similarity methods for case-based reasoning in the Cloud. (2022)
Conference Proceeding
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2022. Adapting semantic similarity methods for case-based reasoning in the Cloud. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 125-139. Available from: https://doi.org/10.1007/978-3-031-14923-8_9

CLOOD is a cloud-based CBR framework based on a microservices architecture, which facilitates the design and deployment of case-based reasoning applications of various sizes. This paper presents advances to the similarity module of CLOOD through the...
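The sketch below is not CLOOD's similarity module or API; it is only a generic illustration of embedding-based semantic similarity for ranking textual case attributes, with a toy embedding table standing in for real pre-trained word vectors.

```python
# Generic illustration of embedding-based semantic similarity for case retrieval;
# not CLOOD's similarity module. The tiny embedding table is a stand-in for
# pre-trained word vectors.
import numpy as np

TOY_EMBEDDINGS = {
    "engine": np.array([0.9, 0.1, 0.0]),
    "motor": np.array([0.85, 0.15, 0.05]),
    "overheating": np.array([0.2, 0.9, 0.1]),
    "noise": np.array([0.1, 0.2, 0.9]),
}

def embed(text):
    """Average the vectors of known words in a textual case attribute."""
    vectors = [TOY_EMBEDDINGS[w] for w in text.lower().split() if w in TOY_EMBEDDINGS]
    return np.mean(vectors, axis=0) if vectors else np.zeros(3)

def cosine_similarity(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

query = "engine overheating"
cases = ["motor overheating", "engine noise", "noise"]
print(sorted(cases, key=lambda c: cosine_similarity(embed(query), embed(c)), reverse=True))
```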

How close is too close? Role of feature attributions in discovering counterfactual explanations. (2022)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., PALIHAWADANA, C., CORSAR, D. and MARTIN, K. 2022. How close is too close? Role of feature attributions in discovering counterfactual explanations. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 33-47. Available from: https://doi.org/10.1007/978-3-031-14923-8_3

Counterfactual explanations describe how an outcome can be changed to a more desirable one. In XAI, counterfactuals are "actionable" explanations that help users to understand how model decisions can be changed by adapting features of an input. A cas...

Holistic, data-driven, service and supply chain optimisation: linked optimisation. (2022)
Thesis
OGUNSEMI, A. 2022. Holistic, data-driven, service and supply chain optimisation: linked optimisation. Robert Gordon University, PhD thesis. Hosted on OpenAIR [online]. Available from: https://doi.org/10.48526/rgu-wt-1987884

The intensity of competition and technological advancements in the business environment have made companies collaborate and cooperate as a means of survival. This creates a chain of companies and business components with unified business obje...

Facility location problem and permutation flow shop scheduling problem: a linked optimisation problem. (2022)
Conference Proceeding
OGUNSEMI, A., MCCALL, J., KERN, M., LACROIX, B., CORSAR, D. and OWUSU, G. 2022. Facility location problem and permutation flow shop scheduling problem: a linked optimisation problem. In Fieldsend, J. (ed.) GECCO'22 companion: proceedings of 2022 Genetic and evolutionary computation conference companion, 9-13 July 2022, Boston, USA, [virtual event]. New York: ACM [online], pages 735-738. Available from: https://doi.org/10.1145/3520304.3529033

There is a growing literature spanning several research communities that studies multiple optimisation problems whose solutions interact, thereby leading researchers to consider suitable approaches to joint solution. Real-world problems, like supply...

Clinical dialogue transcription error correction using Seq2Seq models. (2022)
Working Paper
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2022. Clinical dialogue transcription error correction using Seq2Seq models. arXiv [online]. Available from: https://doi.org/10.48550/arXiv.2205.13572

Good communication is critical to good healthcare. Clinical dialogue is a conversation between health practitioners and their patients, with the explicit goal of obtaining and sharing medical information. This information contributes to medical decis...

Demystifying the black box: the importance of interpretability of predictive models in neurocritical care. (2022)
Journal Article
MOSS, L., CORSAR, D., SHAW, M., PIPER, I. and HAWTHORNE, C. 2022. Demystifying the black box: the importance of interpretability of predictive models in neurocritical care. Neurocritical care [online], 37(Supplement 2): big data in neurocritical care, pages 185-191. Available from: https://doi.org/10.1007/s12028-022-01504-4

Neurocritical care patients are a complex patient population, and to aid clinical decision-making, many models and scoring systems have previously been developed. More recently, techniques from the field of machine learning have been applied to neuro...

DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of 33rd IEEE (Institute of Electrical and Electronics Engineers) International conference on tools with artificial intelligence 2021 (ICTAI 2021), 1-3 November 2021, Washington, USA [virtual conference]. Piscataway: IEEE [online], pages 1466-1473. Available from: https://doi.org/10.1109/ICTAI52525.2021.00233

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies tha...
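As a rough illustration of the general idea (not the DisCERN algorithm as published), a counterfactual can be assembled by copying feature values from a nearest unlike neighbour in an assumed relevance order until the prediction flips; the classifier, data and ordering below are toy stand-ins.

```python
# Rough sketch of counterfactual discovery by adapting features of a nearest
# unlike neighbour in a relevance-guided order; not the published DisCERN
# algorithm. The classifier, data and relevance ordering are toy stand-ins.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X = rng.random((200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype(int)          # toy labels
clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)

def counterfactual(query, desired_class, relevance_order):
    """Copy features from the nearest unlike neighbour, most relevant first."""
    unlike = X[clf.predict(X) == desired_class]           # cases with the desired outcome
    nun = unlike[np.argmin(np.linalg.norm(unlike - query, axis=1))]
    candidate = query.copy()
    for f in relevance_order:                             # assumed relevance ranking
        candidate[f] = nun[f]
        if clf.predict(candidate.reshape(1, -1))[0] == desired_class:
            return candidate                              # prediction flipped: stop here
    return nun                                            # fall back to the NUN itself

q = np.array([0.1, 0.1, 0.9, 0.9])
print(clf.predict(q.reshape(1, -1))[0], "->", counterfactual(q, 1, relevance_order=[0, 1, 2, 3]))
```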

Actionable feature discovery in counterfactuals using feature relevance explainers. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Actionable feature discovery in counterfactuals using feature relevance explainers. In Borck, H., Eisenstadt, V., Sánchez-Ruiz, A. and Floyd, M. (eds.) ICCBR 2021 workshop proceedings (ICCBR-WS 2021): workshop proceedings for the 29th International conference on case-based reasoning workshops co-located with the 29th International conference on case-based reasoning (ICCBR 2021), 13-16 September 2021, Salamanca, Spain [virtual conference]. CEUR-WS proceedings, 3017. Aachen: CEUR-WS [online], pages 63-74. Available from: http://ceur-ws.org/Vol-3017/101.pdf

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to be able to reason with si...

Counterfactual explanations for student outcome prediction with Moodle footprints. (2021)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 1, pages 1-8. Available from: http://ceur-ws.org/Vol-2894/short1.pdf

Counterfactual explanations focus on “actionable knowledge” to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose a counterfactual explainer needs to be able to reason with simila...

Ensemble-based relationship discovery in relational databases. (2020)
Conference Proceeding
OGUNSEMI, A., MCCALL, J., KERN, M., LACROIX, B., CORSAR, D. and OWUSU, G. 2020. Ensemble-based relationship discovery in relational databases. In Bramer, M. and Ellis, R. (eds.) Artificial intelligence XXXVII: proceedings of 40th British Computer Society's Specialist Group on Artificial Intelligence (SGAI) Artificial intelligence international conference 2020 (AI-2020), 15-17 December 2020, [virtual conference]. Lecture notes in artificial intelligence, 12498. Cham: Springer [online], pages 286-300. Available from: https://doi.org/10.1007/978-3-030-63799-6_22

We investigated how several data relationship discovery algorithms can be combined to improve performance. We considered eight relationship discovery algorithms, such as Cosine similarity, Soundex similarity, Name similarity, Value ran...
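As a small illustration of combining such signals (not the eight algorithms or the weighting evaluated in the paper), two simple measures over a pair of columns can be averaged into a single ensemble score:

```python
# Small illustration of combining simple relationship-discovery signals between
# two database columns into one ensemble score. The two measures and the equal
# weighting are illustrative, not the paper's eight algorithms or their combination.
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Character-level similarity of two column names."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def value_overlap(col_a, col_b):
    """Jaccard overlap of the distinct values stored in two columns."""
    set_a, set_b = set(col_a), set(col_b)
    return len(set_a & set_b) / len(set_a | set_b) if set_a | set_b else 0.0

def ensemble_score(name_a, col_a, name_b, col_b):
    signals = [name_similarity(name_a, name_b), value_overlap(col_a, col_b)]
    return sum(signals) / len(signals)    # simple unweighted average of the signals

customers_id = [1, 2, 3, 4, 5]
orders_customer_id = [2, 3, 3, 5, 9]
print(ensemble_score("customer_id", customers_id, "cust_id", orders_customer_id))
```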