
Dr Anjana Wijekoon


Adapting semantic similarity methods for case-based reasoning in the Cloud. (2022)
Conference Proceeding
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2022. Adapting semantic similarity methods for case-based reasoning in the Cloud. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 125-139. Available from: https://doi.org/10.1007/978-3-031-14923-8_9

CLOOD is a cloud-based CBR framework based on a microservices architecture, which facilitates the design and deployment of case-based reasoning applications of various sizes. This paper presents advances to the similarity module of CLOOD through the...

How close is too close? Role of feature attributions in discovering counterfactual explanations. (2022)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., PALIHAWADANA, C., CORSAR, D. and MARTIN, K. 2022. How close is too close? Role of feature attributions in discovering counterfactual explanations. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 33-47. Available from: https://doi.org/10.1007/978-3-031-14923-8_3

Counterfactual explanations describe how an outcome can be changed to a more desirable one. In XAI, counterfactuals are "actionable" explanations that help users to understand how model decisions can be changed by adapting features of an input. A cas...

DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of 33rd IEEE (Institute of Electrical and Electronics Engineers) International conference on tools with artificial intelligence 2021 (ICTAI 2021), 1-3 November 2021, Washington, USA [virtual conference]. Piscataway: IEEE [online], pages 1466-1473. Available from: https://doi.org/10.1109/ICTAI52525.2021.00233

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies tha...

Reasoning with counterfactual explanations for code vulnerability detection and correction. (2021)
Conference Proceeding
WIJEKOON, A. and WIRATUNGA, N. 2021. Reasoning with counterfactual explanations for code vulnerability detection and correction. In Sani, S. and Kalutarage, H. (eds.) AI and cybersecurity 2021 (AI-Cybersec 2021): proceedings of the workshop on AI and cybersecurity (AI-Cybersec 2021) co-located with the 41st SGAI (British Computer Society's Specialist Group on Artificial Intelligence) international conference on artificial intelligence (SGAI 2021), 14 December 2021, Cambridge, UK [virtual conference]. Aachen: CEUR Workshop Proceedings [online], 3125, pages 1-13. Available from: http://ceur-ws.org/Vol-3125/paper1.pdf

Counterfactual explanations highlight "actionable knowledge" which helps the end-users to understand how a machine learning outcome could be changed to a more desirable outcome. In code vulnerability detection, understanding these "actionable" correc...

FedSim: similarity guided model aggregation for federated learning. (2021)
Journal Article
PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and KALUTARAGE, H. 2022. FedSim: similarity guided model aggregation for federated learning. Neurocomputing [online], 483: distributed machine learning, optimization and applications, pages 432-445. Available from: https://doi.org/10.1016/j.neucom.2021.08.141

Federated Learning (FL) is a distributed machine learning approach in which clients contribute to learning a global model in a privacy preserved manner. Effective aggregation of client models is essential to create a generalised global model. To what...

Actionable feature discovery in counterfactuals using feature relevance explainers. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Actionable feature discovery in counterfactuals using feature relevance explainers. In Borck, H., Eisenstadt, V., Sánchez-Ruiz, A. and Floyd, M. (eds.) ICCBR 2021 workshop proceedings (ICCBR-WS 2021): workshop proceedings for workshops co-located with the 29th International conference on case-based reasoning (ICCBR 2021), 13-16 September 2021, Salamanca, Spain [virtual conference]. CEUR-WS proceedings, 3017. Aachen: CEUR-WS [online], pages 63-74. Available from: http://ceur-ws.org/Vol-3017/101.pdf

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to be able to reason with si...

Counterfactual explanations for student outcome prediction with Moodle footprints. (2021)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 1, pages 1-8. Available from: http://ceur-ws.org/Vol-2894/short1.pdf

Counterfactual explanations focus on “actionable knowledge” to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose a counterfactual explainer needs to be able to reason with simila...

Non-deterministic solvers and explainable AI through trajectory mining. (2021)
Conference Proceeding
FYVIE, M., MCCALL, J.A.W. and CHRISTIE, L.A. 2021. Non-deterministic solvers and explainable AI through trajectory mining. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 4, pages 75-78. Available from: http://ceur-ws.org/Vol-2894/poster2.pdf

Traditional methods of creating explanations from complex systems involving the use of AI have resulted in a wide variety of tools available to users to generate explanations regarding algorithm and network designs. This however has traditionally bee...

Personalised exercise recognition towards improved self-management of musculoskeletal disorders. (2021)
Thesis
WIJEKOON, A. 2021. Personalised exercise recognition towards improved self-management of musculoskeletal disorders. Robert Gordon University, PhD thesis. Hosted on OpenAIR [online]. Available from: https://doi.org/10.48526/rgu-wt-1358224

Musculoskeletal Disorders (MSD) have been the primary contributor to the global disease burden, with increased years lived with disability. Such chronic conditions require self-management, typically in the form of maintaining an active lifestyle whil...

Personalised meta-learning for human activity recognition with few-data. (2020)
Conference Proceeding
WIJEKOON, A. and WIRATUNGA, N. 2020. Personalised meta-learning for human activity recognition with few-data. In Bramer, M. and Ellis, R. (eds.) Artificial intelligence XXXVII: proceedings of 40th British Computer Society's Specialist Group on Artificial Intelligence (SGAI) Artificial intelligence international conference 2020 (AI-2020), 15-17 December 2020, [virtual conference]. Lecture notes in artificial intelligence, 12498. Cham: Springer [online], pages 79-93. Available from: https://doi.org/10.1007/978-3-030-63799-6_6

State-of-the-art methods of Human Activity Recognition (HAR) rely on a considerable amount of labelled data to train deep architectures. This becomes prohibitive when tasked with creating models that are sensitive to personal nuances in human movement...