Research Repository

All Outputs (15)

Mitigating gradient inversion attacks in federated learning with frequency transformation. (2024)
Presentation / Conference Contribution
PALIHAWADANA, C., WIRATUNGA, N., KALUTARAGE, H. and WIJEKOON, A. 2024. Mitigating gradient inversion attacks in federated learning with frequency transformation. In Katsikas, S. et al. (eds.) Computer security: revised selected papers from the proceedings of the International workshops of the 28th European symposium on research in computer security (ESORICS 2023 International Workshops), 25-29 September 2023, The Hague, Netherlands. Lecture notes in computer science, 14399. Cham: Springer [online], part II, pages 750-760. Available from: https://doi.org/10.1007/978-3-031-54129-2_44

Centralised machine learning approaches have raised concerns regarding the privacy of client data. To address this issue, privacy-preserving techniques such as Federated Learning (FL) have emerged, where only updated gradients are communicated instead…
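
The abstract is cut short above. Purely as an illustrative sketch of the general idea named in the title, and not the method evaluated in the paper, an FL client could move its gradient update into the frequency domain and share only a truncated spectrum rather than raw gradients; the helper names and the use of NumPy's FFT below are assumptions for illustration.

    import numpy as np

    def to_frequency_domain(gradient, keep_ratio=0.25):
        """Illustrative only: flatten a gradient tensor, apply a real FFT and
        keep just the low-frequency coefficients before communication."""
        spectrum = np.fft.rfft(gradient.ravel())
        cutoff = max(1, int(len(spectrum) * keep_ratio))
        spectrum[cutoff:] = 0          # drop high-frequency detail
        return spectrum

    def from_frequency_domain(spectrum, shape):
        """Server-side reconstruction of the (lossy) update."""
        flat = np.fft.irfft(spectrum, n=int(np.prod(shape)))
        return flat.reshape(shape)

    gradient = np.random.randn(8, 16)  # stand-in for one layer's gradient
    approx = from_frequency_domain(to_frequency_domain(gradient), gradient.shape)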

CBR driven interactive explainable AI. (2023)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., MARTIN, K., CORSAR, D., NKISI-ORJI, I., PALIHAWADANA, C., BRIDGE, D., PRADEEP, P., AGUDO, B.D. and CARO-MARTÍNEZ, M. 2023. CBR driven interactive explainable AI. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 169-184. Available from: https://doi.org/10.1007/978-3-031-40177-0_11

Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requires…

Failure-driven transformational case reuse of explanation strategies in CloodCBR. (2023)
Presentation / Conference Contribution
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and CORSAR, D. 2023. Failure-driven transformational case reuse of explanation strategies in CloodCBR. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning 2023 (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 279-293. Available from: https://doi.org/10.1007/978-3-031-40177-0_18

In this paper, we propose a novel approach to improve problem-solving efficiency through the reuse of case solutions. Specifically, we introduce the concept of failure-driven transformational case reuse of explanation strategies, which involves transforming…

Explainable weather forecasts through an LSTM-CBR twin system. (2023)
Presentation / Conference Contribution
PIRIE, C., SURESH, M., SALIMI, P., PALIHAWADANA, C. and NANAYAKKARA, G. 2022. Explainable weather forecasts through an LSTM-CBR twin system. In Reuss, P. and Schönborn, J. (eds.) Workshop proceedings of the 30th International conference on case-based reasoning (ICCBR-WS 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 256-260. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_XCBR_Challenge_RGU.pdf

In this paper, we explore two methods for explaining LSTM-based temperature forecasts using the previous 14-day progressions of humidity and pressure. First, we propose and evaluate an LSTM-CBR twin system that generates nearest-neighbors that can be visualised…

Introducing Clood CBR: a cloud based CBR framework. (2023)
Presentation / Conference Contribution
PALIHAWADANA, C., NKISI-ORJI, I., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2022. Introducing Clood CBR: a cloud based CBR framework. In Reuss, P. and Schönborn, J. (eds.) Workshop proceedings of the 30th International conference on case-based reasoning (ICCBR-WS 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 233-234. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_108.pdf

CBR applications have been deployed in a wide range of sectors: from pharmaceuticals, to defence and aerospace, to IoT and transportation, to poetry and music generation, for example. However, a majority of applications have been built using monolithic…

iSee: intelligent sharing of explanation experiences. (2023)
Presentation / Conference Contribution
MARTIN, K., WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D., DÍAZ-AGUDO, B., RECIO-GARCÍA, J.A., CARO-MARTÍNEZ, M., BRIDGE, D., PRADEEP, P., LIRET, A. and FLEISCH, B. 2022. iSee: intelligent sharing of explanation experiences. In Reuss, P. and Schönborn, J. (eds.) Workshop proceedings of the 30th International conference on case-based reasoning (ICCBR-WS 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 231-232. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_83.pdf

The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, thus requiring different kinds of explanations…

iSee: intelligent sharing of explanation experience of users for users. (2023)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D. and MARTIN, K. 2023. iSee: intelligent sharing of explanation experience of users for users. In IUI '23 companion: companion proceedings of the 28th Intelligent user interfaces international conference 2023 (IUI 2023), 27-31 March 2023, Sydney, Australia. New York: ACM [online], pages 79-82. Available from: https://doi.org/10.1145/3581754.3584137

The right to obtain an explanation of the decision reached by an Artificial Intelligence (AI) model is now an EU regulation. Different stakeholders of an AI system (e.g. managers, developers, auditors, etc.) may have different background knowledge, competencies…

iSee: demonstration video. [video recording] (2023)
Digital Artefact
WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D. and MARTIN, K. 2023. iSee: demonstration video. [video recording]. New York: ACM [online]. Available from: https://dl.acm.org/doi/10.1145/3581754.3584137#sec-supp

This output presents a demonstration of the iSee platform. iSee is an ongoing project aimed at improving the user experience of AI by harnessing experiences and best practices in Explainable AI. To this end, iSee brings together research and development…

Adapting semantic similarity methods for case-based reasoning in the Cloud. (2022)
Presentation / Conference Contribution
NKISI-ORJI, I., PALIHAWADANA, C., WIRATUNGA, N., CORSAR, D. and WIJEKOON, A. 2022. Adapting semantic similarity methods for case-based reasoning in the Cloud. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 125-139. Available from: https://doi.org/10.1007/978-3-031-14923-8_9

CLOOD is a cloud-based CBR framework based on a microservices architecture, which facilitates the design and deployment of case-based reasoning applications of various sizes. This paper presents advances to the similarity module of CLOOD through the…

How close is too close? Role of feature attributions in discovering counterfactual explanations. (2022)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., PALIHAWADANA, C., CORSAR, D. and MARTIN, K. 2022. How close is too close? Role of feature attributions in discovering counterfactual explanations. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 33-47. Available from: https://doi.org/10.1007/978-3-031-14923-8_3

Counterfactual explanations describe how an outcome can be changed to a more desirable one. In XAI, counterfactuals are "actionable" explanations that help users to understand how model decisions can be changed by adapting features of an input. A case…

DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. (2021)
Presentation / Conference Contribution
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of the 33rd IEEE (Institute of Electrical and Electronics Engineers) International conference on tools with artificial intelligence 2021 (ICTAI 2021), 1-3 November 2021, Washington, USA [virtual conference]. Piscataway: IEEE [online], pages 1466-1473. Available from: https://doi.org/10.1109/ICTAI52525.2021.00233

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies that…
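
The snippet breaks off mid-sentence. As a hedged sketch of the kind of neighbourhood-plus-relevance procedure the title points to (a simplification, not the published DisCERN implementation), features can be copied from a nearest neighbour of the desired class in order of relevance until the model's decision flips; model, query, nun and relevance below are hypothetical stand-ins.

    import numpy as np

    def counterfactual_sketch(model, query, nun, relevance, target_class):
        """Hedged illustration: adapt the query towards its nearest unlike
        neighbour (nun), most relevant feature first, until the classifier
        predicts the desired class."""
        candidate = query.copy()
        for idx in np.argsort(relevance)[::-1]:
            candidate[idx] = nun[idx]
            if model.predict(candidate.reshape(1, -1))[0] == target_class:
                break
        return candidate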

FedSim: similarity guided model aggregation for federated learning. (2021)
Journal Article
PALIHAWADANA, C., WIRATUNGA, N., WIJEKOON, A. and KALUTARAGE, H. 2022. FedSim: similarity guided model aggregation for federated learning. Neurocomputing [online], 483: distributed machine learning, optimization and applications, pages 432-445. Available from: https://doi.org/10.1016/j.neucom.2021.08.141

Federated Learning (FL) is a distributed machine learning approach in which clients contribute to learning a global model in a privacy preserved manner. Effective aggregation of client models is essential to create a generalised global model. To what extent…
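
The abstract is truncated above. As one hedged way to picture similarity-guided aggregation (an illustrative guess at the general idea, not the FedSim algorithm from the paper), client updates could be weighted by how strongly they agree with the other clients, measured by pairwise cosine similarity.

    import numpy as np

    def aggregate_by_similarity(client_updates):
        """Illustrative only: weight each client's flattened update by its
        average cosine similarity to the other updates, then average."""
        flat = np.stack([u.ravel() for u in client_updates])
        unit = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
        weights = np.clip((unit @ unit.T).mean(axis=1), 0, None)
        weights = weights / (weights.sum() + 1e-12)
        aggregated = (weights[:, None] * flat).sum(axis=0)
        return aggregated.reshape(client_updates[0].shape)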

Actionable feature discovery in counterfactuals using feature relevance explainers. (2021)
Presentation / Conference Contribution
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Actionable feature discovery in counterfactuals using feature relevance explainers. In Borck, H., Eisenstadt, V., Sánchez-Ruiz, A. and Floyd, M. (eds.) Workshop proceedings of the 29th International conference on case-based reasoning (ICCBR-WS 2021), 13-16 September 2021, [virtual event]. CEUR workshop proceedings, 3017. Aachen: CEUR-WS [online], pages 63-74. Available from: http://ceur-ws.org/Vol-3017/101.pdf

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to be able to reason with similarity…

Counterfactual explanations for student outcome prediction with Moodle footprints. (2021)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 1, pages 1-8. Available from: http://ceur-ws.org/Vol-2894/short1.pdf

Counterfactual explanations focus on “actionable knowledge” to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose a counterfactual explainer needs to be able to reason with similarity…

Clood CBR: towards microservices oriented case-based reasoning. (2020)
Presentation / Conference Contribution
NKISI-ORJI, I., WIRATUNGA, N., PALIHAWADANA, C., RECIO-GARCIA, J.A. and CORSAR, D. 2020. Clood CBR: towards microservices oriented case-based reasoning. In Watson, I. and Weber, R. (eds.) Case-based reasoning research and development: proceedings of the 28th International conference on case-based reasoning (ICCBR 2020), 8-12 June 2020, Salamanca, Spain [virtual conference]. Lecture notes in computer science, 12311. Cham: Springer [online], pages 129-143. Available from: https://doi.org/10.1007/978-3-030-58342-2_9

CBR applications have been deployed in a wide range of sectors: from pharmaceuticals, to defence and aerospace, to IoT and transportation, to poetry and music generation, for example. However, a majority of these have been built using monolithic architectures…