Professor Nirmalie Wiratunga n.wiratunga@rgu.ac.uk
Associate Dean for Research
Dr Anjana Wijekoon
Dr David Corsar d.corsar1@rgu.ac.uk
Senior Lecturer
Dr Kyle Martin k.martin3@rgu.ac.uk
Senior Lecturer
DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. (2021)
Presentation / Conference Contribution
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of the 33rd IEEE (Institute of Electrical and Electronics Engineers) International conference on tools with artificial intelligence 2021 (ICTAI 2021), 1-3 November 2021, Washington, USA [virtual conference]. Piscataway: IEEE [online], pages 1466-1473. Available from: https://doi.org/10.1109/ICTAI52525.2021.00233
Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies tha...
Reasoning with counterfactual explanations for code vulnerability detection and correction. (2021)
Presentation / Conference Contribution
WIJEKOON, A. and WIRATUNGA, N. 2021. Reasoning with counterfactual explanations for code vulnerability detection and correction. In Sani, S. and Kalutarage, H. (eds.) AI and cybersecurity 2021: proceedings of the 2021 Workshop on AI and cybersecurity (AI-Cybersec 2021), co-located with the 41st Specialist Group on Artificial Intelligence international conference on artificial intelligence (SGAI 2021), 14 December 2021, [virtual event]. CEUR workshop proceedings, 3125. Aachen: CEUR-WS [online], pages 1-13. Available from: http://ceur-ws.org/Vol-3125/paper1.pdf
Counterfactual explanations highlight "actionable knowledge" which helps end-users understand how a machine learning outcome could be changed to a more desirable outcome. In code vulnerability detection, understanding these "actionable" correc...
Actionable feature discovery in counterfactuals using feature relevance explainers. (2021)
Presentation / Conference Contribution
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Actionable feature discovery in counterfactuals using feature relevance explainers. In Borck, H., Eisenstadt, V., Sánchez-Ruiz, A. and Floyd, M. (eds.) Workshop proceedings of the 29th International conference on case-based reasoning (ICCBR-WS 2021), 13-16 September 2021, [virtual event]. CEUR workshop proceedings, 3017. Aachen: CEUR-WS [online], pages 63-74. Available from: http://ceur-ws.org/Vol-3017/101.pdf
Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to be able to reason with si...
Proceedings of the 2021 SICSA explainable artificial intelligence workshop (SICSA XAI 2021). (2021)
Presentation / Conference Contribution
MARTIN, K., WIRATUNGA, N. and WIJEKOON, A. (eds.) 2021. Proceedings of the 2021 SICSA explainable artificial intelligence workshop (SICSA XAI 2021), 1 June 2021, Aberdeen, UK. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online]. Available from: https://ceur-ws.org/Vol-2894/
The SICSA Workshop 2021 was designed to present a forum for the dissemination of ideas on domains relating to the explainability of Artificial Intelligence and Machine Learning methods. The event was organised into several themed sessions: Session 1...
Counterfactual explanations for student outcome prediction with Moodle footprints. (2021)
Presentation / Conference Contribution
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of the 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1 June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 1, pages 1-8. Available from: http://ceur-ws.org/Vol-2894/short1.pdf
Counterfactual explanations focus on “actionable knowledge” to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose a counterfactual explainer needs to be able to reason with simila...