
Research Repository


Outputs (23)

Evaluating a pass/fail grading model in first year undergraduate computing. (2023)
Conference Proceeding
ZARB, M., MCDERMOTT, R., MARTIN, K., YOUNG, T. and MCGOWAN, J. 2023. Evaluating a pass/fail grading model in first year undergraduate computing. In Proceedings of the 2023 IEEE (Institute of Electrical and Electronics Engineers) Frontiers in education conference (FIE 2023), 18-21 October 2023, College Station, TX, USA. Piscataway: IEEE [online], article 10343276. Available from: https://doi.org/10.1109/FIE58773.2023.10343276

This Innovative Practice Full Paper investigates the implications of implementing a Pass/Fail marking scheme within the undergraduate curriculum, specifically across first year computing modules in a Scottish Higher Education Institution. The motivat...

Clinical dialogue transcription error correction with self-supervision. (2023)
Conference Proceeding
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2023. Clinical dialogue transcription error correction with self-supervision. In Bramer, M. and Stahl, F. (eds.) Artificial intelligence XL: proceedings of the 43rd SGAI international conference on artificial intelligence (AI-2023), 12-14 December 2023, Cambridge, UK. Lecture notes in computer science, 14381. Cham: Springer [online], pages 33-46. Available from: https://doi.org/10.1007/978-3-031-47994-6_3

A clinical dialogue is a conversation between a clinician and a patient to share medical information, which is critical in clinical decision-making. The reliance on manual note-taking is highly inefficient and leads to transcription errors when digit...

CBR driven interactive explainable AI. (2023)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., MARTIN, K., CORSAR, D., NKISI-ORJI, I., PALIHAWADANA, C., BRIDGE, D., PRADEEP, P., AGUDO, B.D. and CARO-MARTÍNEZ, M. 2023. CBR driven interactive explainable AI. In Massie, S. and Chakraborti, S. (eds.) Case-based reasoning research and development: proceedings of the 31st International conference on case-based reasoning (ICCBR 2023), 17-20 July 2023, Aberdeen, UK. Lecture notes in computer science (LNCS), 14141. Cham: Springer [online], pages 169-184. Available from: https://doi.org/10.1007/978-3-031-40177-0_11

Explainable AI (XAI) can greatly enhance user trust and satisfaction in AI-assisted decision-making processes. Numerous explanation techniques (explainers) exist in the literature, and recent findings suggest that addressing multiple user needs requi...

Machine learning for risk stratification of diabetic foot ulcers using biomarkers. (2023)
Conference Proceeding
MARTIN, K., UPHADYAY, A., WIJEKOON, A., WIRATUNGA, N. and MASSIE, S. [2023]. Machine learning for risk stratification of diabetic foot ulcers using biomarkers. To be presented at the 2023 International conference on computational science (ICCS 2023): computing at the cutting edge of science, 3-5 July 2023, Prague, Czech Republic: [virtual event].

Development of a Diabetic Foot Ulcer (DFU) causes a sharp decline in a patient's health and quality of life. The process of risk stratification is crucial for informing the care that a patient should receive to help manage their Diabetes before an ul...

iSee: intelligent sharing of explanation experiences. (2023)
Conference Proceeding
MARTIN, K., WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D., DÍAZ-AGUDO, B., RECIO-GARCÍA, J.A., CARO-MARTÍNEZ, M., BRIDGE, D., PRADEEP, P., LIRET, A. and FLEISCH, B. 2022. iSee: intelligent sharing of explanation experiences. In Reuss, P. and Schönborn, J. (eds.) Workshop proceedings of the 30th International conference on case-based reasoning (ICCBR-WS 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 231-232. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_Workshop_paper_83.pdf

The right to an explanation of the decision reached by a machine learning (ML) model is now an EU regulation. However, different system stakeholders may have different background knowledge, competencies and goals, thus requiring different kinds of ex...

iSee: intelligent sharing of explanation experience of users for users. (2023)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., PALIHAWADANA, C., NKISI-ORJI, I., CORSAR, D. and MARTIN, K. 2023. iSee: intelligent sharing of explanation experience of users for users. In IUI '23 companion: companion proceedings of the 28th Intelligent user interfaces international conference 2023 (IUI 2023), 27-31 March 2023, Sydney, Australia. New York: ACM [online], pages 79-82. Available from: https://doi.org/10.1145/3581754.3584137

The right to obtain an explanation of the decision reached by an Artificial Intelligence (AI) model is now an EU regulation. Different stakeholders of an AI system (e.g. managers, developers, auditors, etc.) may have different background knowledge, c...

Empowering inquiry-based learning in short courses for professional students. (2023)
Conference Proceeding
MARTIN, K., ZARB, M., MCDERMOTT, R. and YOUNG, T. 2023. Empowering inquiry-based learning in short courses for professional students. In Chova, L.G., Martínez, C.G. and Lees, J. (eds.) Proceedings of the 17th International technology, education and development conference 2023 (INTED 2023), 6-8 March 2023, Valencia, Spain. Valencia: IATED [online], pages 5404-5409. Available from: https://doi.org/10.21125/inted.2023.1407

This paper presents the pedagogic underpinning for the development of an online postgraduate short course educating participants on multi-modal data science, specifically within the context of the digital health industry. The growing digital health s...

Clinical dialogue transcription error correction using Seq2Seq models. (2022)
Conference Proceeding
NANAYAKKARA, G., WIRATUNGA, N., CORSAR, D., MARTIN, K. and WIJEKOON, A. 2022. Clinical dialogue transcription error correction using Seq2Seq models. In Shaban-Nejad, A., Michalowski, M. and Bianco, S. (eds.) Multimodal AI in healthcare: a paradigm shift in health intelligence; selected papers from the 6th International workshop on health intelligence (W3PHIAI-22), co-located with the 34th AAAI (Association for the Advancement of Artificial Intelligence) Innovative applications of artificial intelligence (IAAI-22), 28 February - 1 March 2022, [virtual event]. Studies in computational intelligence, 1060. Cham: Springer [online], pages 41-57. Available from: https://doi.org/10.1007/978-3-031-14771-5_4

Good communication is critical to good healthcare. Clinical dialogue is a conversation between health practitioners and their patients, with the explicit goal of obtaining and sharing medical information. This information contributes to medical decis...

How close is too close? Role of feature attributions in discovering counterfactual explanations. (2022)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., PALIHAWADANA, C., CORSAR, D. and MARTIN, K. 2022. How close is too close? Role of feature attributions in discovering counterfactual explanations. In Keane, M.T. and Wiratunga, N. (eds.) Case-based reasoning research and development: proceedings of the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. Lecture notes in computer science, 13405. Cham: Springer [online], pages 33-47. Available from: https://doi.org/10.1007/978-3-031-14923-8_3

Counterfactual explanations describe how an outcome can be changed to a more desirable one. In XAI, counterfactuals are "actionable" explanations that help users to understand how model decisions can be changed by adapting features of an input. A cas...

DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. DisCERN: discovering counterfactual explanations using relevance features from neighbourhoods. In Proceedings of 33rd IEEE (Institute of Electrical and Electronics Engineers) International conference on tools with artificial intelligence 2021 (ICTAI 2021), 1-3 November 2021, Washington, USA [virtual conference]. Piscataway: IEEE [online], pages 1466-1473. Available from: https://doi.org/10.1109/ICTAI52525.2021.00233

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a machine learning outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to discover input dependencies tha...
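The general strategy this abstract describes (find a nearest unlike neighbour, then copy its feature values into the query in relevance order until the prediction flips) can be sketched as follows. This is an illustrative reconstruction, not code from the paper; the names `nearest_unlike_neighbour`, `predict` and `relevance_order` are assumptions introduced here.

```python
import numpy as np

def nearest_unlike_neighbour(query, X, y, query_label):
    """Closest training instance whose label differs from the query's."""
    mask = y != query_label
    dists = np.linalg.norm(X[mask] - query, axis=1)
    return X[mask][np.argmin(dists)]

def counterfactual(query, X, y, predict, relevance_order):
    """Substitute NUN feature values into the query, most relevant
    feature first, stopping as soon as the prediction flips."""
    label = predict(query)
    nun = nearest_unlike_neighbour(query, X, y, label)
    cf = query.copy()
    for f in relevance_order:  # e.g. ranked by a feature-relevance explainer
        cf[f] = nun[f]
        if predict(cf) != label:
            return cf          # minimal actionable change found
    return nun                 # fall back to the NUN itself
```

Stopping at the first flip keeps the counterfactual close to the original query, which is what makes the explanation "actionable" for an end-user.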

Actionable feature discovery in counterfactuals using feature relevance explainers. (2021)
Conference Proceeding
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Actionable feature discovery in counterfactuals using feature relevance explainers. In Borck, H., Eisenstadt, V., Sánchez-Ruiz, A. and Floyd, M. (eds.) Workshop proceedings of the 29th International conference on case-based reasoning (ICCBR-WS 2021), 13-16 September 2021, [virtual event]. CEUR workshop proceedings, 3017. Aachen: CEUR-WS [online], pages 63-74. Available from: http://ceur-ws.org/Vol-3017/101.pdf

Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to be able to reason with si...

Proceedings of the 2021 SICSA explainable artificial intelligence workshop (SICSA XAI 2021) (2021)
Conference Proceeding
MARTIN, K., WIRATUNGA, N. and WIJEKOON, A. (eds.) 2021. Proceedings of the 2021 SICSA explainable artificial intelligence workshop (SICSA XAI 2021), 1 June 2021, Aberdeen, UK. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online]. Available from: https://ceur-ws.org/Vol-2894/

The SICSA Workshop 2021 was designed to present a forum for the dissemination of ideas on domains relating to the explainability of Artificial Intelligence and Machine Learning methods. The event was organised into several themed sessions: Session 1...

Non-deterministic solvers and explainable AI through trajectory mining. (2021)
Conference Proceeding
FYVIE, M., MCCALL, J.A.W. and CHRISTIE, L.A. 2021. Non-deterministic solvers and explainable AI through trajectory mining. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 4, pages 75-78. Available from: http://ceur-ws.org/Vol-2894/poster2.pdf

Traditional methods of creating explanations from complex systems involving the use of AI have resulted in a wide variety of tools available to users to generate explanations regarding algorithm and network designs. This however has traditionally bee...

Counterfactual explanations for student outcome prediction with Moodle footprints. (2021)
Conference Proceeding
WIJEKOON, A., WIRATUNGA, N., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Counterfactual explanations for student outcome prediction with Moodle footprints. In Martin, K., Wiratunga, N. and Wijekoon, A. (eds.) SICSA XAI workshop 2021: proceedings of 2021 SICSA (Scottish Informatics and Computer Science Alliance) eXplainable artificial intelligence workshop (SICSA XAI 2021), 1st June 2021, [virtual conference]. CEUR workshop proceedings, 2894. Aachen: CEUR-WS [online], session 1, pages 1-8. Available from: http://ceur-ws.org/Vol-2894/short1.pdf

Counterfactual explanations focus on “actionable knowledge” to help end-users understand how a machine learning outcome could be changed to one that is more desirable. For this purpose a counterfactual explainer needs to be able to reason with simila...

Assessing the clinicians’ pathway to embed artificial intelligence for assisted diagnostics of fracture detection. (2020)
Conference Proceeding
MORENO-GARCÍA, C.F., DANG, T., MARTIN, K., PATEL, M., THOMPSON, A., LEISHMAN, L. and WIRATUNGA, N. 2020. Assessing the clinicians’ pathway to embed artificial intelligence for assisted diagnostics of fracture detection. In Bach, K., Bunescu, R., Marling, C. and Wiratunga, N. (eds.) Knowledge discovery in healthcare data 2020: proceedings of the 5th Knowledge discovery in healthcare data international workshop 2020 (KDH 2020), co-located with 24th European Artificial intelligence conference (ECAI 2020), 29-30 August 2020, [virtual conference]. CEUR workshop proceedings, 2675. Aachen: CEUR-WS [online], pages 63-70. Available from: http://ceur-ws.org/Vol-2675/paper10.pdf

Fracture detection has been a long-standing paradigm in the medical imaging community. Many algorithms and systems have been presented to accurately detect and classify images in terms of the presence and absence of fractures in different parts of the...

Locality sensitive batch selection for triplet networks. (2020)
Conference Proceeding
MARTIN, K., WIRATUNGA, N. and SANI, S. 2020. Locality sensitive batch selection for triplet networks. In Proceedings of the 2020 Institute of Electrical and Electronics Engineers (IEEE) International joint conference on neural networks (IEEE IJCNN 2020), part of the 2020 IEEE World congress on computational intelligence (IEEE WCCI 2020) and co-located with the 2020 IEEE congress on evolutionary computation (IEEE CEC 2020) and the 2020 IEEE International fuzzy systems conference (FUZZ-IEEE 2020), 19-24 July 2020, [virtual conference]. Piscataway: IEEE [online], article ID 9207538. Available from: https://doi.org/10.1109/IJCNN48605.2020.9207538

Triplet networks are deep metric learners which learn to optimise a feature space using similarity knowledge gained from training on triplets of data simultaneously. The architecture relies on the triplet loss function to optimise its weights based u...
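The triplet loss this abstract refers to is standard in the metric-learning literature: it penalises an embedding when the anchor-positive distance is not smaller than the anchor-negative distance by at least a margin. A minimal sketch (illustrative only, not the paper's implementation; the margin value is an assumption):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss on embedded vectors: pull the positive
    towards the anchor, push the negative at least `margin` away."""
    d_pos = np.linalg.norm(anchor - positive)   # same-class distance
    d_neg = np.linalg.norm(anchor - negative)   # different-class distance
    return max(0.0, d_pos - d_neg + margin)
```

The loss is zero once the negative is sufficiently far, which is why batch selection (which triplets to train on) matters: easy triplets contribute no gradient.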

Preface: case-based reasoning and deep learning. (2020)
Conference Proceeding
MARTIN, K., KAPETANAKIS, S., WIJEKOON, A., AMIN, K. and MASSIE, S. 2019. Preface: case-based reasoning and deep learning. In Kapetanakis, S. and Borck, H. (eds.) Proceedings of the 27th International conference on case-based reasoning workshop (ICCBR-WS19), co-located with the 27th International conference on case-based reasoning (ICCBR19), 8-12 September 2019, Otzenhausen, Germany. CEUR workshop proceedings, 2567. Aachen: CEUR-WS [online], pages 6-7. Available from: http://ceur-ws.org/Vol-2567/cbr_dl_preface.pdf

Recent advances in deep learning (DL) have helped to usher in a new wave of confidence in the capability of artificial intelligence. Increasingly, we are seeing DL architectures outperform long-established state-of-the-art algorithms in a numb...

Human activity recognition with deep metric learners. (2020)
Conference Proceeding
MARTIN, K., WIJEKOON, A. and WIRATUNGA, N. 2019. Human activity recognition with deep metric learners. In Kapetanakis, S. and Borck, H. (eds.) Proceedings of the 27th International conference on case-based reasoning workshop (ICCBR-WS19), co-located with the 27th International conference on case-based reasoning (ICCBR19), 8-12 September 2019, Otzenhausen, Germany. CEUR workshop proceedings, 2567. Aachen: CEUR-WS [online], pages 8-17. Available from: http://ceur-ws.org/Vol-2567/paper1.pdf

Establishing a strong foundation for similarity-based retrieval is a top priority in Case-Based Reasoning (CBR) systems. Deep Metric Learners (DMLs) are a group of neural network architectures which learn to optimise case representations for similarity-...

Developing a catalogue of explainability methods to support expert and non-expert users. (2019)
Conference Proceeding
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2019. Developing a catalogue of explainability methods to support expert and non-expert users. In Bramer, M. and Petridis, M. (eds.) Artificial intelligence XXXVI: proceedings of the 39th British Computer Society's Specialist Group on Artificial Intelligence (SGAI) international Artificial intelligence conference 2019 (AI 2019), 17-19 December 2019, Cambridge, UK. Lecture notes in computer science, 11927. Cham: Springer [online], pages 309-324. Available from: https://doi.org/10.1007/978-3-030-34885-4_24

Organisations face growing legal requirements and ethical responsibilities to ensure that decisions made by their intelligent systems are explainable. However, provisioning of an explanation is often application dependent, causing an extended design...

GramError: a quality metric for machine generated songs. (2018)
Conference Proceeding
DAVIES, C., WIRATUNGA, N. and MARTIN, K. 2018. GramError: a quality metric for machine generated songs. In Bramer, M. and Petridis, M. (eds.) Artificial intelligence XXXV: proceedings of the 38th British Computer Society's Specialist Group on Artificial Intelligence (SGAI) International conference on innovative techniques and applications of artificial intelligence (AI-2018), 11-13 December 2018, Cambridge, UK. Lecture notes in computer science, 11311. Cham: Springer [online], pages 184-190. Available from: https://doi.org/10.1007/978-3-030-04191-5_16

This paper explores whether a simple grammar-based metric can accurately predict human opinion of machine-generated song lyrics quality. The proposed metric considers the percentage of words written in natural English and the number of grammatical er...

Risk information recommendation for engineering workers. (2018)
Conference Proceeding
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2018. Risk information recommendation for engineering workers. In Bramer, M. and Petridis, M. (eds.) Artificial intelligence XXXV: proceedings of the 38th British Computer Society's Specialist Group on Artificial Intelligence (SGAI) International conference on innovative techniques and applications of artificial intelligence (AI-2018), 11-13 December 2018, Cambridge, UK. Lecture notes in computer science, 11311. Cham: Springer [online], pages 311-325. Available from: https://doi.org/10.1007/978-3-030-04191-5_27

Within any sufficiently expertise-reliant and work-driven domain there is a requirement to understand the similarities between specific work tasks. Though mechanisms to develop similarity models for these areas do exist, in practice they have been cr...

Informed pair selection for self-paced metric learning in Siamese neural networks. (2018)
Conference Proceeding
MARTIN, K., WIRATUNGA, N., MASSIE, S. and CLOS, J. 2018. Informed pair selection for self-paced metric learning in Siamese neural networks. In Bramer, M. and Petridis, M. (eds.) Artificial intelligence XXXV: proceedings of the 38th British Computer Society's Specialist Group on Artificial Intelligence (SGAI) International conference on innovative techniques and applications of artificial intelligence (AI-2018), 11-13 December 2018, Cambridge, UK. Lecture notes in computer science, 11311. Cham: Springer [online], pages 34-49. Available from: https://doi.org/10.1007/978-3-030-04191-5_3

Siamese Neural Networks (SNNs) are deep metric learners that use paired instance comparisons to learn similarity. The neural feature maps learnt in this way provide useful representations for classification tasks. Learning in SNNs is not reliant on e...
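The pairwise comparison objective SNNs train on is commonly the contrastive loss: similar pairs are pulled together, dissimilar pairs pushed beyond a margin. A minimal sketch of that objective (illustrative, not the paper's code; the margin value is an assumption):

```python
import numpy as np

def contrastive_loss(x1, x2, similar, margin=1.0):
    """Pairwise contrastive loss on embedded vectors: similar pairs
    incur their squared distance, dissimilar pairs incur a penalty
    only while they sit inside the margin."""
    d = np.linalg.norm(x1 - x2)
    if similar:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

As with triplets, which pairs are presented (and in what order) shapes the gradient signal, which is the motivation for informed, self-paced pair selection.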

Digital interpretation of sensor-equipment diagrams. (2018)
Conference Proceeding
MORENO-GARCÍA, C.F. 2018. Digital interpretation of sensor-equipment diagrams. In Martin, K., Wiratunga, N. and Smith, L.S. (eds.) Proceedings of the 2018 Scottish Informatics and Computer Science Alliance (SCISA) workshop on reasoning, learning and explainability (ReaLX 2018), 27 June 2018, Aberdeen, UK. CEUR workshop proceedings, 2151. Aachen: CEUR-WS [online], session 2, paper 1. Available from: http://ceur-ws.org/Vol-2151/Paper_s2.pdf

A sensor-equipment diagram is a type of engineering drawing used in industrial practice that depicts the interconnectivity between a group of sensors and a portion of an Oil & Gas facility. The interpretation of these documents is not a straightf...