Dr Kyle Martin's Outputs (3)

iSee: a case-based reasoning platform for the design of explanation experiences. (2024)
Journal Article
CARO-MARTÍNEZ, M., RECIO-GARCÍA, J.A., DÍAZ-AGUDO, B., DARIAS, J.M., WIRATUNGA, N., MARTIN, K., WIJEKOON, A., NKISI-ORJI, I., CORSAR, D., PRADEEP, P., BRIDGE, D. and LIRET, A. 2024. iSee: a case-based reasoning platform for the design of explanation experiences. Knowledge-based systems [online], 302, article number 112305. Available from: https://doi.org/10.1016/j.knosys.2024.112305

Explainable Artificial Intelligence (XAI) is an emerging field within Artificial Intelligence (AI) that has provided many methods that enable humans to understand and interpret the outcomes of AI systems. However, deciding on the best explanation app...

Computer vision and machine learning for medical image analysis: recent advances, challenges, and way forward. (2022)
Journal Article
ELYAN, E., VUTTIPITTAYAMONGKOL, P., JOHNSTON, P., MARTIN, K., MCPHERSON, K., MORENO-GARCIA, C.F., JAYNE, C. and SARKER, M.M.K. 2022. Computer vision and machine learning for medical image analysis: recent advances, challenges, and way forward. Artificial intelligence surgery [online], 2, pages 24-45. Available from: https://doi.org/10.20517/ais.2021.15

The recent development in the areas of deep learning and deep convolutional neural networks has significantly progressed and advanced the field of computer vision (CV) and image analysis and understanding. Complex tasks such as classifying and segmen...

Evaluating explainability methods intended for multiple stakeholders. (2021)
Journal Article
MARTIN, K., LIRET, A., WIRATUNGA, N., OWUSU, G. and KERN, M. 2021. Evaluating explainability methods intended for multiple stakeholders. KI - Künstliche Intelligenz [online], 35(3-4), pages 397-411. Available from: https://doi.org/10.1007/s13218-020-00702-6

Explanation mechanisms for intelligent systems are typically designed to respond to specific user needs, yet in practice these systems tend to have a wide variety of users. This can present a challenge to organisations looking to satisfy the explanat...