Demystifying the black box: the importance of interpretability of predictive models in neurocritical care.
Authors
Moss, Laura; Corsar, David; Shaw, Martin; Piper, Ian; Hawthorne, Christopher
Abstract
Neurocritical care patients are a complex patient population, and to aid clinical decision-making, many models and scoring systems have previously been developed. More recently, techniques from the field of machine learning have been applied to neurocritical care patient data to develop models with high levels of predictive accuracy. However, although these recent models appear clinically promising, their interpretability has often not been considered and they tend to be black box models, making it extremely difficult to understand how the model came to its conclusion. Interpretable machine learning methods have the potential to provide the means to overcome some of these issues but are largely unexplored within the neurocritical care domain. This article examines existing models used in neurocritical care from the perspective of interpretability. Further, the use of interpretable machine learning will be explored, in particular the potential benefits and drawbacks that the techniques may have when applied to neurocritical care data. Finding a solution to the lack of model explanation, transparency, and accountability is important because these issues have the potential to contribute to model trust and clinical acceptance, and, increasingly, regulation is stipulating a right to explanation for decisions made by models and algorithms. To ensure that the prospective gains from sophisticated predictive models to neurocritical care provision can be realized, it is imperative that interpretability of these models is fully considered.
Citation
MOSS, L., CORSAR, D., SHAW, M., PIPER, I. and HAWTHORNE, C. 2022. Demystifying the black box: the importance of interpretability of predictive models in neurocritical care. Neurocritical care [online], 37(Supplement 2): big data in neurocritical care, pages 185-191. Available from: https://doi.org/10.1007/s12028-022-01504-4
Journal Article Type | Article |
---|---|
Acceptance Date | Mar 29, 2022 |
Online Publication Date | May 6, 2022 |
Publication Date | Aug 31, 2022 |
Deposit Date | May 9, 2022 |
Publicly Available Date | May 9, 2022 |
Journal | Neurocritical care |
Print ISSN | 1541-6933 |
Electronic ISSN | 1556-0961 |
Publisher | Springer |
Peer Reviewed | Yes |
Volume | 37 |
Issue | Supplement 2 |
Pages | 185-191 |
DOI | https://doi.org/10.1007/s12028-022-01504-4 |
Keywords | Machine learning; Algorithms; Critical care; Artificial intelligence; Clinical decision-making |
Public URL | https://rgu-repository.worktribe.com/output/1649742 |
Files
MOSS 2022 Demystifying the black box (VOR) — PDF, 730 Kb
Publisher Licence URL
https://creativecommons.org/licenses/by/4.0/
Copyright Statement
© 2022 The Author(s).