Demystifying the black box: the importance of interpretability of predictive models in neurocritical care.
Moss, Laura; Corsar, David; Shaw, Martin; Piper, Ian; Hawthorne, Christopher
Dr David Corsar firstname.lastname@example.org
Neurocritical care patients are a complex patient population, and to aid clinical decision-making, many models and scoring systems have previously been developed. More recently, techniques from the field of machine learning have been applied to neurocritical care patient data to develop models with high levels of predictive accuracy. However, although these recent models appear clinically promising, their interpretability has often not been considered and they tend to be black box models, making it extremely difficult to understand how the model came to its conclusion. Interpretable machine learning methods have the potential to provide the means to overcome some of these issues but are largely unexplored within the neurocritical care domain. This article examines existing models used in neurocritical care from the perspective of interpretability. Further, the use of interpretable machine learning will be explored, in particular the potential benefits and drawbacks that the techniques may have when applied to neurocritical care data. Finding a solution to the lack of model explanation, transparency, and accountability is important because these issues have the potential to contribute to model trust and clinical acceptance, and, increasingly, regulation is stipulating a right to explanation for decisions made by models and algorithms. To ensure that the prospective gains from sophisticated predictive models to neurocritical care provision can be realized, it is imperative that interpretability of these models is fully considered.
MOSS, L., CORSAR, D., SHAW, M., PIPER, I. and HAWTHORNE, C. 2022. Demystifying the black box: the importance of interpretability of predictive models in neurocritical care. Neurocritical care [online], 37(Supplement 2): big data in neurocritical care, pages 185-191. Available from: https://doi.org/10.1007/s12028-022-01504-4
Journal Article Type: Article
Acceptance Date: Mar 29, 2022
Online Publication Date: May 6, 2022
Publication Date: Aug 31, 2022
Deposit Date: May 9, 2022
Publicly Available Date: May 9, 2022
Peer Reviewed: Peer reviewed
Keywords: Machine learning; Algorithms; Critical care; Artificial intelligence; Clinical decision-making