This changes to that: combining causal and non-causal explanations to generate disease progression in capsule endoscopy.
Vats, Anuja; Mohammed, Ahmed; Pedersen, Marius; Wiratunga, Nirmalie
Authors
Anuja Vats
Ahmed Mohammed
Marius Pedersen
Professor Nirmalie Wiratunga n.wiratunga@rgu.ac.uk (Associate Dean for Research)
Abstract
Due to the unequivocal need for understanding the decision processes of deep learning networks, both model-dependent and model-agnostic techniques have become very popular. Although both of these ideas provide transparency for automated decision making, most methodologies focus on either using the model gradients (model-dependent) or ignoring the model's internal states and reasoning only over its behavior/outcomes on instances (model-agnostic). In this work, we propose a unified explanation approach that, given an instance, combines both model-dependent and model-agnostic explanations to produce an explanation set. The generated explanations are not only consistent in the neighborhood of a sample but can also highlight causal relationships between image content and the outcome. We use the Wireless Capsule Endoscopy (WCE) domain to illustrate the effectiveness of our explanations. The saliency maps generated by our approach are comparable to or better than existing methods on the softmax information score.
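To make the two families of explanation contrasted in the abstract concrete, the sketch below pairs a gradient-based saliency map (model-dependent) with a simple occlusion probe of the classifier's softmax output (model-agnostic). This is an illustration only, not the authors' method: the ResNet-18 stand-in classifier, the file path frame.jpg and the occlusion scheme are assumptions, and the paper's model-agnostic component is built from counterfactual/semifactual explanations rather than occlusion.

```python
# Illustrative sketch only (assumed classifier, image path and occlusion scheme);
# the paper's model-agnostic component uses counterfactual/semifactual
# explanations rather than the simple occlusion probe shown here.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def gradient_saliency(img, target):
    """Model-dependent explanation: |d(class score)/d(input)| per pixel."""
    img = img.clone().requires_grad_(True)
    model(img.unsqueeze(0))[0, target].backward()
    return img.grad.abs().max(dim=0).values  # collapse color channels

def occlusion_map(img, target, patch=32):
    """Model-agnostic probe: softmax drop when a patch of the image is hidden."""
    with torch.no_grad():
        base = torch.softmax(model(img.unsqueeze(0)), dim=1)[0, target]
        heat = torch.zeros(img.shape[1] // patch, img.shape[2] // patch)
        for i in range(heat.shape[0]):
            for j in range(heat.shape[1]):
                masked = img.clone()
                masked[:, i*patch:(i+1)*patch, j*patch:(j+1)*patch] = 0.0
                heat[i, j] = base - torch.softmax(
                    model(masked.unsqueeze(0)), dim=1)[0, target]
    return heat

img = preprocess(Image.open("frame.jpg").convert("RGB"))  # assumed WCE frame
target = model(img.unsqueeze(0)).argmax().item()
saliency = gradient_saliency(img, target)   # fine-grained, gradient-based map
occlusion = occlusion_map(img, target)      # coarse, behavior-based map
```

The two maps illustrate the trade-off the abstract addresses: the gradient map reflects the model's internal sensitivities, while the occlusion map reflects only its outcomes on perturbed instances; the paper combines both views into one explanation set.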
Citation
VATS, A., MOHAMMED, A., PEDERSEN, M. and WIRATUNGA, N. [2023]. This changes to that: combining causal and non-causal explanations to generate disease progression in capsule endoscopy. To be presented at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023), 4-10 June 2023, Rhodes Island, Greece.
| Conference Name | 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023) |
| --- | --- |
| Conference Location | Rhodes Island, Greece |
| Start Date | Jun 4, 2023 |
| End Date | Jun 10, 2023 |
| Acceptance Date | Feb 15, 2023 |
| Deposit Date | Feb 17, 2023 |
| Publicly Available Date | Feb 17, 2023 |
| Publisher | Institute of Electrical and Electronics Engineers |
| Keywords | Explainable AI; Counterfactual; Semifactual; Saliency map; Capsule endoscopy |
| Public URL | https://rgu-repository.worktribe.com/output/1888154 |
Files
VATS 2023 This changes to that (AAM) (PDF, 14.1 MB)
Copyright Statement
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
You might also like
iSee: demonstration video. [video recording]
(2023)
Digital Artefact
Clinical dialogue transcription error correction using Seq2Seq models.
(2022)
Conference Proceeding
Adapting semantic similarity methods for case-based reasoning in the Cloud.
(2022)
Conference Proceeding
How close is too close? Role of feature attributions in discovering counterfactual explanations.
(2022)
Conference Proceeding
A case-based approach for content planning in data-to-text generation.
(2022)
Conference Proceeding