This changes to that: combining causal and non-causal explanations to generate disease progression in capsule endoscopy.
Vats, Anuja; Mohammed, Ahmed; Pedersen, Marius; Wiratunga, Nirmalie
Due to the unequivocal need for understanding the decision processes of deep learning networks, both model-dependent and model-agnostic techniques have become very popular. Although both provide transparency for automated decision making, most methodologies either use the model's gradients (model-dependent) or ignore the model's internal states and reason over its behaviour/outcome on instances (model-agnostic). In this work, we propose a unified explanation approach that, given an instance, combines both model-dependent and model-agnostic explanations to produce an explanation set. The generated explanations are not only consistent in the neighbourhood of a sample but can also highlight causal relationships between image content and the outcome. We use the Wireless Capsule Endoscopy (WCE) domain to illustrate the effectiveness of our explanations. The saliency maps generated by our approach are comparable or better on the softmax information score.
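The abstract contrasts model-dependent (gradient-based) and model-agnostic explanations. As a generic illustration only, and not the authors' unified method, a model-dependent saliency map can be sketched for a toy linear softmax classifier, where the gradient of the target-class probability with respect to the input is computed in closed form:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def gradient_saliency(x, W, target):
    """Absolute gradient of the target-class softmax score w.r.t. input x.

    For p = softmax(W @ x), the derivative is
    d p_t / d x = p_t * (W_t - sum_k p_k * W_k).
    """
    p = softmax(W @ x)
    grad = p[target] * (W[target] - p @ W)
    return np.abs(grad)

# Toy example: 3 classes, 8 input features (stand-ins for image pixels).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)
sal = gradient_saliency(x, W, target=0)
print(sal.shape)  # one saliency value per input feature
```

In a real WCE pipeline the gradient would be taken through a deep network by automatic differentiation rather than this closed form, but the principle is the same: features with large gradient magnitude are highlighted as salient for the predicted class.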
VATS, A., MOHAMMED, A., PEDERSEN, M. and WIRATUNGA, N. This changes to that: combining causal and non-causal explanations to generate disease progression in capsule endoscopy. To be presented at the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023), 4-10 June 2023, Rhodes Island, Greece.
Conference Name: 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2023)
Conference Location: Rhodes Island, Greece
Start Date: Jun 4, 2023
End Date: Jun 10, 2023
Acceptance Date: Feb 15, 2023
Deposit Date: Feb 17, 2023
Publicly Available Date: Feb 17, 2023
Publisher: Institute of Electrical and Electronics Engineers
Keywords: Explainable AI; Counterfactual; Semifactual; Saliency map; Capsule endoscopy
VATS 2023 This changes to that (AAM)
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
You might also like
iSee: demonstration video. [video recording]
Clinical dialogue transcription error correction using Seq2Seq models.
Adapting semantic similarity methods for case-based reasoning in the Cloud.
How close is too close? Role of feature attributions in discovering counterfactual explanations.
A case-based approach for content planning in data-to-text generation.