Counterfactual explanations focus on 'actionable knowledge' to help end-users understand how a Machine Learning model outcome could be changed to a more desirable outcome. For this purpose a counterfactual explainer needs to be able to reason with similarity knowledge in order to discover input dependencies that relate to outcome changes. Identifying the minimum subset of feature changes needed to action a change in the decision is an interesting challenge for counterfactual explainers. In this paper we show how feature relevance based explainers (e.g. LIME, SHAP) can inform a counterfactual explainer to identify the minimum subset of 'actionable features'. We demonstrate our DisCERN (Discovering Counterfactual Explanations using Relevance Features from Neighbourhoods) algorithm on three datasets and compare against the widely used counterfactual approach DiCE. Our preliminary results show DisCERN to be a viable strategy that should be adopted to minimise the number of actionable changes.
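The core idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a feature relevance ordering (as might be obtained from LIME or SHAP), and substitutes values from the nearest unlike neighbour into the query, most relevant feature first, stopping at the first prediction flip. The toy classifier and data are assumptions for demonstration only.

```python
def discern_sketch(query, nun, relevance_order, predict):
    """Return (counterfactual, changed_features).

    query           -- feature vector with the undesired outcome
    nun             -- nearest unlike neighbour (has the desired outcome)
    relevance_order -- feature indices, most relevant first (e.g. from LIME/SHAP)
    predict         -- black-box classifier returning a class label
    """
    target = predict(nun)
    cf = list(query)
    changed = []
    for i in relevance_order:
        cf[i] = nun[i]             # adopt the neighbour's value for feature i
        changed.append(i)
        if predict(cf) == target:  # stop at the first class flip
            break
    return cf, changed

# Toy linear classifier standing in for the black-box model (an assumption).
predict = lambda x: 1 if x[0] + x[1] > 10 else 0

query = [2, 3, 7]  # predicted 0 (undesired outcome)
nun = [8, 6, 1]    # predicted 1 (desired outcome)
cf, changed = discern_sketch(query, nun, relevance_order=[0, 1, 2], predict=predict)
print(cf, changed)  # → [8, 3, 7] [0]: one substitution flips the class
```

Ordering substitutions by relevance is what keeps the set of changed features small: the features most responsible for the current outcome are the first candidates for change.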
WIRATUNGA, N., WIJEKOON, A., NKISI-ORJI, I., MARTIN, K., PALIHAWADANA, C. and CORSAR, D. 2021. Actionable feature discovery in counterfactuals using feature relevance explainers. In Borck, H., Eisenstadt, V., Sánchez-Ruiz, A. and Floyd, M. (eds.) ICCBR 2021 workshop proceedings (ICCBR-WS 2021): workshop proceedings for the 29th International conference on case-based reasoning co-located with the 29th International conference on case-based reasoning (ICCBR 2021), 13-16 September 2021, Salamanca, Spain [virtual conference]. CEUR-WS proceedings, 3017. Aachen: CEUR-WS [online], pages 63-74. Available from: http://ceur-ws.org/Vol-3017/101.pdf