PEDRAM SALIMI p.salimi@rgu.ac.uk
Research Student
Professor Nirmalie Wiratunga n.wiratunga@rgu.ac.uk
Associate Dean for Research
Dr David Corsar d.corsar1@rgu.ac.uk
Senior Lecturer
Dr Kyle Martin k.martin3@rgu.ac.uk
Editor
Xiaomeng Ye
Editor
Large Language Models (LLMs) have demonstrated impressive conversational capabilities, yet their susceptibility to hallucinations and inconsistent recommendations poses significant risks in high-stakes domains such as finance. This paper presents an interactive chatbot for loan application guidance that leverages a case-based reasoning (CBR) approach to generate actionable counterfactual explanations within an agentic framework. Our system employs a supervisor agent, built using the LangGraph framework, to orchestrate four specialised agents: a classifier agent that provides an initial loan prediction, a causally-aware counterfactual explanation agent that proposes minimal yet feasible modifications to reverse an unfavourable decision, a Feature Actionability Taxonomy (FAT) agent that updates user-specific immutability constraints based on feedback, and a template-based natural language generation (NLG) agent that transforms counterfactual suggestions into clear, user-friendly explanations. A key strength of our design is the automated feedback loop: when users indicate that certain suggestions are unworkable, the FAT agent revises the constraints and instructs the counterfactual generation agent to produce a refined explanation. We detail the system architecture and workflow and outline an experimental plan that compares our full agentic chatbot to ablated variants and an LLM-only baseline. Finally, we outline a planned user study to evaluate how controlled reasoning affects trust in high-stakes lending.
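The supervisor-and-feedback loop described in the abstract could be sketched as follows. This is an illustrative stub only, not the authors' LangGraph implementation: the agent functions, feature names, and thresholds are all hypothetical, and each agent's internals are reduced to a few lines.

```python
# Illustrative sketch of the supervisor routing among four agents, with the
# FAT-driven feedback loop that tightens immutability constraints and asks
# the counterfactual agent for a refined suggestion. All logic is stubbed.

def classifier_agent(case):
    # Stub classifier: reject applications below an income threshold.
    return "approved" if case["income"] >= 50000 else "rejected"

def counterfactual_agent(case, immutable):
    # Stub counterfactual search: raise the first feature the user has
    # not marked immutable by 20%.
    for feature in ("income", "credit_score"):
        if feature not in immutable:
            return {feature: case[feature] * 1.2}
    return None  # no feasible counterfactual remains

def fat_agent(immutable, feedback):
    # Update user-specific immutability constraints from feedback,
    # i.e. features the user says they cannot change.
    return immutable | set(feedback)

def nlg_agent(suggestion):
    # Template-based rendering of a counterfactual suggestion.
    return "; ".join(f"increase {k} to about {v:.0f}"
                     for k, v in suggestion.items())

def supervisor(case, feedback_rounds):
    # Orchestrates: classify, then loop counterfactual -> NLG, revising
    # constraints after each round of user feedback.
    decision = classifier_agent(case)
    if decision == "approved":
        return decision, []
    immutable, explanations = set(), []
    for feedback in [[]] + feedback_rounds:
        immutable = fat_agent(immutable, feedback)
        suggestion = counterfactual_agent(case, immutable)
        if suggestion is None:
            break
        explanations.append(nlg_agent(suggestion))
    return decision, explanations

decision, msgs = supervisor({"income": 30000, "credit_score": 600},
                            feedback_rounds=[["income"]])
# The first suggestion targets income; once the user marks income as
# immutable, the refined suggestion targets credit_score instead.
```

In the paper's actual system this routing is handled by a LangGraph supervisor agent and the counterfactual agent is causally aware; the stub above only mirrors the control flow.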
SALIMI, P., WIRATUNGA, N. and CORSAR, D. 2025. Agentic CBR in action: empowering loan approvals through interactive, counterfactual explanations. In Martin, K. and Ye, X. (eds.) ICCBR-WS 2025: joint proceedings of the workshops and doctoral consortium at the 33rd International conference on case-based reasoning (ICCBR-WS 2025) co-located with the 33rd International conference on case-based reasoning (ICCBR 2025), 30 June 2025, Biarritz, France. CEUR workshop proceedings, 3993. Aachen: CEUR-WS [online], pages 27-42. Available from: https://ceur-ws.org/Vol-3993/paper3.pdf
Presentation Conference Type | Conference Paper (published) |
---|---|
Conference Name | 33rd International conference on case-based reasoning workshops and doctoral consortium (ICCBR-WS 2025) co-located with the 33rd International conference on case-based reasoning (ICCBR 2025) |
Start Date | Jun 30, 2025 |
Acceptance Date | Apr 6, 2025 |
Online Publication Date | Jun 12, 2025 |
Publication Date | Jul 8, 2025 |
Deposit Date | Jul 31, 2025 |
Publicly Available Date | Jul 31, 2025 |
Publisher | CEUR-WS |
Peer Reviewed | Peer Reviewed |
Pages | 27-42 |
Series Title | CEUR workshop proceedings |
Series Number | 3993 |
Series ISSN | 1613-0073 |
Book Title | ICCBR-WS 2025 |
Keywords | Conversational AI; Counterfactual explanations; Agentic workflow; CBR; Large language models (LLM); Hallucinations |
Public URL | https://rgu-repository.worktribe.com/output/2941443 |
Publisher URL | https://ceur-ws.org/Vol-3993/ |
SALIMI 2025 Agentic CBR in action (VOR), PDF, 1.5 Mb
Publisher Licence URL: https://creativecommons.org/licenses/by/4.0/
Copyright Statement: © 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).