Research Repository

MALAVIKA SURESH's Outputs (4)

Towards improving open-box hallucination detection in large language models (LLMs). (2024)
Presentation / Conference Contribution
SURESH, M., ALJUNDI, R., NKISI-ORJI, I. and WIRATUNGA, N. 2024. Towards improving open-box hallucination detection in large language models (LLMs). In Martin, K., Salimi, P. and Wijayasekara, V. (eds.) SICSA REALLM workshop 2024: proceedings of the SICSA (Scottish Informatics and Computer Science Alliance) REALLM (Reasoning, explanation and applications of large language models) workshop (SICSA REALLM workshop 2024), 17 October 2024, Aberdeen, UK. CEUR workshop proceedings, 3822. Aachen: CEUR-WS [online], pages 1-10. Available from: https://ceur-ws.org/Vol-3822/paper1.pdf

Due to the increasing availability of Large Language Models (LLMs) through both proprietary and open-source releases, the adoption of LLMs across applications has drastically increased, making them commonplace in day-to-day life. Yet, the...

Detecting contradictory COVID-19 drug efficacy claims from biomedical literature. (2023)
Presentation / Conference Contribution
SOSA, D.N., SURESH, M., POTTS, C. and ALTMAN, R.B. 2023. Detecting contradictory COVID-19 drug efficacy claims from biomedical literature. In Rogers, A., Boyd-Graber, J. and Okazaki, N. (eds.) Proceedings of the 61st Association for Computational Linguistics annual meeting 2023 (ACL 2023), 9-14 July 2023, Toronto, Canada. Stroudsburg, PA: ACL [online], volume 2: short papers, pages 694-713. Available from: https://doi.org/10.18653/v1/2023.acl-short.61

The COVID-19 pandemic created a deluge of questionable and contradictory scientific claims about drug efficacy – an "infodemic" with lasting consequences for science and society. In this work, we argue that NLP models can help domain experts distill...

Explainable weather forecasts through an LSTM-CBR twin system. (2023)
Presentation / Conference Contribution
PIRIE, C., SURESH, M., SALIMI, P., PALIHAWADANA, C. and NANAYAKKARA, G. 2022. Explainable weather forecasts through an LSTM-CBR twin system. In Reuss, P. and Schönborn, J. (eds.) Workshop proceedings of the 30th International conference on case-based reasoning (ICCBR-WS 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3389. Aachen: CEUR-WS [online], pages 256-260. Available from: https://ceur-ws.org/Vol-3389/ICCBR_2022_XCBR_Challenge_RGU.pdf

In this paper, we explore two methods for explaining LSTM-based temperature forecasts using the previous 14-day progressions of humidity and pressure. First, we propose and evaluate an LSTM-CBR twin system that generates nearest neighbors that can be vis...

CBR for interpretable response selection in conversational modelling. (2022)
Presentation / Conference Contribution
SURESH, M. 2022. CBR for interpretable response selection in conversational modelling. In Reuss, P. and Schönborn, J. (eds.) Proceedings of the 30th Doctoral consortium of the international conference on case-based reasoning (ICCBR-DC 2022), co-located with the 30th International conference on case-based reasoning (ICCBR 2022), 12-15 September 2022, Nancy, France. CEUR workshop proceedings, 3418. Aachen: CEUR-WS [online], pages 28-33. Available from: https://ceur-ws.org/Vol-3418/ICCBR_2022_DC_paper25.pdf

Current state-of-the-art dialogue systems are increasingly complex. When used in applications such as motivational interviewing, the lack of interpretability is a concern. CBR offers to bridge this gap by using the most similar past cases to decide t...