Towards improving open-box hallucination detection in large language models (LLMs).
(2024)
Presentation / Conference Contribution
SURESH, M., ALJUNDI, R., NKISI-ORJI, I. and WIRATUNGA, N. 2024. Towards improving open-box hallucination detection in large language models (LLMs). In Martin, K., Salimi, P. and Wijayasekara, V. (eds.) 2024. SICSA REALLM workshop 2024: proceedings of the SICSA (Scottish Informatics and Computer Science Alliance) REALLM (Reasoning, explanation and applications of large language models) workshop (SICSA REALLM workshop 2024), 17 October 2024, Aberdeen, UK. CEUR workshop proceedings, 3822. Aachen: CEUR-WS [online], pages 1-10. Available from: https://ceur-ws.org/Vol-3822/paper1.pdf
Due to the increasing availability of Large Language Models (LLMs) through both proprietary and open-source releases, the adoption of LLMs across applications has drastically increased, making them commonplace in day-to-day life. Yet, the...