
A Novel Multi-Step-Prompt Approach for LLM-based Q&As on Banking Supervisory Regulations

Daniele Licari (Conceptualization); Praveen Bushipaka (Software); Alessandro De Gregorio; Tommaso Cucinotta (Supervision)

2025-01-01

Abstract

This paper investigates the use of large language models (LLMs) in analysing and answering questions related to banking supervisory regulations. We propose a multi-step-prompt approach that enriches the context provided to the LLM with relevant articles from the Capital Requirements Regulation (CRR). We compare our method against standard ‘zero-shot’ prompting, where the LLM answers are solely based on its pre-trained knowledge, and a standard ‘few-shot’ prompting, where the LLM is given only a limited number of examples of questions and answers to draw on each time. To assess the quality of the answers returned by the LLM, we also build an ‘LLM evaluator’ which, for each question, compares the correctness and completeness of the answer resulting from our multi-step prompt approach and from the two standard prompting methods with the official answer made available by the European Banking Authority (EBA), which is taken as a benchmark. Our findings on inquiries concerning Liquidity Risk rules indicate that our multi-step approach significantly improves the quality of LLM-generated answers, offering the analyst a valuable starting point to formulate appropriate answers to particularly complex questions.
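The multi-step idea outlined in the abstract (first retrieve the relevant CRR articles, then feed them to the LLM as enriched context) can be illustrated with a minimal sketch. The article excerpts, the word-overlap retriever, and all function names below are invented for illustration; they are not the paper's actual retrieval method or data.

```python
# Toy sketch of a multi-step prompt: step 1 retrieves CRR articles relevant
# to the question, step 2 builds an enriched prompt for the LLM.

def retrieve_articles(question, articles, top_k=1):
    """Rank articles by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        articles.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question, articles):
    """Enrich the LLM context with the retrieved regulation text."""
    retrieved = retrieve_articles(question, articles)
    context = "\n".join(f"{aid}: {text}" for aid, text in retrieved)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

# Invented CRR excerpts, for illustration only.
crr = {
    "Art. 412": "Institutions shall hold liquid assets to cover liquidity outflows.",
    "Art. 92": "Institutions shall satisfy own funds requirements at all times.",
}

prompt = build_prompt("How are liquidity outflows covered?", crr)
```

In a real pipeline the keyword-overlap retriever would be replaced by a proper retrieval step over the full CRR text, and the resulting prompt sent to the LLM.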
Licari, Daniele; Benedetto, Canio; Bovi, Daniele; Bushipaka, Praveen; De Gregorio, Alessandro; De Leonardis, Marco; Cucinotta, Tommaso (2025)
Files in this record:
No files are associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this record: https://hdl.handle.net/11568/1326321
Warning: the displayed data have not been validated by the university.
