AgentQuest: A Modular Benchmark Framework to Measure Progress and Improve LLM Agents / Gioacchini, Luca; Siracusano, Giuseppe; Sanvito, Davide; Gashteovski, Kiril; Friede, David; Bifulco, Roberto; Lawrence, Carolin. - Electronic. - 3:(2024), pp. 185-193. (Paper presented at the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, held in Mexico City, Mexico, June 16-21, 2024) [10.48550/arxiv.2404.06411].

AgentQuest: A Modular Benchmark Framework to Measure Progress and Improve LLM Agents

Luca Gioacchini;
2024

Abstract

The advances made by Large Language Models (LLMs) have led to the pursuit of LLM agents that can solve intricate, multi-step reasoning tasks. As with any research pursuit, benchmarking and evaluation are key cornerstones of efficient and reliable progress. However, existing benchmarks are often narrow and simply compute overall task success. To address these issues, we propose AgentQuest, a framework where (i) both benchmarks and metrics are modular and easily extensible through well-documented and easy-to-use APIs, and (ii) we offer two new evaluation metrics that can reliably track LLM agent progress while solving a task. We exemplify the utility of the metrics on two use cases wherein we identify common failure points and refine the agent architecture to obtain a significant performance increase. Together with the research community, we hope to extend AgentQuest further, and therefore we make it available at https://github.com/nec-research/agentquest.
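The metrics mentioned in the abstract track how far an agent gets within a task rather than only whether it eventually succeeds. As a rough, self-contained illustration (this is not the actual AgentQuest API; the names ProgressTracker, milestones, and progress_rate below are hypothetical), a milestone-based progress metric could be sketched in Python as follows:

# Hypothetical sketch of a milestone-based progress metric; all names are
# illustrative and do not come from the AgentQuest codebase.
from dataclasses import dataclass, field


@dataclass
class ProgressTracker:
    """Tracks which task milestones an agent has reached so far."""
    milestones: list[str]                      # ordered sub-goals of the task
    reached: set[str] = field(default_factory=set)

    def update(self, observation: str) -> None:
        # Mark every milestone whose marker string appears in the observation.
        for m in self.milestones:
            if m in observation:
                self.reached.add(m)

    @property
    def progress_rate(self) -> float:
        # Fraction of milestones reached so far; 1.0 means the task is solved.
        return len(self.reached) / len(self.milestones)


if __name__ == "__main__":
    tracker = ProgressTracker(milestones=["key_found", "door_opened", "exit_reached"])
    for obs in ["a key_found lies on the floor", "the door_opened with a creak"]:
        tracker.update(obs)
    print(f"progress rate: {tracker.progress_rate:.2f}")  # prints 0.67

Such a step-level score makes it possible to see where an agent stalls, which is the kind of failure analysis the abstract describes.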
Files in this record:
File: 2024.naacl-demo.19.pdf
Access: open access
Description: Post-print
Type: 2a Post-print editorial version / Version of Record
License: Creative Commons
Size: 593.89 kB
Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2989709