
Probing LLMs for logical reasoning / Manigrasso, Francesco; Schouten, Stefan; Morra, Lia; Bloem, Peter. - ELECTRONIC. - 14979:(2024), pp. 257-278. (Paper presented at the 18th International Conference on Neuro-symbolic Learning and Reasoning, held in Barcelona, Spain, September 9-12, 2024) [10.1007/978-3-031-71167-1_14].

Probing LLMs for logical reasoning

Francesco Manigrasso; Lia Morra
2024

Abstract

Recently, the question of what types of computation and cognition large language models (LLMs) are capable of has received increasing attention. With models clearly capable of convincingly faking true reasoning behavior, the question of whether they are also capable of real reasoning, and how the difference should be defined, becomes increasingly vexed. Here we introduce a new tool, Logic Tensor Probes (LTP), that may help to shed light on the problem. Logic Tensor Networks (LTNs) provide a neuro-symbolic framework for differentiable fuzzy logics. An LTP applies the LTN framework as a diagnostic tool to a pretrained LLM with frozen weights. This allows for the detection and localization of logical deductions within LLMs, enabling the use of first-order logic as a versatile modeling language for investigating their internal mechanisms. The LTP makes deductions from basic assertions and tracks whether the model makes the same deductions from the natural language equivalent and, if so, where in the model this happens. We validate our approach through proof-of-concept experiments on hand-crafted knowledge bases derived from WordNet and on smaller samples from FrameNet.
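To make the probing setup concrete, below is a minimal, hypothetical sketch in PyTorch of how an LTN-style probe could be attached to a frozen LLM. The model name, the toy facts, the probed layer, and the fuzzy operators are all illustrative assumptions, not the paper's implementation: a small MLP grounds each first-order predicate over frozen hidden states, and the probe is trained to maximize the fuzzy satisfaction of a tiny knowledge base consisting of ground facts plus the axiom ∀x (Dog(x) → Animal(x)).

```python
# Hypothetical sketch of a Logic Tensor Probe; names, toy facts, and the
# probed layer are illustrative assumptions, not the authors' exact setup.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class FuzzyPredicate(nn.Module):
    """Grounds a FOL predicate as an MLP mapping a hidden state to [0, 1]."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())
    def forward(self, h):
        return self.net(h).squeeze(-1)

def implies(a, b):
    """Reichenbach fuzzy implication: I(a, b) = 1 - a + a*b."""
    return 1.0 - a + a * b

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
llm = AutoModel.from_pretrained("bert-base-uncased")
llm.requires_grad_(False)  # the LLM stays frozen; only the probe is trained

# Toy knowledge base: ground facts plus the axiom forall x: Dog(x) -> Animal(x).
texts = ["the poodle barked", "the granite was heavy"]
is_dog = torch.tensor([1.0, 0.0])
is_animal = torch.tensor([1.0, 0.0])

LAYER = 6  # probe a single layer; repeating per layer localizes the deduction
batch = tok(texts, return_tensors="pt", padding=True)
with torch.no_grad():
    h = llm(**batch, output_hidden_states=True).hidden_states[LAYER][:, 0]  # [CLS]

Dog, Animal = FuzzyPredicate(h.size(-1)), FuzzyPredicate(h.size(-1))
opt = torch.optim.Adam([*Dog.parameters(), *Animal.parameters()], lr=1e-3)

for _ in range(300):
    # Truth of ground facts via fuzzy equality 1 - |pred - label|.
    facts = torch.cat([1 - (Dog(h) - is_dog).abs(),
                       1 - (Animal(h) - is_animal).abs()])
    axiom = implies(Dog(h), Animal(h))
    sat = torch.cat([facts, axiom]).mean()  # mean as a soft universal quantifier
    loss = 1.0 - sat  # maximize knowledge-base satisfaction
    opt.zero_grad(); loss.backward(); opt.step()

print(f"layer {LAYER}: KB satisfaction = {sat.item():.3f}")
```

Repeating this fit at every layer, and comparing the satisfaction reached on symbolic assertions against their natural language paraphrases, is one way such a probe could localize where in the network a deduction becomes recoverable, in the spirit of the detection-and-localization goal the abstract describes.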
ISBN: 978-3-031-71166-4
ISBN: 978-3-031-71167-1
Files in this record:

File: LTNsAsProbes (3).pdf
Embargoed until 10/09/2025
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 1.47 MB
Format: Adobe PDF

File: 978-3-031-71167-1_14.pdf
Restricted access
Type: 2a. Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 865.27 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2989997