
Leveraging LLMs to Discover Causal Dependencies: A Case Study on a University Program / Diamantini, Claudia; Gobbi, Chiara; Mele, Alessandro; Potena, Domenico; Rossetti, Cristina. - 556:(2025), pp. 237-248. (Paper presented at the CAiSE 2025 Workshops, held in Vienna (AT), June 16–20, 2025) [10.1007/978-3-031-94931-9_19].

Leveraging LLMs to Discover Causal Dependencies: A Case Study on a University Program

Cristina Rossetti
2025

Abstract

Defining a process model is essential for understanding organizational workflows and guiding future activities. While process discovery techniques can derive models from event logs, designing a process model from scratch remains challenging, as it relies on human experts to interpret requirements. This work explores process model elicitation in the educational domain, where courses in a university program are treated as activities with prerequisite relationships. We propose a methodology leveraging Large Language Models to automatically infer causal dependencies among courses by analyzing their syllabi. Using pairwise prompting with step-back and chain-of-thought reasoning, the extracted dependencies are used to build a Directly-Follows Graph, subsequently evaluated by domain experts. Experimental results show that Large Language Models can effectively identify and justify course dependencies, offering insights that can enhance curriculum design and give students clearer guidance on course sequencing. The approach also highlights gaps in prerequisite structures, suggesting broader applications in educational planning and dynamic learning environments.
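The core elicitation step described in the abstract — querying an LLM once per ordered course pair and assembling the asserted prerequisites into a Directly-Follows Graph — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the course names, the `ask_llm` stub, and its hard-coded answers are assumptions standing in for real syllabi and a real model call with step-back and chain-of-thought prompting.

```python
from itertools import permutations

# Hypothetical course list; the paper draws pairs from an actual curriculum.
courses = ["Calculus", "Databases", "Data Mining"]

def ask_llm(a: str, b: str) -> bool:
    """Stub for one pairwise LLM query.

    In the methodology, the prompt would present the two syllabi and use
    step-back and chain-of-thought reasoning to decide whether course `a`
    is a prerequisite of course `b`. Here a toy answer set replaces the model.
    """
    toy_prerequisites = {("Calculus", "Data Mining"), ("Databases", "Data Mining")}
    return (a, b) in toy_prerequisites

# Query every ordered pair once; each positive answer becomes a directed
# edge (prerequisite -> dependent course) of the Directly-Follows Graph.
dfg = {c: set() for c in courses}
for a, b in permutations(courses, 2):
    if ask_llm(a, b):
        dfg[a].add(b)
```

The resulting adjacency structure is what domain experts would then review, edge by edge, together with the model's justifications.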
ISBN: 978-3-031-94930-2
ISBN: 978-3-031-94931-9
Files in this item:

978-3-031-94931-9_19.pdf
Access: restricted
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 856.47 kB
Format: Adobe PDF

2025___PMUD_CAiSE.pdf
Embargo until: 14/06/2026
Description: Postprint version
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 473.4 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3003927