From manual drawing to AI-driven drawing: Evolution and Perspectives in Architectural Design / Zucco, Michele; Osello, Anna. In: Ethics and Aesthetics of Artificial Images / edited by E. Arielli. Electronic edition. [s.l.]: Orthotes edizioni, in press. ISBN 978-88-9314-548-0.
From manual drawing to AI-driven drawing: Evolution and Perspectives in Architectural Design
Michele Zucco; Anna Osello
In press
Abstract
The paper investigates the epistemic and operational shift from manual drawing to AI-driven image generation in architectural education and design. Within two multidisciplinary laboratories at the Politecnico di Torino—one BIM-based and one focused on urban heritage—the study analyses how students employ generative tools (text-to-image and image-to-image) to translate conceptual intentions into visual representations. Through a structured methodology combining semantic analysis, information decomposition, and a Level of Relevance (LoR) metric, forty prompt–output pairs were evaluated across contextual, formal, and technological dimensions. Results reveal that generative imagery excels at prefiguring form, atmosphere, and spatial scenarios but remains weak in encoding material, constructive, and historical information. Prompt precision and the use of reference images improve stylistic alignment yet do not ensure technical plausibility. Conceptually, AI emerges as a cognitive prosthesis that augments, rather than replaces, design reasoning—requiring human agency for interpretation, validation, and contextual grounding. The paper argues for a supervised, human-in-the-loop approach where drawing, prompting, and critical verification form a reflective continuum toward responsible and informed architectural creativity.
Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.
https://hdl.handle.net/11583/3010229
Warning: the displayed data have not been validated by the university.
