
A Second-Order Perspective on Pruning at Initialization and Knowledge Transfer / Iurada, Leonardo; Occhiena, Beatrice; Tommasi, Tatiana. - 16167:(2026), pp. 194-206. (23rd International Conference on Image Analysis and Processing, ICIAP 2025, Rome (Italy), September 15–19, 2025) [10.1007/978-3-032-10185-3_16].

A Second-Order Perspective on Pruning at Initialization and Knowledge Transfer

Iurada, Leonardo; Occhiena, Beatrice; Tommasi, Tatiana
2026

Abstract

The widespread availability of pre-trained vision models has enabled numerous deep learning applications through their transferable representations. However, their computational and storage costs often limit practical deployment. Pruning-at-Initialization has emerged as a promising approach to compress models before training, enabling efficient task-specific adaptation. While conventional wisdom suggests that effective pruning requires task-specific data, this creates a challenge when downstream tasks are unknown in advance. In this paper, we investigate how data influences the pruning of pre-trained vision models. Surprisingly, pruning on one task retains the model’s zero-shot performance also on unseen tasks. Furthermore, fine-tuning these pruned models not only improves performance on original seen tasks but can recover held-out tasks’ performance. We attribute this phenomenon to the favorable loss landscapes induced by extensive pre-training on large-scale datasets.
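The Pruning-at-Initialization setting described in the abstract can be illustrated with a SNIP-style connection-sensitivity score, which ranks weights by |w · ∂L/∂w| on one batch of data and removes the lowest-scoring ones before any training. The sketch below is a generic, minimal illustration of this family of methods on a single linear layer, not the specific algorithm of the paper; the function name `snip_mask` and the squared-error loss are assumptions made for the example.

```python
import numpy as np

def snip_mask(W, X, y, sparsity):
    """Keep the (1 - sparsity) fraction of weights with the largest
    connection sensitivity |w * dL/dw|, computed on one batch (X, y).
    Illustrative SNIP-style sketch; not the paper's exact method."""
    # Gradient of the squared-error loss L = 0.5 * ||X W - y||^2 / n
    err = X @ W - y                  # residuals, shape (n, out)
    grad = X.T @ err / len(X)        # dL/dW, same shape as W
    score = np.abs(W * grad)         # per-weight connection sensitivity
    k = int(round((1.0 - sparsity) * W.size))   # number of weights to keep
    thresh = np.sort(score.ravel())[::-1][k - 1]
    return (score >= thresh).astype(W.dtype)    # binary keep-mask

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 8))         # one batch of task data
y = rng.normal(size=(32, 4))
W = rng.normal(size=(8, 4))          # "pre-trained" weights

mask = snip_mask(W, X, y, sparsity=0.75)
W_pruned = W * mask                  # prune before any fine-tuning
print(mask.mean())                   # fraction of surviving weights
```

In this setup the mask is computed from a single batch of one task's data, which mirrors the question the paper studies: whether such task-specific masks transfer to unseen tasks.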
ISBN: 9783032101846; 9783032101853
Files in this item:
File  Size  Format
Second_Order_Pruning_ICIAP.pdf

Embargo until 02/01/2027

Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 1.07 MB
Format: Adobe PDF
978-3-032-10185-3_16.pdf

Restricted access

Type: 2a. Editorial post-print / Version of Record
License: Non-public - Private/restricted access
Size: 1.45 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/3008748