Ariosto, S.; Pacelli, R.; Ginelli, F.; Gherardi, M.; Rotondo, P. Universal mean-field upper bound for the generalization gap of deep neural networks. Physical Review E 105(6), 064309 (2022). ISSN: 2470-0053. DOI: 10.1103/PhysRevE.105.064309

Universal mean-field upper bound for the generalization gap of deep neural networks

R. Pacelli; M. Gherardi
2022

Abstract

Modern deep neural networks (DNNs) represent a formidable challenge for theorists: according to the commonly accepted probabilistic framework that describes their performance, these architectures should overfit due to the huge number of parameters to train, but in practice they do not. Here we employ results from replica mean field theory to compute the generalization gap of machine learning models with quenched features, in the teacher-student scenario and for regression problems with a quadratic loss function. Notably, this framework includes the case of DNNs where the last layer is optimized given a specific realization of the remaining weights. We show how these results, combined with ideas from statistical learning theory, provide a stringent asymptotic upper bound on the generalization gap of fully trained DNNs as a function of the size P of the dataset. In particular, in the limit of large P and N_out (where N_out is the size of the last layer), with N_out ≪ P, the generalization gap approaches zero faster than 2 N_out / P, for any choice of both architecture and teacher function. Notably, this result greatly improves existing bounds from statistical learning theory. We test our predictions on a broad range of architectures, from toy fully connected neural networks with a few hidden layers to state-of-the-art deep convolutional neural networks.
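
The asymptotic claim in the abstract can be restated, schematically, as the bound sketched below. This is a hedged paraphrase only: the symbol \epsilon_g for the generalization gap is our shorthand (not necessarily the paper's notation), the \lesssim sign is how we read "approaches zero faster than 2 N_out / P", and the precise prefactors and error terms are those derived in the paper.

    % Schematic restatement of the bound quoted in the abstract.
    % \epsilon_g(P): generalization gap of the fully trained network,
    % P: size of the training set, N_{\mathrm{out}}: size of the last layer.
    \epsilon_g(P) \;\lesssim\; \frac{2\,N_{\mathrm{out}}}{P},
    \qquad P \to \infty,\; N_{\mathrm{out}} \to \infty,\; N_{\mathrm{out}} \ll P.

The bound is claimed to hold for any architecture of the quenched layers and any teacher function, which is what makes it "universal" in the sense of the title.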
Files in this item:

universal.pdf
  Access: open access
  Type: 2. Post-print / Author's Accepted Manuscript
  License: Public - All rights reserved
  Size: 1 MB
  Format: Adobe PDF

PhysRevE.105.064309.pdf
  Access: not available
  Type: 2a. Post-print, publisher's version / Version of Record
  License: Non-public - Private/restricted access
  Size: 1.46 MB
  Format: Adobe PDF

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this item: https://hdl.handle.net/11583/2983564