ConQ: Binary Quantization of Neural Networks via Concave Regularization / Migliorati, Andrea; Fracastoro, Giulia; Fosson, Sophie; Bianchi, Tiziano; Magli, Enrico. - (2024), pp. 1-6. (Paper presented at the 34th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2024, held in London (UK), 22-25 September 2024) [10.1109/mlsp58920.2024.10734837].

ConQ: Binary Quantization of Neural Networks via Concave Regularization

Migliorati, Andrea; Fracastoro, Giulia; Fosson, Sophie; Bianchi, Tiziano; Magli, Enrico
2024

Abstract

The increasing demand for deep neural networks (DNNs) in resource-constrained systems fuels interest in heavily quantized architectures such as networks with binarized weights. However, despite significant progress in the field, the gap with full-precision performance is far from closed. Today's most effective methods for quantization are rooted in proximal gradient descent theory. In this work, we propose ConQ, a novel concave regularization approach to train effective DNNs with binarized weights. Motivated by a theoretical investigation, we argue that the proposed concave regularizer, which removes the singularity point at 0, has a more effective shape than previously considered models in terms of accuracy and convergence rate. We present a theoretical convergence analysis of ConQ, with specific insights on both convex and non-convex settings. An extensive experimental evaluation shows that ConQ outperforms competing regularization methods for networks with binarized weights in terms of accuracy.
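The abstract describes ConQ only at a high level. As a minimal sketch of the general idea, the PyTorch snippet below trains with a smooth concave binarization penalty added to the task loss: r(w) = λ(1 − w²) is concave, has no singularity at 0, and, with weights clipped to [−1, 1], attains its minima at the binary values ±1. The specific penalty, the clipping step, and all names (concave_binarization_penalty, train_step, lam) are illustrative assumptions, not the paper's actual regularizer or proximal update.

```python
import torch

def concave_binarization_penalty(model, lam=1e-4):
    """Sum of lam * (1 - w^2) over all weights.

    Concave and smooth everywhere (no kink at 0); assuming weights are
    kept in [-1, 1], it is minimized at the binary values w = +/- 1.
    """
    penalty = 0.0
    for p in model.parameters():
        penalty = penalty + lam * (1.0 - p.pow(2)).sum()
    return penalty

def train_step(model, optimizer, loss_fn, x, y, lam=1e-4):
    """One penalized gradient step followed by projection onto [-1, 1]."""
    optimizer.zero_grad()
    loss = loss_fn(model(x), y) + concave_binarization_penalty(model, lam)
    loss.backward()
    optimizer.step()
    # Keep weights in [-1, 1] so the concave penalty pushes them toward +/- 1.
    with torch.no_grad():
        for p in model.parameters():
            p.clamp_(-1.0, 1.0)
    return loss.item()

# At inference, binary weights would be obtained as torch.sign(w).
```

Note that this plain penalized gradient step stands in for the proximal gradient updates the abstract alludes to; the paper's convergence analysis concerns the latter.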
ISBN: 9798350372250
Files in this record:

conq_camera_ready_OA.pdf

Open access

Description: author's version
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 476.51 kB
Format: Adobe PDF
ConQ_Binary_Quantization_of_Neural_Networks_via_Concave_Regularization.pdf

Restricted access

Description: published version
Type: 2a Post-print editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 545.11 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2995347