
Distributed iterative thresholding for ℓ0/ℓ1-regularized linear inverse problems / Ravazzi, Chiara; Fosson, Sophie; Magli, Enrico. - In: IEEE Transactions on Information Theory, ISSN 0018-9448, vol. 61, no. 4 (2015), pp. 2081-2100. DOI: 10.1109/TIT.2015.2403263

Distributed iterative thresholding for ℓ0/ℓ1-regularized linear inverse problems

Ravazzi, Chiara; Fosson, Sophie; Magli, Enrico
2015

Abstract

The ℓ0/ℓ1-regularized least-squares approach is used to deal with linear inverse problems under sparsity constraints, which arise in many mathematical and engineering fields. In particular, multiagent models have recently emerged in this context to describe diverse kinds of networked systems, ranging from medical databases to wireless sensor networks. In this paper, we study methods for solving ℓ0/ℓ1-regularized least-squares problems in such multiagent systems. We propose a novel class of distributed protocols based on iterative thresholding and input-driven consensus techniques, which are well suited to in-network processing when communication to a central processing unit is not allowed. Estimation is performed by the agents themselves, which typically consist of devices with limited computational capabilities. This motivates us to develop low-complexity and low-memory algorithms that are feasible in real applications. Our main result is a rigorous proof of the convergence of these methods in regular networks. We introduce a suitable distributed, regularized, least-squares functional, and we prove, using results from dynamical systems theory, that our algorithms converge to its minima. Furthermore, we provide numerical comparisons with the alternating direction method of multipliers and distributed subgradient methods in terms of performance, complexity, and memory usage. We conclude that our techniques are preferable owing to their favorable memory-accuracy tradeoff.
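
To illustrate the kind of protocol the abstract describes, the following is a minimal sketch, not the authors' exact algorithm, of how iterative soft-thresholding can be combined with consensus averaging to solve an ℓ1-regularized least-squares problem over a network. The ring topology, the mixing matrix P, the step size tau, and the update rule x_v ← soft_threshold(Σ_w P[v,w] x_w + τ A_vᵀ(y_v − A_v x_v), λτ) are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch: distributed iterative soft-thresholding with consensus averaging
# for l1-regularized least squares. All names, parameters, and the update rule
# below are illustrative assumptions, not the protocol defined in the paper.

import numpy as np

def soft_threshold(z, thr):
    """Componentwise soft-thresholding operator (proximal map of the l1 norm)."""
    return np.sign(z) * np.maximum(np.abs(z) - thr, 0.0)

def distributed_ista(A_list, y_list, P, lam=0.1, tau=None, iters=500):
    """Run a consensus + local thresholding iteration at every agent.

    A_list[v], y_list[v] : local sensing matrix and measurements of agent v
    P                    : doubly stochastic mixing matrix (nonzero only on edges)
    lam, tau             : regularization weight and step size (assumed values)
    """
    n = A_list[0].shape[1]
    V = len(A_list)
    if tau is None:
        # conservative step size based on the largest local operator norm
        tau = 0.9 / max(np.linalg.norm(A, 2) ** 2 for A in A_list)
    X = np.zeros((V, n))              # row v = current estimate of agent v
    for _ in range(iters):
        mixed = P @ X                 # consensus step: mix neighbors' estimates
        grads = np.stack([A_list[v].T @ (y_list[v] - A_list[v] @ X[v])
                          for v in range(V)])
        X = soft_threshold(mixed + tau * grads, lam * tau)  # local thresholding
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, k, V, m = 50, 5, 6, 15          # signal length, sparsity, agents, local measurements
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A_list = [rng.standard_normal((m, n)) / np.sqrt(m) for _ in range(V)]
    y_list = [A @ x_true + 0.01 * rng.standard_normal(m) for A in A_list]
    # ring topology with uniform weights (doubly stochastic mixing matrix)
    P = np.zeros((V, V))
    for v in range(V):
        P[v, v] = P[v, (v - 1) % V] = P[v, (v + 1) % V] = 1 / 3
    X = distributed_ista(A_list, y_list, P, lam=0.05)
    print("max disagreement between agents:", np.abs(X - X.mean(0)).max())
    print("relative error of agent 0:",
          np.linalg.norm(X[0] - x_true) / np.linalg.norm(x_true))
```

In this sketch each agent stores only its own estimate and exchanges it with its neighbors, which reflects the low-memory, in-network setting targeted by the paper.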
Files in this record:

preprint.pdf
  Access: open access
  Type: 2. Post-print / Author's Accepted Manuscript
  License: Public - All rights reserved
  Size: 1.12 MB
  Format: Adobe PDF

07041200.pdf
  Access: not available
  Type: 2a. Post-print, publisher's version / Version of Record
  License: Non-public - Private/restricted access
  Size: 2.5 MB
  Format: Adobe PDF

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2625807