The importance of dilution in the inference of biological networks / Lage-Castellanos, A.; Pagnani, A.; Weigt, M. - (2009), pp. 531-538. (Paper presented at the 2009 47th Annual Allerton Conference on Communication, Control, and Computing, Allerton 2009, held in Monticello, IL, USA, 2009) [10.1109/ALLERTON.2009.5394907].

The importance of dilution in the inference of biological networks

Pagnani A.;
2009

Abstract

One of the crucial tasks in many inference problems is the extraction of an underlying sparse graphical model from a given number of high-dimensional measurements. In machine learning, this is frequently achieved using, as a penalty term, the Lp norm of the model parameters, with p ≤ 1 for efficient dilution. Here we propose a statistical-mechanics analysis of the problem in the setting of perceptron memorization and generalization. Using a replica approach, we are able to evaluate the relative performance of naive dilution (obtained by learning without dilution, followed by applying a threshold to the model parameters), L1 dilution (which is frequently used in convex optimization) and L0 dilution (which is optimal but computationally hard to implement). Whereas both Lp-diluted approaches clearly outperform the naive approach, we find a small region where L0 works almost perfectly and strongly outperforms the simpler-to-implement L1 dilution. In the second part we propose an efficient message-passing strategy in the simpler case of discrete classification vectors, where the L0 norm coincides with the L1 norm. Some examples are discussed. ©2009 IEEE.
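
As a rough illustration of the setup described in the abstract, the Lp-diluted learning problem can be written as the minimization of a penalized cost function. The notation below (couplings J_i, patterns ξ^μ, labels σ^μ, penalty strength λ) is assumed for illustration and may differ from the paper:

E(\mathbf{J}) \;=\; \sum_{\mu=1}^{M} \Theta\!\left(-\,\sigma^{\mu}\,\frac{\mathbf{J}\cdot\boldsymbol{\xi}^{\mu}}{\sqrt{N}}\right) \;+\; \lambda \sum_{i=1}^{N} |J_i|^{p},

with the convention |J_i|^0 = 1 for J_i ≠ 0 and 0 otherwise, so that p = 0 simply counts the nonzero couplings while p = 1 gives the usual L1 penalty.

The following minimal Python sketch (not the paper's algorithm or data) contrasts naive dilution (an essentially unregularized fit followed by thresholding small couplings) with L1 dilution on a synthetic sparse teacher perceptron; scikit-learn's LogisticRegression merely stands in for the learning step, and the threshold 0.1·max|w| is an arbitrary illustrative choice:

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, M = 100, 400
teacher = rng.standard_normal(N) * (rng.random(N) < 0.2)   # sparse teacher couplings
X = rng.standard_normal((M, N))                            # random input patterns
y = np.sign(X @ teacher)                                    # labels produced by the teacher

# Naive dilution: essentially unregularized learning, then thresholding small couplings
w = LogisticRegression(C=1e6, max_iter=5000).fit(X, y).coef_.ravel()
w_naive = np.where(np.abs(w) > 0.1 * np.abs(w).max(), w, 0.0)

# L1 dilution: the penalty itself drives couplings exactly to zero
w_l1 = LogisticRegression(penalty="l1", C=0.1, solver="liblinear",
                          max_iter=5000).fit(X, y).coef_.ravel()

for name, v in (("naive", w_naive), ("L1", w_l1)):
    err = np.mean((v != 0) != (teacher != 0))   # fraction of misidentified couplings
    print(name, "support error:", round(err, 2))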
2009
978-1-4244-5870-7

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2936032