Neural networks can extract information from the timing of spikes. Here we provide new results on the behavior of the simplest neuronal model able to decode information embedded in temporal spike patterns, the so-called tempotron. Using statistical physics techniques, we compute the capacity for the case of sparse, time-discretized inputs and 'material' discrete synapses, showing that the device saturates the information-theoretic bounds with output spike statistics consistent with those of the inputs. We also derive two simple and highly efficient learning algorithms that learn a number of associations close to the theoretical limit. The simplest versions of these algorithms correspond to distributed online protocols of interest for neuromorphic devices, and can be adapted to address the more biologically relevant continuous-time version of the classification problem, potentially shedding light on some aspects of synaptic plasticity.
|Title:||Theory and learning protocols for the material tempotron model|
|Publication date:||2013|
|Digital Object Identifier (DOI):||10.1088/1742-5468/2013/12/P12013|
|Appears in type:||1.1 Journal article|
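The tempotron described in the abstract integrates afferent spikes through a postsynaptic potential kernel and fires if its membrane potential ever crosses a threshold; error-driven learning nudges the weights by their contribution at the potential's peak. The following is a minimal illustrative sketch of that mechanism, not the paper's actual algorithms: it uses continuous weights and an exponential PSP kernel, whereas the paper studies discrete ('material') synapses, and all function names, parameters, and the learning-rate value are hypothetical.

```python
import numpy as np

def tempotron_response(spikes, weights, tau=4.0):
    """Discrete-time membrane potential of a tempotron.

    spikes: (T, N) binary array; spikes[t, i] = 1 if afferent i fires in bin t.
    Returns the potential trace V and the time bin where it peaks.
    """
    T, _ = spikes.shape
    kernel = np.exp(-np.arange(T) / tau)   # exponential PSP kernel (illustrative choice)
    drive = spikes @ weights               # total synaptic drive in each time bin
    # V[t] = sum over past bins s <= t of drive[s] * exp(-(t - s) / tau)
    V = np.array([drive[:t + 1] @ kernel[:t + 1][::-1] for t in range(T)])
    return V, int(V.argmax())

def tempotron_learn(spikes, weights, label, threshold=1.0, tau=4.0, lr=0.05):
    """One step of a perceptron-like tempotron rule (hypothetical sketch):
    on a classification error, adjust each weight in proportion to its
    PSP contribution at the time of the potential's peak."""
    V, t_max = tempotron_response(spikes, weights, tau)
    fired = V[t_max] >= threshold
    if fired != bool(label):
        sign = 1.0 if label else -1.0
        kernel = np.exp(-np.arange(t_max + 1) / tau)[::-1]
        weights = weights + lr * sign * (spikes[:t_max + 1].T @ kernel)
    return weights
```

The binary decision (fire / not fire) is `V.max() >= threshold`; repeating `tempotron_learn` over a set of labeled spike patterns drives the weights toward a separating solution when one exists.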