Surrogate Gradient Learning in Spiking Neural Networks
Created on 2024-10-06T23:18:35-05:00
Emre O. Neftci, Hesham Mostafa, Friedemann Zenke.
Integrate-and-Fire neurons work by accumulating input from the neurons upstream of them into a membrane voltage (integrate), and "firing" a spike when that voltage crosses a threshold, after which the voltage jumps back down to a reset value. "Leaky" variants let the membrane voltage decay back toward rest between inputs rather than holding it indefinitely. After firing, the neuron goes through a refractory period during which further input has little or no effect.
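A minimal sketch of these dynamics in discrete time, as I understand them; the parameter names (`beta` for the leak, `v_th`, `v_reset`) are mine, not the paper's:

```python
def lif_step(v, input_current, beta=0.9, v_th=1.0, v_reset=0.0):
    """One time step of a leaky integrate-and-fire neuron: leak, integrate, fire."""
    v = beta * v + input_current        # leak toward rest, then integrate the input
    spike = 1.0 if v >= v_th else 0.0   # emit a spike once the threshold is crossed
    if spike:
        v = v_reset                     # membrane voltage resets after a spike
    return v, spike
```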
(Not in paper: apparently, retinal neurons have a refractory/signal period around 350ms.)
The paper reformulates spiking neural networks as recurrent neural networks: the membrane state before and after each potential firing event becomes one discrete time step, so the network can be unrolled over time and trained much like an RNN.
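Reusing `lif_step` from the sketch above, the unrolling looks like an ordinary RNN loop, with the membrane voltage playing the role of the hidden state (my illustration, not the paper's code):

```python
def simulate(inputs, beta=0.9, v_th=1.0, v_reset=0.0):
    """Run a single LIF neuron over a list of input currents, one per time step."""
    v, spikes = 0.0, []
    for i_t in inputs:                      # each entry is one "RNN" time step
        v, s = lif_step(v, i_t, beta, v_th, v_reset)
        spikes.append(s)
    return spikes

print(simulate([0.3, 0.4, 0.5, 0.0, 0.6]))  # -> [0.0, 0.0, 1.0, 0.0, 0.0]
```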
Once cast as an RNN, the obstacle is that the spike nonlinearity (a hard step at the threshold) has zero derivative almost everywhere, so gradients cannot flow through it. The surrogate-gradient idea is to keep the hard threshold in the forward pass but, in the backward pass only, substitute a smooth function that approximates its response curve and is built from differentiable pieces.
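A hedged sketch of that trick using a PyTorch custom autograd Function: the forward pass is the hard Heaviside spike, while the backward pass uses the derivative of a fast sigmoid as the surrogate (this mirrors Zenke's SpyTorch tutorial; the `scale` value is my choice):

```python
import torch

class SurrGradSpike(torch.autograd.Function):
    scale = 100.0  # steepness of the surrogate; a free hyperparameter

    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        return (v_minus_thresh > 0).float()          # hard Heaviside spike

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_thresh,) = ctx.saved_tensors
        # derivative of a fast sigmoid stands in for the true (delta-like) derivative
        surrogate = 1.0 / (SurrGradSpike.scale * v_minus_thresh.abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrGradSpike.apply
```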
These surrogate derivatives are what let error signals reach the hidden neurons upstream of a spike: the gradient flows back through the (surrogate) spike nonlinearity and the connection weights, so hidden units that contributed to a spike receive credit for it. The paper also considers relaxing the exact feedback pathway, e.g. using fixed random feedback weights in place of the true transposed weights.
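A small sketch of why this matters for hidden units, reusing `spike_fn` from above: with the surrogate in place, autograd pushes error back through the hidden layer's spikes to its input weights, which a true Heaviside would block with zero gradients. Shapes and sizes here are arbitrary, and the temporal dynamics are omitted for brevity:

```python
x = torch.randn(16, 10)                       # batch of input currents
w1 = torch.randn(10, 32, requires_grad=True)  # input -> hidden weights
w2 = torch.randn(32, 2, requires_grad=True)   # hidden -> readout weights

hidden_spikes = spike_fn(x @ w1 - 1.0)        # hidden layer spikes, threshold at 1.0
out = hidden_spikes @ w2
loss = out.pow(2).mean()
loss.backward()
print(w1.grad.abs().sum())                    # nonzero thanks to the surrogate
```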