Stable Lifelong Learning: Spiking Neurons as a Solution to Instability in Plastic Neural Networks
Created on 2024-10-07T03:10:01-05:00
tl;dr spider robot walk gooder with spike brains.
Observes that spiking neural nets appear to exhibit more lifelong stability than traditional multi-layer perceptrons.
Proposes using evolutionary methods, rather than backpropagation, to train the spiking networks.
Plastic neural networks built from traditional neurons fall apart once they run past the duration they were trained on, but using spike trains appears to avert the issue entirely.
Plasticity
Hebbian plasticity: pre-synaptic and post-synaptic activity are multiplied to form a co-activation matrix, which is then element-wise multiplied by a learning-rate matrix. The "neurons that fire together wire together" rule.
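A minimal sketch of one Hebbian step, assuming rate-coded activity vectors and a per-synapse learning-rate matrix; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def hebbian_update(w, eta, pre, post):
    """One Hebbian step: dW = eta (element-wise) * outer(post, pre)."""
    coactivation = np.outer(post, pre)  # co-activation matrix, shape (post, pre)
    return w + eta * coactivation       # eta: matrix of per-synapse learning rates
```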
Oja rule: adapts Hebbian plasticity to include a "forgetting" term that is proportional to the weight and the squared post-synaptic activity.
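Sketched the same way, Oja's rule just subtracts the decay term inside the update (again assuming rate-coded activity vectors):

```python
import numpy as np

def oja_update(w, eta, pre, post):
    """Oja step: Hebbian growth minus a forgetting term y^2 * w."""
    hebb = np.outer(post, pre)
    forget = (post ** 2)[:, None] * w   # proportional to weight and squared post activity
    return w + eta * (hebb - forget)
```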
ABCD plasticity: a system where the correlation, pre-synaptic, post-synaptic, and bias coefficients, along with the learning rates, are treated as parameters that are learned together with the rest of the network. The result is "learned plasticity."
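A sketch under the common ABCD parameterization dW = eta * (A·x·y + B·x + C·y + D); the note only lists the terms, so this exact form is an assumption:

```python
import numpy as np

def abcd_update(w, eta, A, B, C, D, pre, post):
    """Learned-plasticity step (assumed form): A, B, C, D, and eta are all
    parameters optimized along with the rest of the network."""
    dw = (A * np.outer(post, pre)   # correlation term
          + B * pre[None, :]        # pre-synaptic term
          + C * post[:, None]       # post-synaptic term
          + D)                      # bias term
    return w + eta * dw
```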
STDP (Spike-Timing-Dependent Plasticity): plasticity matrices are updated by amounts that decay exponentially with the time difference between pre- and post-synaptic spikes, strengthening a synapse when the pre-synaptic spike arrives first and weakening it when the post-synaptic spike arrives first.
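A pair-based STDP sketch for a single synapse, given lists of spike times; the amplitudes and time constants here are illustrative defaults, not values from the paper:

```python
import numpy as np

def stdp_update(w, pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: each pre/post spike pair nudges the weight by an
    amount that decays exponentially with the spike-time difference."""
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:    # pre before post: potentiate
                w += a_plus * np.exp(-dt / tau_plus)
            elif dt < 0:  # post before pre: depress
                w -= a_minus * np.exp(dt / tau_minus)
    return w
```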
Spike generators
Integrate-and-Fire: membrane time constant × (change in membrane potential over time) = resistance × input current, i.e. τ_m · dV/dt = R · I(t). Emits a one when the membrane potential crosses the threshold (after which it resets) and a zero otherwise.
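A forward-Euler sketch of those dynamics; the threshold, reset, and time-constant values are placeholder assumptions:

```python
def integrate_and_fire(inputs, dt=1.0, tau_m=10.0, R=1.0, v_th=1.0, v_reset=0.0):
    """Pure integrator: tau_m * dV/dt = R * I(t).
    Emits 1 when V crosses v_th (then resets), 0 otherwise."""
    v, spikes = v_reset, []
    for i in inputs:                 # inputs: sequence of input currents
        v += (dt / tau_m) * (R * i)  # integrate, no leak
        if v >= v_th:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes
```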
Leaky integrate-and-fire: τ_m · dV/dt = R · I(t) − (V − V_rest). The leak term −(V − V_rest) introduces a decay curve to the membrane potential, such that when it fires the neuron has to recharge from the resting voltage before it can spike again.
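The same sketch with the leak term added; the only change from the integrate-and-fire version is the extra term in the update, which pulls V back toward v_rest whenever input is weak:

```python
def leaky_integrate_and_fire(inputs, dt=1.0, tau_m=10.0, R=1.0,
                             v_rest=0.0, v_th=1.0):
    """Leaky integrator: tau_m * dV/dt = R * I(t) - (V - V_rest).
    After a reset, V has to recharge from v_rest before it can fire again."""
    v, spikes = v_rest, []
    for i in inputs:
        v += (dt / tau_m) * (R * i - (v - v_rest))  # leak pulls V toward rest
        if v >= v_th:
            spikes.append(1)
            v = v_rest                              # reset to resting potential
        else:
            spikes.append(0)
    return spikes
```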