Meta-Learning through Hebbian Plasticity in Random Networks
Created on 2024-10-07T05:02:59-05:00
This paper is mostly about how some living systems develop their instinctive behaviors so quickly (e.g., how newborn animals learn to walk within hours or days). Rather than evolving a network's weights directly, a mechanism is evolved that turns a fixed-architecture neural network with random initial weights into one that performs a given function, regardless of various kinds of interference with the network itself.
- Create an initial population of neural networks with random initial weights.
- Random "synapse-specific" learning rules are also created, one per connection rather than one per neuron.
- A simulation episode is run; at each step, every connection's weight is updated using its local learning rule.
- Evolutionary tournaments select the better-performing learning rules (see the sketch below).
Thus learning rules are created for each synapse such that, given any random starting weights, the network will self-organize toward the intended task over time.
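Here is a minimal sketch of the whole loop in numpy, assuming the ABCD-style per-synapse rule (Δw = η(A·pre·post + B·pre + C·post + D)) this family of work uses; the toy target-tracking task, network size, population size, and names like `hebbian_update` and `episode_fitness` are placeholders of mine, not the paper's actual setup.

```python
import numpy as np

def hebbian_update(W, pre, post, rules, lr=0.1):
    """Apply one local ABCD Hebbian step to every synapse.

    W     : (n_pre, n_post) weight matrix
    pre   : (n_pre,)  presynaptic activations
    post  : (n_post,) postsynaptic activations
    rules : (n_pre, n_post, 4) evolved coefficients A, B, C, D per synapse
    """
    A, B, C, D = np.moveaxis(rules, -1, 0)
    # Correlation term plus pre-only, post-only, and constant terms.
    dW = A * np.outer(pre, post) + B * pre[:, None] + C * post[None, :] + D
    return W + lr * dW

def episode_fitness(rules, steps=100, rng=None):
    """Run one lifetime from fresh random weights; return task reward.

    Hypothetical toy task: drive the single output toward a target of 1.0.
    """
    if rng is None:
        rng = np.random.default_rng()
    n_in, n_out = 4, 1
    W = rng.normal(size=(n_in, n_out))           # random initial weights
    reward = 0.0
    for _ in range(steps):
        pre = rng.normal(size=n_in)              # stand-in observations
        post = np.tanh(pre @ W)                  # network output
        W = hebbian_update(W, pre, post, rules)  # weights change every step
        reward -= float(np.sum((post - 1.0) ** 2))
    return reward

# Tournament-style selection over rule sets: keep the top half, mutate it.
rng = np.random.default_rng(0)
pop = [rng.normal(scale=0.1, size=(4, 1, 4)) for _ in range(20)]
for gen in range(50):
    # Same seed per generation so all rule sets face the same conditions.
    scores = [episode_fitness(r, rng=np.random.default_rng(gen)) for r in pop]
    order = np.argsort(scores)[::-1]
    elite = [pop[i] for i in order[: len(pop) // 2]]
    pop = elite + [e + rng.normal(scale=0.02, size=e.shape) for e in elite]
```

The key point is that evolution scores the rules, never the weights: every fitness evaluation starts from fresh random weights, so only rule sets that reliably self-organize survive selection.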
Furthermore, previous research on the human visual cortex indicates that the representations of visual stimuli in the early regions of the ventral stream are compatible with the representations of convolutional layers trained for image recognition.
There were other papers (somewhere) suggesting that not every layer needs to learn: the visual cortex seems to have a visual grammar used to recognize shapes and colors, which is then transmitted via spike messaging, with layers outside the visual cortex determining what to actually do with these details.
Various brain-damage experiments were tried, such as disconnecting part of the tested robot or resetting random neurons to nonsense values. The learned Hebbian rules were eventually able to bring the damaged neurons back into a functioning state.
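To make the recovery claim concrete, here is a sketch of one such damage test under the same toy assumptions as above (reusing `hebbian_update` and the numpy import from the earlier sketch): partway through an episode some weights are overwritten with noise, and because the evolved rules keep firing on every step, the same updates that organized the random initial weights also pull the damaged ones back. The damage fraction and noise scale are guesses, not values from the paper.

```python
def damage(W, frac=0.25, rng=None):
    """Overwrite a random fraction of weights with noise (severity is a guess)."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(W.shape) < frac
    W = W.copy()
    W[mask] = rng.normal(size=int(mask.sum()))
    return W

# Inside an episode: damage at the halfway point, then keep learning as usual.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 1))
rules = rng.normal(scale=0.1, size=(4, 1, 4))   # stand-in evolved rules
for step in range(200):
    if step == 100:
        W = damage(W, rng=rng)                  # mid-episode perturbation
    pre = rng.normal(size=4)
    post = np.tanh(pre @ W)
    W = hebbian_update(W, pre, post, rules)     # recovery comes from these steps
```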