Avoiding Catastrophe: Active Dendrites Enable Multi-Task Learning in Dynamic Environments

Created on 2022-09-09T17:32:27-05:00


This card pertains to a resource available on the internet.

This card can also be read via Gemini.

Trying to teach the same artificial neural network multiple tasks can result in "catastrophic forgetting," in which training on new tasks interferes with and overwrites what was learned on earlier ones.

Quinn: although brute-force scale along with the right linear mapping can get around this, according to DeepMind's Gato paper.

Neuron inhibition is important for allowing the same network to learn context-dependent representations.

Our initial results show that active dendrites and sparse representations can mitigate catastrophic forgetting and interference in multi-task RL and continual learning settings. One crucial next step is to test this framework on more real-world scenarios with greater complexity than MT10 or permutedMNIST.
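The mechanism described above can be sketched in a few lines: each unit has dendritic segments that match against a context vector, the best-matching segment gates the unit's feedforward activation, and a k-winners-take-all step stands in for inhibition to keep the representation sparse. This is a minimal illustrative sketch, not the paper's implementation; all shapes and names here are assumptions.

```python
import numpy as np

def kwta(x, k):
    # k-winners-take-all: keep the k largest activations, zero the rest.
    # A simple stand-in for the inhibitory competition that yields sparsity.
    out = np.zeros_like(x)
    top = np.argsort(x)[-k:]
    out[top] = x[top]
    return out

def active_dendrite_layer(x, context, W, b, segments, k):
    # Hypothetical shapes (not taken from the paper's code):
    #   x: (d_in,) feedforward input
    #   context: (d_ctx,) task/context signal
    #   W: (units, d_in), b: (units,) feedforward weights
    #   segments: (units, n_segments, d_ctx) dendritic segment weights
    feedforward = W @ x + b
    # Each dendritic segment matches the context; the best match per unit wins.
    match = np.einsum('usd,d->us', segments, context).max(axis=1)
    # Sigmoid of the winning match modulates (gates) the unit's activation.
    gated = feedforward * (1.0 / (1.0 + np.exp(-match)))
    # Inhibition: only the k most active units remain nonzero.
    return kwta(gated, k)

rng = np.random.default_rng(0)
units, d_in, d_ctx, n_seg = 16, 8, 4, 3
out = active_dendrite_layer(
    rng.normal(size=d_in), rng.normal(size=d_ctx),
    rng.normal(size=(units, d_in)), np.zeros(units),
    rng.normal(size=(units, n_seg, d_ctx)), k=4,
)
print(np.count_nonzero(out))  # at most k units stay active
```

Because different contexts select different dendritic segments, different subsets of units stay active per task, which is what lets the sparse subnetworks avoid overwriting each other.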