Temporal-wise Attention Spiking Neural Networks for Event Streams Classification

Created on 2024-10-10T21:22:42-05:00


This card pertains to a resource available on the internet.


Dynamic Vision Sensors: send packets of position-time-brightness-change tuples instead of pushing full video frames.

Event frames are encoded as an X by Y by 2 grid, where the two channels record spikes for brightness increases and decreases at each location.
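A minimal sketch of that encoding, assuming events arrive as (x, y, polarity) tuples with polarity 1 for an increase and 0 for a decrease (real sensors also carry a timestamp, which this toy version ignores):

```python
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate DVS events into an H x W x 2 polarity frame.

    Each event increments the ON (p=1) or OFF (p=0) channel at its
    pixel, so the frame counts brightness changes per location.
    """
    frame = np.zeros((height, width, 2), dtype=np.int32)
    for x, y, p in events:
        frame[y, x, p] += 1
    return frame

# Toy packet: two ON events at (1, 2) and one OFF event at (0, 0).
frame = events_to_frame([(1, 2, 1), (1, 2, 1), (0, 0, 0)],
                        height=4, width=4)
print(frame[2, 1, 1])  # 2
print(frame[0, 0, 0])  # 1
```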

The paper does not attempt to use place cells for positioning; the networks are trained with backpropagation through time (BPTT).
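For context on what BPTT unrolls here: a leaky integrate-and-fire neuron steps through the frame sequence, and the non-differentiable spike is given a surrogate gradient in the backward pass. A forward-pass sketch (my own toy parameters, not the paper's):

```python
def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Simulate a leaky integrate-and-fire neuron over T time steps.

    BPTT unrolls exactly this loop; the Heaviside spike below gets a
    surrogate derivative when gradients flow backward.
    """
    v = 0.0
    spikes = []
    for x in inputs:
        v = v + (x - v) / tau          # leaky integration toward input
        s = 1.0 if v >= v_th else 0.0  # non-differentiable spike
        spikes.append(s)
        v = v * (1.0 - s)              # hard reset after a spike
    return spikes

def rect_surrogate(v, v_th=1.0, width=1.0):
    """Rectangular surrogate for dSpike/dV: nonzero near threshold."""
    return float(abs(v - v_th) < width / 2) / width

print(lif_forward([2.0, 2.0, 0.0]))  # [1.0, 1.0, 0.0]
```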

Training uses some random combination of nearby frames; I didn't quite understand this part.

The temporal-wise attention layer scores each event frame, deciding whether it is relevant enough to trigger an update; low-scoring frames can be discarded.
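A rough sketch of that scoring, assuming a squeeze-and-excitation-style module over the time axis: one statistic per frame, a tiny two-layer MLP, and a sigmoid score that gates each frame. The weight shapes and threshold here are my own illustration, not the paper's configuration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def temporal_attention(frames, w1, w2, threshold=0.5):
    """Score each time step and mask out frames judged irrelevant.

    frames: (T, H, W, 2) stack of event frames.
    w1, w2: weights of a small scoring MLP (hypothetical shapes).
    Returns the gated frames and the per-step scores in (0, 1).
    """
    # Squeeze: one statistic per time step (mean event activity).
    stats = frames.reshape(frames.shape[0], -1).mean(axis=1)  # (T,)
    # Excitation: two-layer MLP produces one score per time step.
    hidden = np.maximum(0.0, w1 @ stats)
    scores = sigmoid(w2 @ hidden)                             # (T,)
    keep = scores > threshold
    return frames * keep[:, None, None, None], scores

rng = np.random.default_rng(0)
T = 6
frames = rng.random((T, 8, 8, 2))
w1 = rng.standard_normal((4, T))
w2 = rng.standard_normal((T, 4))
masked, scores = temporal_attention(frames, w1, w2)
```

Frames with scores below the threshold are zeroed out, so later layers skip updates driven by them.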