SPARKS: A Biologically Inspired Neural Attention Model for the Analysis of Sequential Spiking Patterns

Nicolas Skatchkovsky1, Natalia Glazman2, Sadra Sadeh1,2,*, Florencia Iacaruso1,*
1The Francis Crick Institute, 2Imperial College London
Cosyne 2024

*Indicates Equal Contribution

Two-dimensional embeddings obtained during supervised prediction of monkey hand position in a centre-out reaching task (Chowdhury et al., 2020).

Abstract

Understanding how the brain represents sensory information and triggers behavioural responses is a fundamental goal in neuroscience. Despite advances in neuronal recording techniques, linking the resulting high-dimensional responses to relevant variables remains challenging. Inspired by recent progress in machine learning, we propose a novel self-attention mechanism that generates reliable latent representations by sequentially extracting information from the precise timing of single spikes through Hebbian learning. We train a variational autoencoder encompassing the proposed attention layer using an information-theoretic criterion inspired by predictive coding to enforce temporal coherence in the latent representations. The resulting model, SPARKS, produces interpretable embeddings from just tens of neurons and demonstrates robustness across animals and sessions. Through unsupervised and supervised learning, SPARKS generates meaningful low-dimensional representations of high-dimensional recordings and offers state-of-the-art prediction of behavioural variables on diverse electrophysiology and calcium imaging datasets. Notably, we capture oscillatory sequences from the medial entorhinal cortex (MEC) at unprecedented precision, compare latent representations of natural scenes across sessions and animals, and reveal the hierarchical organisation of the mouse visual cortex from simple datasets. By combining machine learning models with biologically inspired mechanisms, SPARKS provides a promising solution for revealing large-scale network dynamics. Its capacity to generalise across animals and behavioural states suggests the potential of SPARKS to estimate the animal's latent generative model of the world.

Overview

To obtain consistent latent embeddings that capture most of the variance in high-dimensional neuronal responses, we have developed a Sequential Predictive Autoencoder for the Representation of Spiking Signals (SPARKS), combining a variational autoencoder with the proposed Hebbian attention layer and predictive learning rule (see details below). The encoder comprises several attention blocks, each composed of an attention layer followed by a fully connected feedforward network with residual connections and batch normalisation. The first attention block implements our Hebbian self-attention mechanism, while the following blocks implement conventional dot-product attention. The decoder is a fully connected feedforward neural network, tasked with either reconstructing the input signal (unsupervised learning) or predicting a desired reference signal (supervised learning).
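For readers who find code easier to parse, the sketch below shows how such an encoder-decoder could be assembled in PyTorch. The class names (`AttentionBlock`, `SparksLikeModel`), layer sizes and hyperparameters are illustrative choices, not the released implementation; in SPARKS the first block would be the Hebbian attention layer described in the next section, whereas here every block uses standard dot-product attention to keep the example self-contained.

```python
import torch
import torch.nn as nn

class AttentionBlock(nn.Module):
    """One encoder block: self-attention + feedforward, with residuals and batch norm."""

    def __init__(self, dim: int, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm1 = nn.BatchNorm1d(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm2 = nn.BatchNorm1d(dim)

    def forward(self, x):                                   # x: (batch, time, dim)
        a, _ = self.attn(x, x, x)
        x = self.norm1((x + a).transpose(1, 2)).transpose(1, 2)        # residual + norm
        x = self.norm2((x + self.ff(x)).transpose(1, 2)).transpose(1, 2)
        return x

class SparksLikeModel(nn.Module):
    """Variational encoder-decoder in the spirit of SPARKS (illustrative only)."""

    def __init__(self, n_neurons: int, dim: int = 64, latent_dim: int = 2,
                 n_blocks: int = 3, target_dim=None):
        super().__init__()
        self.embed = nn.Linear(n_neurons, dim)
        # In SPARKS the first block is the Hebbian attention layer; here all
        # blocks use dot-product attention for simplicity.
        self.blocks = nn.ModuleList([AttentionBlock(dim) for _ in range(n_blocks)])
        self.to_mu = nn.Linear(dim, latent_dim)
        self.to_logvar = nn.Linear(dim, latent_dim)
        out_dim = target_dim if target_dim is not None else n_neurons
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, out_dim))

    def forward(self, spikes):                               # (batch, time, n_neurons)
        h = self.embed(spikes)
        for block in self.blocks:
            h = block(h)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)   # reparameterisation
        return self.decoder(z), mu, logvar

# Toy usage: unsupervised reconstruction of 50 neurons over 100 time bins.
model = SparksLikeModel(n_neurons=50)
spikes = (torch.rand(8, 100, 50) < 0.05).float()
recon, mu, logvar = model(spikes)
print(recon.shape, mu.shape)   # torch.Size([8, 100, 50]) torch.Size([8, 100, 2])
```

Passing `target_dim` switches the same architecture from reconstructing the input to predicting a behavioural reference signal, mirroring the unsupervised/supervised distinction above.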

The Hebbian Attention layer

At the heart of SPARKS is the Hebbian attention layer, a biologically motivated adaptation of the conventional attention mechanism used in Transformers. The layer sequentially extracts information from the precise timing of single spikes through Hebbian learning, allowing the model to focus on the most informative parts of the neural input data, mimicking the way biological systems prioritise certain signals.
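The sketch below gives one possible reading of such a mechanism: the attention coefficients are accumulated over time through a Hebbian-style outer-product update driven by the incoming spikes, so that spike timing, rather than a single dot-product comparison, shapes the attention weights. Everything beyond that high-level idea (the `HebbianAttention` class, the exponential decay, the projection sizes) is an assumption made for illustration, not the exact layer used in the paper.

```python
import torch
import torch.nn as nn

class HebbianAttention(nn.Module):
    """Illustrative Hebbian self-attention over a spike train (details assumed)."""

    def __init__(self, n_neurons: int, embed_dim: int, decay: float = 0.9):
        super().__init__()
        self.query = nn.Linear(n_neurons, embed_dim, bias=False)
        self.key = nn.Linear(n_neurons, embed_dim, bias=False)
        self.value = nn.Linear(n_neurons, embed_dim, bias=False)
        self.decay = decay   # exponential forgetting of past spike coincidences

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (batch, time, n_neurons) binary spike trains
        batch, T, _ = spikes.shape
        attn = None
        outputs = []
        for t in range(T):
            x_t = spikes[:, t]                       # (batch, n_neurons)
            q_t, k_t = self.query(x_t), self.key(x_t)
            # Hebbian update: coincident pre/post activity at this time step
            # strengthens the corresponding attention coefficients.
            hebb = torch.einsum('bi,bj->bij', q_t, k_t)
            attn = hebb if attn is None else self.decay * attn + hebb
            scores = torch.softmax(attn, dim=-1)     # (batch, embed_dim, embed_dim)
            v_t = self.value(x_t)
            outputs.append(torch.einsum('bij,bj->bi', scores, v_t))
        return torch.stack(outputs, dim=1)           # (batch, time, embed_dim)

# Toy usage on random spikes from 50 neurons.
layer = HebbianAttention(n_neurons=50, embed_dim=64)
spikes = (torch.rand(8, 100, 50) < 0.05).float()
print(layer(spikes).shape)   # torch.Size([8, 100, 64])
```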

Learning and Optimization

SPARKS optimises the encoder and decoder networks with a variational objective. By adopting a predictive causally conditioned distribution, inspired by predictive coding theories in neuroscience, the model learns to generate temporally coherent latent representations and accurate predictions from neural data. This framework also supports training across different sessions and even different animals, allowing robust learning despite variability in data collection conditions.
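As a concrete, if simplified, illustration of such an objective, the function below combines a reconstruction (or prediction) term with a KL term that pulls the latent at each time step towards a prior centred on the previous latent mean, which is one simple way to express a predictive, temporally coherent criterion in code. The function name, the unit-variance Gaussian prior and the `beta` weighting are assumptions for illustration; the information-theoretic criterion used in the paper may differ.

```python
import torch
import torch.nn.functional as F

def sparks_style_loss(pred, target, mu, logvar, beta: float = 1e-3):
    """Illustrative variational objective with a temporally coherent prior.

    pred, target: (batch, time, out_dim) decoder output and reconstruction or
                  behavioural target
    mu, logvar:   (batch, time, latent_dim) posterior parameters per time step
    """
    recon = F.mse_loss(pred, target)

    # Prior at time t is a unit-variance Gaussian centred on the (detached)
    # posterior mean from time t-1; the first step uses a standard prior.
    prior_mu = torch.cat([torch.zeros_like(mu[:, :1]), mu[:, :-1].detach()], dim=1)
    kl = 0.5 * (torch.exp(logvar) + (mu - prior_mu) ** 2 - 1.0 - logvar).sum(-1).mean()
    return recon + beta * kl

# Toy check with random tensors of the expected shapes.
pred, target = torch.randn(8, 100, 50), torch.randn(8, 100, 50)
mu, logvar = torch.randn(8, 100, 2), torch.zeros(8, 100, 2)
print(sparks_style_loss(pred, target, mu, logvar))
```

Paired with the encoder-decoder sketched in the Overview, the same loss covers both the unsupervised case (the target is the input spike train) and the supervised case (the target is a behavioural variable).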

Applications and Insights

Two-dimensional embeddings obtained from unsupervised learning on calcium recordings in the medial entorhinal cortex (MEC) of passive mice (Gonzalo Cogno et al., 2024). Without any prior pre-processing, SPARKS reveals a ring topology in the recording and recovers the phase of the underlying oscillation in the signal.
Reconstruction of a natural movie shown to passive mice using 100 neurons from the primary visual cortex recorded with a Neuropixels probe (de Vries et al., 2020).
SPARKS has been successfully applied to decode neural signals and predict sensory inputs, such as visual stimuli, with high accuracy. It demonstrates the potential to uncover the temporal dynamics of neural signals across different brain regions, offering valuable insights into brain function. Additionally, the model's ability to handle unsupervised and supervised learning tasks makes it versatile for various neuroscience research applications.

BibTeX

@article{skatchkovsky24sparks,
  author = {Skatchkovsky, Nicolas and Glazman, Natalia and Sadeh, Sadra and Iacaruso, Florencia},
  title = {A Biologically Inspired Attention Model for Neural Signal Analysis},
  elocation-id = {2024.08.13.607787},
  year = {2024},
  doi = {10.1101/2024.08.13.607787},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2024/08/16/2024.08.13.607787},
  eprint = {https://www.biorxiv.org/content/early/2024/08/16/2024.08.13.607787.full.pdf},
  journal = {bioRxiv}
}