SPARKS: A Biologically Inspired Neural Attention Model for the Analysis of Sequential Spiking Patterns

Nicolas Skatchkovsky1, Natalia Glazman2, Alexander Egea-Weiss3, Sadra Sadeh1, 2, *, Florencia Iacaruso1, *
1The Francis Crick Institute, 2Imperial College London, 3University of Basel
Cosyne 2024

*Indicates Equal Contribution

Two-dimensional embeddings obtained during supervised prediction of the hand position of monkeys performing a centre-out reaching task (Chowdhury et al., 2020).

Abstract

Understanding how the brain represents sensory information and triggers behavioural responses is a fundamental goal in neuroscience. Recent advances in neuronal recording techniques bring this milestone closer, yet the resulting high-dimensional responses are challenging to interpret and link to relevant variables. Although existing machine learning models attempt to do so, they often sacrifice interpretability for predictive power, effectively operating as black boxes. In this work, we introduce SPARKS, a biologically inspired model capable of high decoding accuracy and interpretable discovery within a single framework. SPARKS adapts the self-attention mechanism of large language models to extract information from the timing of single spikes and the sequence in which neurons fire using Hebbian learning. Trained with a criterion inspired by predictive coding to enforce temporal coherence, our model produces low-dimensional latent embeddings that are robust across sessions and animals. By directly capturing the underlying data distribution through a generative encoding-decoding framework, SPARKS exhibits state-of-the-art predictive capabilities across diverse electrophysiology and calcium imaging datasets from the motor, visual and entorhinal cortices. Crucially, the Hebbian coefficients learned by the model are interpretable, allowing us to infer effective connectivity and recover the known functional hierarchy of the mouse visual cortex. Overall, SPARKS unifies representation learning, high-performance decoding and model interpretability in a single framework, bridging neuroscience and AI to provide a powerful and versatile tool for dissecting neural computations and marking a step towards the next generation of biologically inspired intelligent systems.

Overview

SPARKS (Sequential Predictive Autoencoder for the Representation of spiKing Signals) is a powerful, biologically inspired AI model designed to analyze complex neural data. Traditional machine learning models often operate as "black boxes", sacrificing interpretability for predictive power. SPARKS overcomes this by unifying high-performance decoding, representation learning and model interpretability in a single framework. It uses a novel "Hebbian attention" mechanism to extract meaningful information from the precise timing of individual neuron spikes. This allows SPARKS to generate robust, low-dimensional representations of neural data that are not only highly predictive but also provide interpretable insights into the brain's functional organization.

The Hebbian Attention layer

At the heart of SPARKS is the Hebbian attention layer, a biologically motivated adaptation of the conventional attention mechanism used in Transformers. This layer allows the model to focus on significant parts of the neural input data, mimicking the way biological systems prioritize certain signals.
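To make the mechanism concrete, below is a minimal sketch of one plausible Hebbian attention update: attention coefficients accumulate as exponentially decayed outer products of query and key projections of the incoming spike vectors, so that neurons firing together in sequence come to attend to each other. The class name, single-head layout, decay parameterization and projection sizes are illustrative assumptions of this sketch, not the exact implementation from the paper.

import torch

class HebbianAttention(torch.nn.Module):
    """Sketch of a Hebbian attention layer over binary spike trains."""

    def __init__(self, n_neurons: int, d_model: int, decay: float = 0.9):
        super().__init__()
        # Queries, keys and values stay in neuron space so the accumulated
        # coefficients can be read as neuron-to-neuron weights.
        self.w_q = torch.nn.Linear(n_neurons, n_neurons, bias=False)
        self.w_k = torch.nn.Linear(n_neurons, n_neurons, bias=False)
        self.w_v = torch.nn.Linear(n_neurons, n_neurons, bias=False)
        self.proj = torch.nn.Linear(n_neurons, d_model)  # map to model width
        self.decay = decay  # exponential forgetting of past coactivity

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (T, n_neurons) float tensor of binary spike trains
        T, n = spikes.shape
        hebb = spikes.new_zeros(n, n)  # running Hebbian coefficients
        outputs = []
        for t in range(T):
            q, k = self.w_q(spikes[t]), self.w_k(spikes[t])
            # Hebbian rule: decayed running sum of coincident activity
            hebb = self.decay * hebb + torch.outer(q, k)
            attn = torch.softmax(hebb, dim=-1)
            outputs.append(self.proj(attn @ self.w_v(spikes[t])))
        return torch.stack(outputs)  # (T, d_model)

# Example: 500 time bins of fake spike trains from 100 neurons.
layer = HebbianAttention(n_neurons=100, d_model=64)
embeddings = layer(torch.bernoulli(torch.full((500, 100), 0.05)))

Because the hebb matrix lives in neuron-by-neuron space and depends on the order in which spikes arrive, inspecting it after training gives the kind of effective-connectivity readout described in the abstract.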

Learning and Optimization

SPARKS uses a variational approach to optimize the encoder and decoder networks. By adopting a predictive, causally conditioned distribution, inspired by predictive coding theories in neuroscience, the model learns to predict upcoming neural activity from past observations alone; a schematic form of the objective is sketched below. This framework also supports training across different sessions and even different animals, allowing for robust learning despite variability in data collection conditions.
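As a rough guide to how the pieces fit together, the objective can be read as a causally conditioned evidence lower bound of the following schematic form. The notation is ours rather than the paper's: x denotes the observed spiking signals, z the latent embeddings, q_phi the encoder, p_theta the decoder; the prediction horizon K and the prior p(z_t) are assumptions of this sketch.

\mathcal{L}(\theta,\phi) \;=\; \sum_{t=1}^{T} \Big( \mathbb{E}_{q_\phi(z_{\le t} \mid x_{\le t})}\big[ \log p_\theta(x_{t+1:t+K} \mid z_{\le t}) \big] \;-\; D_{\mathrm{KL}}\big( q_\phi(z_t \mid x_{\le t}) \,\|\, p(z_t) \big) \Big)

Under this reading, the first term forces the latents at time t to carry whatever is needed to predict the next K steps, which is what enforces temporal coherence; the KL term regularizes the embedding, keeping the latent space consistent enough to be shared across sessions and animals.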

Applications and Insights

Two-dimensional embeddings obtained from unsupervised learning of calcium recordings in the medial entorhinal cortex (MEC) of passive mice (Gonzalo Cogno et al., 2024). SPARKS uncovers a ring topology in the recordings and recovers the phase of the underlying oscillation in the signal.
Reconstruction of a natural movie shown to passive mice using 100 neurons from the primary visual cortex recorded with a Neuropixels probe (de Vries et al., 2020).
SPARKS has been successfully applied to decode neural signals and reconstruct sensory inputs, such as visual stimuli, with high accuracy. It can uncover the temporal dynamics of neural activity across brain regions, offering valuable insights into brain function. Additionally, its support for both unsupervised and supervised learning makes it versatile across a wide range of neuroscience applications.

BibTeX

@article{skatchkovsky24sparks,
  author = {Skatchkovsky, Nicolas and Glazman, Natalia and Sadeh, Sadra and Iacaruso, Florencia},
  title = {A Biologically Inspired Attention Model for Neural Signal Analysis},
  elocation-id = {2024.08.13.607787},
  year = {2024},
  doi = {10.1101/2024.08.13.607787},
  publisher = {Cold Spring Harbor Laboratory},
  URL = {https://www.biorxiv.org/content/early/2024/08/16/2024.08.13.607787},
  eprint = {https://www.biorxiv.org/content/early/2024/08/16/2024.08.13.607787.full.pdf},
  journal = {bioRxiv}
}