Understanding how neurons encode information is one of the most pressing challenges in neuroscience. A new study from a multidisciplinary team including MCB researchers and those from the John A. Paulson School of Engineering and Applied Sciences (SEAS) introduces a novel deep learning-based framework that promises to make sense of the complexity of neural recordings. Their method, called Deconvolutional Unrolled Neural Learning (DUNL), is described in a new paper published in Neuron.
Multiple, seemingly unrelated factors often modulate the activity of neurons. DUNL enables researchers to break down neural activity into interpretable components, helping to reveal the underlying structure of how neurons respond to different stimuli and events. Unlike traditional methods, which often require averaging responses across multiple trials or animals, DUNL allows for the analysis of single-trial and single-neuron activity. This makes it particularly valuable for understanding the nuanced and heterogeneous activity patterns that underlie cognition and behavior.
A New Approach to Neural Complexity
Traditional methods for analyzing neural data, such as principal component analysis and non-negative matrix factorization, identify broad trends across a population of neurons. However, these approaches struggle to disentangle the contributions of individual neurons, especially when they exhibit mixed selectivity—the ability to encode multiple aspects of behavior, environment, or internal state simultaneously.
“In many cases, a single neuron responds to multiple factors simultaneously, making it difficult to interpret what the neuron is actually encoding,” explains co-author Sara Pinto dos Santos Matias, PhD, a postdoctoral fellow in the MCB lab of Naoshige Uchida, PhD. “Separating these overlapping components is crucial for understanding the underlying computations.”
DUNL addresses this challenge by leveraging algorithm unrolling, a technique that transforms an optimization process into a structured neural network. By using a convolutional generative model, DUNL learns neural response components directly from data while maintaining clear links between model parameters and biological processes.
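To make the idea of algorithm unrolling concrete, here is a minimal sketch that unrolls a generic ISTA-style sparse-coding solver into a fixed number of layers. The dictionary D, the sparsity penalty lam, and the step size are illustrative placeholders, not DUNL's actual parameters or implementation; in a learned (unrolled) version, these quantities would be trained by backpropagating through the layers.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator for the L1 penalty (encourages sparse codes)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def unrolled_ista(y, D, lam=0.1, n_layers=20, step=None):
    """Unroll ISTA for sparse coding: each iteration becomes one 'layer'.

    Approximately solves  min_x 0.5 * ||y - D @ x||^2 + lam * ||x||_1.
    In an unrolled, learned version, D (and possibly lam and step) would be
    optimized by gradient descent through this fixed stack of layers.
    """
    if step is None:
        step = 1.0 / np.linalg.norm(D, 2) ** 2   # Lipschitz-based step size
    x = np.zeros(D.shape[1])
    for _ in range(n_layers):                    # each loop pass = one layer
        x = soft_threshold(x + step * D.T @ (y - D @ x), step * lam)
    return x
```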
“Our method models neural activity as a sum of convolutions between impulse-like responses to different events—what we call kernels—and sparse codes that indicate when and how strongly each pattern is activated,” Sara adds. “This gives us a structured and interpretable way to examine neural responses.”
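As a rough illustration of the kind of generative model Sara describes, the sketch below builds simulated single-trial activity as a sum of convolutions between impulse-like kernels and sparse codes. The kernel shapes, event times, amplitudes, and the Poisson spiking step are assumptions chosen for clarity, not the paper's actual data or code.

```python
import numpy as np

# Illustrative convolutional generative model for one neuron on one trial.
rng = np.random.default_rng(0)

T = 500           # number of time bins in the trial
kernel_len = 40   # length of each event-related kernel

# Two hypothetical event types, each with its own impulse-like kernel.
t = np.arange(kernel_len)
kernels = np.stack([
    np.exp(-t / 5.0),                 # fast transient response
    (t / 10.0) * np.exp(-t / 10.0),   # slower, delayed response
])

# Sparse codes: mostly zero, with a few nonzero entries marking when
# (and how strongly) each kernel is activated within the trial.
codes = np.zeros((2, T))
codes[0, [50, 200]] = [1.5, 0.8]      # event type 1 occurs twice
codes[1, 300] = 2.0                   # event type 2 occurs once

# Activity is modeled as the sum of convolutions of kernels with codes.
drive = sum(np.convolve(codes[k], kernels[k], mode="full")[:T]
            for k in range(2))

# For spiking data, the drive can be passed through a nonlinearity to give
# a firing rate, with spike counts drawn from a Poisson distribution.
rate = np.exp(-2.0 + drive)
spikes = rng.poisson(rate)
```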
She emphasizes that DUNL’s applicability across different types of data highlights its versatility. “It works with electrophysiology, two-photon imaging, and fluorometry, and it’s useful in both structured, trial-based experiments and more naturalistic settings.”
“This has been an exciting collaboration that combined expertise across three labs,” adds Uchida, referencing the contributions from SEAS Professor Demba Ba and Paul Masset from the MCB lab of Venkatesh Murthy. Masset was a joint postdoctoral fellow in the Uchida and Murthy labs and is now an Assistant Professor at McGill University. “DUNL leverages the powerful computational capabilities of deep learning, a machine learning framework. I am particularly grateful to Prof. Demba Ba and Dr. Paul Masset for their leadership, and to Dr. Bahareh Tolooshams, a former graduate researcher at SEAS, for co-leading the work with Sara.”
Future Directions and Broader Implications
As neural recording technologies continue to advance, methods like DUNL will be essential for extracting meaningful insights from large-scale datasets. The team is now exploring new applications of the method to further analyze neural computation in naturalistic environments.
“This approach is similar to the state-of-the-art machine learning methods currently used to investigate large language models and the kinds of inferences they make,” Sara notes, referencing the sparse autoencoders (such as those used by Anthropic) that extract interpretable, meaningful feature components from large language models and enhance their robustness and efficiency. “It’s an exciting direction that bridges neuroscience and artificial intelligence.”