Distilling multi-scale neural mechanisms from diverse unlabeled experimental data using a deep domain-adaptive inference framework

Abstract

Behavior and cognition emerge from the complex interplay of neural properties at different scales. However, inferring these multi-scale properties from diverse experimental data remains a classical challenge in computational and systems neuroscience. Advanced machine learning (ML) techniques, such as deep learning and Bayesian inference, have shown promise in addressing this issue. Nonetheless, the performance of ML models trained on synthetic data generated from computational models degrades dramatically on experimental data. To systematically tackle this challenge, we introduce the concept of "out-of-distribution (OOD)" to quantify the distributional shift between synthetic and experimental datasets, and propose a deep domain-adaptive inference framework that aligns the distribution of synthetic data with that of experimental data by minimizing OOD errors. Our framework achieves state-of-the-art performance on a wide range of real experimental data when inferring neural properties at different scales. We demonstrate its efficacy in two scenarios: inferring detailed biophysical properties at the neuron and microcircuit scales, and inferring monosynaptic connections in hippocampal CA1 networks from in vivo multi-electrode extracellular recordings in free-running mice. Our approach represents a pioneering systematic solution to the OOD problem in neuroscience research and can potentially facilitate bottom-up modeling of large-scale network dynamics underlying brain function and dysfunction.

Teaser

Our deep domain-adaptive inference framework addresses the out-of-distribution (OOD) problem in inferring multi-scale neural properties from experimental data, enabling state-of-the-art performance and broad implications for neuroscience research.
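To make the core idea concrete: the abstract describes aligning the distribution of simulator-generated (synthetic) data with that of experimental data by minimizing a measure of distributional shift. The sketch below is a minimal, hypothetical illustration of that idea using the maximum mean discrepancy (MMD), one standard measure of distribution mismatch; the paper's actual loss, architecture, and alignment procedure are not specified here, and the data, dimensions, and alignment step are invented for illustration.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    # pairwise squared Euclidean distances, shape (len(x), len(y))
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    """Biased estimate of the squared maximum mean discrepancy."""
    kxx = gaussian_kernel(x, x, sigma).mean()
    kyy = gaussian_kernel(y, y, sigma).mean()
    kxy = gaussian_kernel(x, y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, size=(500, 2))     # stand-in for simulator output
experimental = rng.normal(2.0, 0.5, size=(500, 2))  # stand-in for shifted real data

before = mmd2(synthetic, experimental)

# A crude alignment step (moment matching); a real framework would learn a
# richer transformation, e.g. with a neural network trained on this objective.
aligned = (synthetic - synthetic.mean(0)) / synthetic.std(0)
aligned = aligned * experimental.std(0) + experimental.mean(0)

after = mmd2(aligned, experimental)
print(before, after)  # the shift measure shrinks after alignment
```

Here moment matching stands in for whatever learned transformation the framework uses; the point is only that a scalar OOD measure such as MMD gives a differentiable target one can minimize to bring the synthetic training distribution closer to the experimental one.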

Publication
bioRxiv
Lei Ma
Principal Investigator