How can we better understand the physics of neural computation?
While synaptic, genetic, and dendritic mechanisms underlying many neurological phenomena—such as learning and memory—have been uncovered, a thorough, comprehensive theory of how these mechanisms interact to generate specific neural functions—such as vision and spatial navigation—has been elusive. A general theoretical framework may provide a foundation for rigorously describing and understanding these neural systems, from their cell-level neurobiology to their computational phenomena. While single-neuron models are well established, a large-scale perspective may be required to understand certain aspects of neural dynamics. I develop statistical and machine learning methods to understand neural phenomena, and I mathematically formalize these rules to discover nontrivial relationships between different neural information processing systems.
I am interested in using my background in machine learning and condensed matter physics to gain a deeper understanding of how macroscopic patterns in neural activity and organization can be explained by fundamental relationships between microscopic neuronal parameters such as synaptic dynamics, neuronal morphology, and genetic expression.
Dawna Bagherian, James Gornet, Jeremy Bernstein, Yu-Li Ni, Yisong Yue, Markus Meister. “Fine-Grained System Identification of Nonlinear Neural Circuits,” in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021. [SIGKDD] [ArXiv]
Yujia Huang, James Gornet, Sihui Dai, Zhiding Yu, Doris Y. Tsao, Anima Anandkumar. “Neural Networks with Recurrent Generative Feedback,” in Advances in Neural Information Processing Systems 33, 2020. [NeurIPS] [ArXiv] [Full-Text]
James Gornet, Kannan Umadevi Venkataraju, Arun Narasimhan, Nicholas Turner, H. Sebastian Seung, Pavel Osten, Uygar Sümbül. “Reconstructing neuronal anatomy from whole-brain images,” in 2019 IEEE 16th International Symposium on Biomedical Imaging, 2019. [ISBI] [ArXiv] [Full-Text]