
Bodian Seminar: Stefan Mihalas

June 3 @ 4:00 pm - 5:00 pm

Stefan Mihalas, Ph.D.
Investigator
Allen Institute for Brain Science

Computing with complex components: How heterogeneous, nonstationary and noisy neurons and synapses contribute to the brain’s computational power

While artificial neural networks have taken inspiration from biological ones, one salient difference exists at the level of components. Artificial networks are generally built with homogeneous, stationary and deterministic neurons and synapses. Biological networks are composed of a myriad of cell types, with neurons and synapses having heterogeneous transfer functions, which are non-stationary in time and highly stochastic. It seems difficult to imagine how such messy components can be used to compute.

In this talk I will show that each of these properties can be used to benefit computation:
1. Heterogeneity can allow a network to approximate a function more accurately with fewer units (a toy sketch of this effect follows the list).
2. The non-stationarity of neuronal or synaptic transfer functions can serve as a form of long short-term memory. Such networks compute in a manner very different from standard RNNs. I will show that on a wide variety of cognitive tasks, artificial neural networks with complex, nonstationary synapses significantly outperform parameter-matched RNN and GRU networks, reaching similar performance with 10 times fewer parameters. More importantly for neuroscience, networks with nonstationary components trained on tasks similar to those given to mice develop similar patterns of neuronal activity and make similar mistakes when surprised by unexpected stimuli (a sketch of one standard nonstationary synapse model also follows the list).
3. Another surprising feature of biological neural networks is their high variability. I will show that while each individual neuron in the brain is highly variable, the noise observed at the population level spans a low-dimensional manifold. Based on electrophysiological recordings in the mouse visual cortex, I will show that this manifold is aligned with the directions of smooth transforms of the environment, directions over which it is useful to build invariances. Variability can thus mark directions that aid generalization in one- or few-shot learning. I will show how the geometry of variability changes with familiarity, vigilance and selective attention in recordings from mice and humans performing behavioral tasks, and that all of these changes are consistent with variability serving generalization (a sketch of this alignment analysis closes the list).
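The abstract does not specify how heterogeneity is modeled, so the following is only a toy sketch of point 1: a fixed bank of random tanh features in which each unit receives its own activation gain, fit to a multi-frequency target by least squares. The target function, gain range, and unit count are all illustrative assumptions, not the speaker's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30                                  # hidden units, fixed for both cases
w = rng.normal(size=n)                  # shared random input weights
b = rng.uniform(-3, 3, size=n)          # shared random biases

def test_mse(gains, x_tr, y_tr, x_te, y_te):
    """Least-squares readout on fixed random tanh features;
    per-unit `gains` model heterogeneous transfer functions."""
    phi = lambda x: np.tanh(gains * (np.outer(x, w) + b))
    beta, *_ = np.linalg.lstsq(phi(x_tr), y_tr, rcond=None)
    return np.mean((phi(x_te) @ beta - y_te) ** 2)

f = lambda x: np.sin(3 * x) + 0.5 * np.cos(7 * x)   # toy target
x_tr, x_te = (rng.uniform(-np.pi, np.pi, 200) for _ in range(2))

print("homogeneous  :", test_mse(np.ones(n), x_tr, f(x_tr), x_te, f(x_te)))
print("heterogeneous:", test_mse(rng.uniform(0.5, 4.0, n), x_tr, f(x_tr), x_te, f(x_te)))
```

With the unit count held fixed, per-unit gains give the readout a richer dictionary of feature shapes to combine, which is the intuition behind the claim; the exact error gap depends on the target and the random draw.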
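The abstract likewise does not name the synapse model behind point 2; the Tsodyks-Markram short-term plasticity model is one standard example of a nonstationary synapse whose internal state carries a memory of recent presynaptic activity. A minimal sketch, with illustrative time constants:

```python
import numpy as np

def tm_synapse(spikes, U=0.2, tau_f=0.75, tau_d=0.20, dt=0.001):
    """Discrete-time Tsodyks-Markram short-term plasticity.
    Returns the per-spike efficacy u*x, whose history-dependence
    acts as a memory of recent presynaptic activity."""
    u, x = U, 1.0
    efficacy = np.zeros_like(spikes, dtype=float)
    for t, s in enumerate(spikes):
        u += dt * (U - u) / tau_f       # facilitation decays toward U
        x += dt * (1.0 - x) / tau_d     # resources recover toward 1
        if s:
            u += U * (1.0 - u)          # each spike boosts utilization
            efficacy[t] = u * x         # transmitted fraction
            x -= u * x                  # each spike depletes resources
    return efficacy

# A burst followed by a late lone spike: the efficacy of each spike
# depends on how much activity preceded it, i.e. the synapse "remembers".
spikes = np.zeros(2000, dtype=int)
spikes[[100, 150, 200, 250, 1800]] = 1
print(tm_synapse(spikes)[spikes == 1])
```

Because the state (u, x) evolves between and across spikes, the same presynaptic spike is transmitted with different strength depending on recent history, which is the memory mechanism the talk attributes to nonstationary synapses.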
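Finally, a sketch of the kind of alignment analysis point 3 describes: eigendecompose the trial-to-trial noise covariance of a neural population and compare its dominant axis with a "smooth transform" direction. The data here are synthetic and constructed to exhibit the claimed low-dimensional, aligned structure, not derived from the recordings discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 50, 400

# Mean responses to a stimulus and to a slightly shifted version of it;
# their difference defines the "smooth transform" axis (illustrative).
mu = rng.normal(size=n_neurons)
mu_shifted = mu + 0.3 * rng.normal(size=n_neurons)
transform_axis = mu_shifted - mu
transform_axis /= np.linalg.norm(transform_axis)

# Trial-to-trial noise built to be low-dimensional and partly aligned
# with the transform axis (mimicking the claim, not deriving it).
shared = rng.normal(size=(n_trials, 1)) * transform_axis
private = 0.2 * rng.normal(size=(n_trials, n_neurons))
responses = mu + shared + private

noise = responses - responses.mean(axis=0)
cov = noise.T @ noise / (n_trials - 1)
evals, evecs = np.linalg.eigh(cov)     # ascending eigenvalues

# Dimensionality: participation ratio of the noise spectrum.
pr = evals.sum() ** 2 / (evals ** 2).sum()
# Alignment of the dominant noise axis with the transform axis.
align = abs(evecs[:, -1] @ transform_axis)
print(f"participation ratio: {pr:.1f} of {n_neurons} dims")
print(f"|cos| (top noise axis, transform axis): {align:.2f}")
```

A participation ratio far below the neuron count indicates a low-dimensional noise manifold, and a cosine near 1 indicates that the dominant noise direction points along the transform over which an invariance would be built.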

Taken together, these results paint a picture in which the diverse, constantly changing and often stochastic characteristics of biological neurons and synapses, when properly combined, can help networks perform ethologically relevant computations. I will end by describing how such complex components can serve as the fundamental building blocks for foundation models in systems neuroscience.

Faculty Host: Ernst Niebur

Details

Date: June 3
Time: 4:00 pm - 5:00 pm
Website: https://krieger.jhu.edu/mbi/event/bodian-seminar-stefan-mihalas/