The 2018 Kenneth O. Johnson & Steven S. Hsiao Memorial Lecture:
JACK GALLANT, Ph.D.
Chancellor’s Professor of Psychology and Class of 1940 Chair, Department of Psychology; Affiliate, Electrical Engineering and Computer Science; Programs in Neuroscience, Bioengineering, Vision Science & Biophysics; University of California at Berkeley
A deep convolutional energy model of ventral stream areas V1, V2 and V4
The ventral stream areas V1, V2 and V4 are crucial for visual object recognition. Good computational models of V1 neurons already exist, but current models of V2 and V4 neurons are poor. To build better models, we recorded from neurons while awake animals viewed clips of large, full-color natural movies. Because neurons could be recorded for several days, we collected responses to hundreds of thousands (up to over 1 million) of distinct movie frames for hundreds of different V1, V2 and V4 neurons. We fit these data using a new deep convolutional energy model: a two-stage version of the model is used for V1 and V2, and a three-stage version is used for V4. Deep convolutional energy models fit to V1 and V2 neurons approach the noise ceiling of prediction performance. Prediction accuracy for V4 neurons is somewhat lower, but it matches that of the classical model fit to V1 neurons. Furthermore, the model predicts V4 responses to the various types of synthetic curvature stimuli used in previous studies of V4. Finally, these models can be used to visualize and help interpret the response properties of each neuron. The deep convolutional energy model thus provides a unified framework for modeling and understanding neurons in the early and intermediate ventral stream.
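The "energy" computation at the heart of each model stage refers to the classic complex-cell energy model: an image is filtered with a quadrature pair of Gabor filters (identical except for a 90° phase shift), and the two outputs are squared and summed, yielding a response that is selective for orientation and spatial frequency but invariant to the phase of the stimulus. The sketch below is an illustration of that single-stage computation only, not the speakers' actual multi-stage model or code; the function names and parameter values are hypothetical.

```python
import numpy as np

def gabor(size, freq, theta, phase, sigma):
    """A Gabor filter: a sinusoidal grating of spatial frequency `freq`
    (cycles/pixel) and orientation `theta`, windowed by an isotropic
    Gaussian envelope of width `sigma`. `size` must be odd."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * freq * xr + phase)

def energy_response(patch, freq=0.1, theta=0.0, sigma=4.0):
    """Energy-model response of one model 'neuron' to an image patch:
    dot the patch with a quadrature pair of Gabors (cosine- and
    sine-phase), then square and sum the two filter outputs."""
    size = patch.shape[0]
    g_even = gabor(size, freq, theta, 0.0, sigma)        # cosine phase
    g_odd = gabor(size, freq, theta, np.pi / 2.0, sigma)  # sine phase
    r_even = np.dot(patch.ravel(), g_even.ravel())
    r_odd = np.dot(patch.ravel(), g_odd.ravel())
    return r_even**2 + r_odd**2
```

Because the squared quadrature-pair outputs are summed, shifting the phase of a matched grating leaves the response nearly unchanged, while an orthogonal grating evokes almost no response; stacking two or three such filter-rectify stages (with the parameters learned from data) gives the multi-stage architecture described in the abstract.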