Object and Scene Vision: Population Coding, Algorithms, Deep Networks, Prosthetics
- Deciphering neural population codes for structure, material, physics, utility
- Tracing neural algorithms for transforming images into visual information
- Using biological principles to advance deep network computer vision
- Using coding principles to design prosthetic interfaces
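The idea of a neural population code can be made concrete with a toy example. The sketch below is illustrative only, with all numbers invented (64 neurons, cosine tuning, Poisson spiking); it is not the lab's model or data. It shows the classic "population vector" scheme: each neuron responds most strongly to its preferred stimulus value, and the stimulus is recovered by summing the neurons' preferred directions weighted by their firing rates.

```python
import numpy as np

# Toy population-vector decoder.  N model neurons, each with a cosine
# tuning curve centered on a "preferred direction".  All parameters are
# invented for illustration.
rng = np.random.default_rng(0)
n_neurons = 64
preferred = np.linspace(0, 2 * np.pi, n_neurons, endpoint=False)

def responses(stimulus_angle):
    """Firing rates for one stimulus: cosine tuning plus Poisson noise."""
    rates = 10 * (1 + np.cos(stimulus_angle - preferred))  # mean spikes/s
    return rng.poisson(rates)

def decode(rates):
    """Population vector: sum each neuron's preferred-direction unit
    vector weighted by its firing rate, then read off the angle."""
    x = np.sum(rates * np.cos(preferred))
    y = np.sum(rates * np.sin(preferred))
    return np.arctan2(y, x) % (2 * np.pi)

true_angle = np.deg2rad(135.0)
estimate = decode(responses(true_angle))
print(np.rad2deg(estimate))  # close to 135, despite noisy single neurons
```

The point of the exercise: no single model neuron specifies the stimulus, but the population as a whole does, and simple downstream computations can read it out. Real ventral-pathway codes for structure, material, and utility are far richer, which is precisely what makes deciphering them a research problem.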
Vision is your superpower. At a glance, you can tell where you are, what is around you, what just happened, and what is about to happen. You effortlessly perceive the precise 3D structure of things in your environment, at distances ranging from millimeters to miles. You know what things are called, how valuable they are, how old or new, fresh or rotten, strong or weak. You intuit their material, mechanical, and energetic properties, allowing you to anticipate and alter physical events. You effectively read the minds of other humans and animals based on tiny variations in facial configuration and body pose. A picture is worth many times a thousand words to you.
All this information seems to exist outside you, immediately and effortlessly available. Understanding what you see seems trivial—you only have to look at it! We are so good at vision that we tend not to recognize it as an ability or a process of any kind. But in fact it is one of the most difficult things the brain does. Computer vision, even with deep networks, has not even begun to approach the kind of visual understanding that comes so easily to us. Computers can beat us at math, chess, Go, and Jeopardy!, but they cannot understand the visual world the way we do.
Our visual appreciation of the world emerges from networks of billions of neurons in the ventral visual pathway of the brain. Our lab studies neural information processing in the intermediate and higher-level stages of this pathway. We want to understand how the ventral pathway transforms images into knowledge about the world. Images are just 2D arrays of numerical values corresponding to pigment intensities, pixel colors, or photoreceptor activations in the eye. You could not deduce what is in an image from the numerical values themselves, but that is effectively what the brain must do with neural signals coming from the eyes. If we could decipher how the brain does this on the algorithmic level, we could use the same principles to build computer vision systems with human-like capabilities. We could develop prosthetic interfaces for blind patients that hijack the mechanisms of the ventral pathway to induce vivid visual experiences. And we would understand the substrate for our rich, detailed, aesthetic experiences of the visual world.
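The claim that an image is "just numbers" is easy to demonstrate. In this made-up toy example (an 8×8 array standing in for photoreceptor activations, not data from the lab), even the trivial question "where is the object?" already requires computation over the array rather than mere inspection of its values:

```python
import numpy as np

# An image really is just a 2D array of numbers.  This toy 8x8
# "grayscale image" is a bright square on a dark background.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0  # the "object"

print(image)        # staring at raw numbers reveals little by itself
print(image.sum())  # 16.0 -- a summary statistic reveals even less

# Locating the object is already a computation over the array:
rows, cols = np.nonzero(image)
print(rows.min(), rows.max(), cols.min(), cols.max())  # 2 5 2 5
```

Scaling this up from "find the bright square" to structure, material, physics, and utility in natural images is the transformation the ventral pathway accomplishes, and the one we aim to decipher.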
Sasikumar, D., Emeric, E., Stuphorn, V., & Connor, C.E. (2018). First-pass processing of value cues in the ventral visual pathway. Current Biology 28: 538–548.
Connor, C.E., & Knierim, J.J. (2017). Integration of objects and space in perception and memory. Nature Neuroscience 20: 1493–1503.
Vaziri, S., & Connor, C.E. (2016). Representation of gravity-aligned scene structure in ventral pathway visual cortex. Current Biology 26: 766–774.
Connor, C.E., & Stuphorn, V. (2015). The decision path not taken. Neuron 87: 1128–1130.
Vaziri, S., Carlson, E.T., Wang, Z., & Connor, C.E. (2014). A channel for 3D environmental shape in anterior inferotemporal cortex. Neuron 84: 55–62. PMCID: PMC4247160.
Connor, C. E. (2014). Cortical geography is destiny. Nature Neuroscience 17: 1631–1632.
Yau, J.M., Connor, C.E., & Hsiao, S.S. (2013). Representation of tactile curvature in macaque somatosensory area 2. Journal of Neurophysiology 109: 2999–3012.
Hung, C.-C., Carlson, E.T., & Connor, C.E. (2012). Medial axis shape coding in macaque inferotemporal cortex. Neuron 74: 1099–1113.
Yau, J.M., Pasupathy, A., Brincat, S.L., & Connor, C.E. (2012). Curvature processing dynamics in macaque area V4. Cerebral Cortex 23: 198–209.
Roe, A.W., Chelazzi, L., Connor, C.E., Conway, B.R., Fujita, I., Gallant, J.L., Lu, H., & Vanduffel, W. (2012). Toward a unified theory of visual area V4. Neuron 74: 12–29.
Carlson, E.T., Rasquinha, R.J., Zhang, K., & Connor, C.E. (2011). A sparse object coding scheme in area V4. Current Biology 21: 288–293.
Kourtzi, Z., & Connor, C.E. (2011). Neural representations for object perception: structure, category, and adaptive coding. Annual Review of Neuroscience 34: 45–67.
Connor, C.E. (2010). A new viewpoint on faces. Science 330: 764–765.
Yau, J.M., Pasupathy, A., Fitzgerald, P.J., Hsiao, S.S., & Connor, C.E. (2009). Analogous intermediate shape coding in vision and touch. PNAS 106: 16457–16462.
Connor, C.E., Pasupathy, A., Brincat, S., & Yamane, Y. (2009). Neural transformation of object information by ventral pathway visual cortex. In: The Cognitive Neurosciences IV, Gazzaniga, M.S., ed., MIT Press, Cambridge, MA.
Yamane, Y., Carlson, E.T., Bowman, K.C., Wang, Z., & Connor, C.E. (2008). A neural code for three-dimensional object shape in macaque inferotemporal cortex. Nature Neuroscience 11: 1352–1360.
Vaziri, S.L., Pasupathy, A., Brincat, S.L., & Connor, C.E. (2008). Structural representation of object shape in the brain. In: Object Categorization: Computer and Human Vision Perspectives, Cambridge University Press.
Connor, C.E. (2008). Visual object representation. In: Encyclopedia of Neuroscience, Binder, M.D., Hirokawa, N., & Windhorst, U., eds., Springer-Verlag, Heidelberg.
Cadieu, C., Kouh, M., Pasupathy, A., Connor, C.E., Riesenhuber, M., & Poggio, T. (2007). A model of V4 shape selectivity and invariance. Journal of Neurophysiology 98: 1733–1750.
Connor, C.E. (2007). Transformation of shape information in the ventral pathway. Current Opinion in Neurobiology 17: 140–147.
Brincat, S.L., & Connor, C.E. (2006). Dynamic shape synthesis in posterior inferotemporal cortex. Neuron 49: 17–24.
Connor, C.E. (2006). Attention: beyond neural response increases. Nature Neuroscience 9: 1083–1084.
Hinkle, D.A., & Connor, C.E. (2005). Quantitative characterization of disparity tuning in ventral pathway area V4. Journal of Neurophysiology 94: 2726–2737.
Connor, C.E. (2005). Friends and grandmothers. Nature 435: 1036–1037.
Brincat, S.L., & Connor, C.E. (2004). Underlying principles of visual shape selectivity in posterior inferotemporal cortex. Nature Neuroscience 7: 880–886.
Connor, C.E., Egeth, H.E., & Yantis, S. (2004). Visual attention: bottom-up vs. top-down. Current Biology 14: R850–R852.
Connor, C.E. (2003). Active vision and visual activation in area V4. Neuron 40: 1056–1058.
Connor, C.E. (2003). Shape dimensions and object primitives. In: The Visual Neurosciences, Chalupa, L., & Werner, J.S., eds., MIT Press, Cambridge, MA.
Pasupathy, A., & Connor, C.E. (2002). Population coding of shape in area V4. Nature Neuroscience 5: 1332–1338.
Hinkle, D.A., & Connor, C.E. (2002). Three-dimensional orientation tuning in macaque area V4. Nature Neuroscience 5: 665–670.
Connor, C.E. (2002). Reconstructing a 3D world. Science 298: 376–377.
Connor, C.E. (2002). Representing whole objects: temporal neurons learn to play their parts. Nature Neuroscience 5: 1105–1106.
Pasupathy, A., & Connor, C.E. (2001). Shape representation in area V4: position-specific tuning for boundary conformation. Journal of Neurophysiology 86: 2505–2519.
Hinkle, D.A., & Connor, C.E. (2001). Disparity tuning in macaque area V4. NeuroReport 12: 365–369.
Connor, C.E. (2001). Visual perception: sunny side up. Current Biology 11: R776–R778.
Connor, C.E. (2001). Shifting receptive fields. Neuron 29: 548–549.
Connor, C.E. (2000). Visual perception: monkeys see things our way. Current Biology 10: R836–R838.
Pasupathy, A., & Connor, C.E. (1999). Responses to contour features in macaque area V4. Journal of Neurophysiology 82: 2490–2502.