{"id":177,"date":"2017-03-27T14:14:19","date_gmt":"2017-03-27T18:14:19","guid":{"rendered":"http:\/\/krieger.jhu.edu\/mind-brain\/?post_type=people&p=177"},"modified":"2025-08-28T10:33:31","modified_gmt":"2025-08-28T14:33:31","slug":"ed-connor","status":"publish","type":"people","link":"https:\/\/krieger.jhu.edu\/mbi\/directory\/ed-connor\/","title":{"rendered":"Ed Connor"},"featured_media":179,"template":"","role":[61],"filter":[],"class_list":["post-177","people","type-people","status-publish","has-post-thumbnail","hentry","role-faculty"],"acf":[],"post_meta_fields":{"_edit_lock":["1756391675:729"],"_edit_last":["729"],"ecpt_people_alpha":["Connor"],"ecpt_position":["Professor of Neuroscience
Director, Zanvyl Krieger Mind\/Brain Institute"],"ecpt_expertise":["Shape Processing in Higher Level Visual Cortex"],"ecpt_email":["Connor@jhu.edu"],"ecpt_office":["371 Krieger Hall"],"ecpt_research":["
<p><strong>Object and Scene Vision:\u00a0\u00a0Population Coding, Algorithms, Deep Networks, Prosthetics<\/strong><\/p>\r\n
<p>Vision is your superpower. At a glance, you can tell where you are, what is around you, what just happened, and what is about to happen.\u00a0 You effortlessly perceive the precise 3D structure of things in your environment, at distances ranging from millimeters to miles.\u00a0 You know what things are called, how valuable they are, how old or new, fresh or rotten, strong or weak.\u00a0 You intuit their material, mechanical, and energetic properties, allowing you to anticipate and alter physical events.\u00a0 You effectively read the minds of other humans and animals based on tiny variations in facial configuration and body pose.\u00a0 A picture is worth many times a thousand words to you.<\/p>\r\n
<p>All this information seems to exist outside you, immediately and effortlessly available.\u00a0 Understanding what you see seems trivial\u2014you only have to <em>look<\/em> at it!\u00a0 We are so good at vision that we tend not to recognize it as an ability or a process of any kind.\u00a0 But in fact it is one of the most difficult things the brain does.\u00a0 Computer vision, even with deep networks, has not even begun to approach the kind of visual understanding that comes so easily to us.\u00a0 Computers can beat us at math, chess, Go, and Jeopardy, but they cannot understand the visual world the way we do.<\/p>\r\n
<p>Our visual appreciation of the world emerges from networks of billions of neurons in the ventral visual pathway of the brain.\u00a0 Our lab studies neural information processing in the intermediate and higher-level stages of this pathway.\u00a0 We want to understand how the ventral pathway transforms images into knowledge about the world.\u00a0 Images are just 2D arrays of numerical values corresponding to pigment intensities, pixel colors, or photoreceptor activations in the eye.\u00a0 You could not deduce what is in an image from the numerical values themselves, but that is effectively what the brain must do with neural signals coming from the eyes.\u00a0 If we could decipher how the brain does this at the algorithmic level, we could use the same principles to build computer vision systems with human-like capabilities.\u00a0 We could develop prosthetic interfaces for blind patients that hijack the mechanisms of the ventral pathway to induce vivid visual experiences.\u00a0 And we would understand the substrate for our rich, detailed, aesthetic experiences of the visual world.<\/p>"],"ecpt_publications":["
<p>Sasikumar, D., Emeric, E., Stuphorn, V., & Connor, C.E. (2018). First-pass processing of value cues in the ventral visual pathway. <em>Current Biology<\/em> <strong>28<\/strong>: 538\u2013548.<\/p>\r\n
<p>Connor, C.E., & Knierim, J.J. (2017). Integration of objects and space in perception and memory. <em>Nature Neuroscience<\/em> <strong>20<\/strong>: 1493\u20131503.<\/p>\r\n
<p>Vaziri, S., & Connor, C.E. (2016). Representation of Gravity-Aligned Scene Structure in Ventral Pathway Visual Cortex. <em>Current Biology<\/em> <strong>26<\/strong>: 766\u2013774.<\/p>\r\n
<p>Connor, C.E., & Stuphorn, V. (2015). The Decision Path Not Taken. <em>Neuron<\/em> <strong>87<\/strong>: 1128\u20131130.<\/p>\r\n
<p>Vaziri, S., Carlson, E.T., Wang, Z., & Connor, C.E. (2014). A channel for 3D environmental shape in anterior inferotemporal cortex. <em>Neuron<\/em> <strong>84<\/strong>: 55\u201362. PMCID: PMC4247160.<\/p>\r\n
<p>Connor, C.E. (2014). Cortical geography is destiny. <em>Nature Neuroscience<\/em> <strong>17<\/strong>: 1631\u20131632.<\/p>\r\n
<p>Yau, J.M., Connor, C.E., & Hsiao, S.S. (2013). Representation of tactile curvature in macaque somatosensory area 2. <em>Journal of Neurophysiology<\/em> <strong>109<\/strong>: 2999\u20133012.<\/p>\r\n
<p>Hung, C.-C., Carlson, E.T., Connor, C.E. (2012) Medial axis shape coding in macaque inferotemporal cortex. <em>Neuron<\/em> <strong>74<\/strong>: 1099\u20131113.<\/p>\r\n
<p>Yau, J.M., Pasupathy, A., Brincat, S.L., Connor, C.E. (2012) Curvature processing dynamics in macaque area V4. <em>Cerebral Cortex<\/em> <strong>23<\/strong>: 198\u2013209.<\/p>\r\n
<p>Roe, A.W., Chelazzi, L., Connor, C.E., Conway, B.R., Fujita, I., Gallant, J.L., Lu, H., Vanduffel, W. (2012) Toward a unified theory of visual area V4. <em>Neuron<\/em> <strong>74<\/strong>: 12\u201329.<\/p>\r\n
<p>Carlson, E.T., Rasquinha, R.J., Zhang, K., Connor, C.E. (2011) A sparse object coding scheme in area V4. <em>Current Biology<\/em> <strong>21<\/strong>: 288\u2013293.<\/p>\r\n
<p>Kourtzi, Z. & Connor, C.E. (2011) Neural representations for object perception: structure, category, and adaptive coding.\u00a0 In: <em>Annual Review of Neuroscience<\/em> <strong>34<\/strong>: 45\u201367.<\/p>\r\n
<p>Connor, C.E. (2010) A new viewpoint on faces. <em>Science<\/em> <strong>330<\/strong>: 764\u2013765.<\/p>\r\n
<p>Yau, J.M., Pasupathy, A., Fitzgerald, P.J., Hsiao, S.S. & Connor, C.E. (2009) Analogous intermediate shape coding in vision and touch. <em>PNAS<\/em> <strong>106<\/strong>: 16457\u201316462.<\/p>\r\n
<p>Connor, C.E., Pasupathy, A., Brincat, S. & Yamane, Y. (2009) Neural transformation of object information by ventral pathway visual cortex.\u00a0 In: <em>The Cognitive Neurosciences IV<\/em>, Gazzaniga, M.S., ed., MIT Press, Cambridge, MA.<\/p>\r\n
<p>Yamane, Y., Carlson, E.T., Bowman, K.C., Wang, Z. & Connor, C.E. (2008) A neural code for three-dimensional object shape in macaque inferotemporal cortex. <em>Nature Neuroscience<\/em> <strong>11<\/strong>: 1352\u20131360.<\/p>\r\n
<p>Vaziri, S.L., Pasupathy, A., Brincat, S.L. & Connor, C.E. (2008) Structural representation of object shape in the brain.\u00a0 In: <em>Object Categorization: Computer and Human Vision Perspectives<\/em>, Cambridge University Press.<\/p>\r\n
<p>Connor, C.E. (2008) Visual object representation.\u00a0 In: <em>Encyclopedia of Neuroscience<\/em>, Binder, M.D., Hirokawa, N. & Windhorst, U., eds., Springer-Verlag, Heidelberg.<\/p>\r\n
<p>Cadieu, C., Kouh, M., Pasupathy, A., Connor, C.E., Riesenhuber, M. & Poggio, T. (2007) A model of V4 shape selectivity and invariance. <em>Journal of Neurophysiology<\/em> <strong>98<\/strong>: 1733\u20131750.<\/p>\r\n
<p>Connor, C.E. (2007) Transformation of shape information in the ventral pathway. <em>Current Opinion in Neurobiology<\/em> <strong>17<\/strong>: 140\u2013147.<\/p>\r\n
<p>Brincat, S.L. & Connor, C.E. (2006) Dynamic shape synthesis in posterior inferotemporal cortex. <em>Neuron<\/em> <strong>49<\/strong>: 17\u201324.<\/p>\r\n
<p>Connor, C.E. (2006) Attention: beyond neural response increases. <em>Nature Neuroscience<\/em> <strong>9<\/strong>: 1083\u20131084.<\/p>\r\n
<p>Hinkle, D.A. & Connor, C.E. (2005) Quantitative characterization of disparity tuning in ventral pathway area V4. <em>Journal of Neurophysiology<\/em> <strong>94<\/strong>: 2726\u20132737.<\/p>\r\n
<p>Connor, C.E. (2005) Friends and grandmothers. <em>Nature<\/em> <strong>435<\/strong>: 1036\u20131037.<\/p>\r\n
<p>Brincat, S.L. & Connor, C.E. (2004) Underlying principles of visual shape selectivity in posterior inferotemporal cortex. <em>Nature Neuroscience<\/em> <strong>7<\/strong>: 880\u2013886.<\/p>\r\n
<p>Connor, C.E., Egeth, H.E. & Yantis, S. (2004) Visual attention: bottom-up vs. top-down. <em>Current Biology<\/em> <strong>14<\/strong>: R850\u2013R852.<\/p>\r\n
<p>Connor, C.E. (2003) Active vision and visual activation in area V4. <em>Neuron<\/em> <strong>40<\/strong>: 1056\u20131058.<\/p>\r\n
<p>Connor, C.E. (2003) Shape dimensions and object primitives.\u00a0 In: <em>The Visual Neurosciences<\/em>, Chalupa, L. & Werner, J.S., eds., MIT Press, Cambridge, MA.<\/p>\r\n
<p>Pasupathy, A. & Connor, C.E. (2002) Population coding of shape in area V4. <em>Nature Neuroscience<\/em> <strong>5<\/strong>: 1332\u20131338.<\/p>\r\n
<p>Hinkle, D.A. & Connor, C.E. (2002) Three-dimensional orientation tuning in macaque area V4. <em>Nature Neuroscience<\/em> <strong>5<\/strong>: 665\u2013670.<\/p>\r\n
<p>Connor, C.E. (2002) Reconstructing a 3D world. <em>Science<\/em> <strong>298<\/strong>: 376\u2013377.<\/p>\r\n
<p>Connor, C.E. (2002) Representing whole objects: temporal neurons learn to play their parts. <em>Nature Neuroscience<\/em> <strong>5<\/strong>: 1105\u20131106.<\/p>\r\n
<p>Pasupathy, A. & Connor, C.E. (2001) Shape representation in area V4: Position-specific tuning for boundary conformation. <em>Journal of Neurophysiology<\/em> <strong>86<\/strong>: 2505\u20132519.<\/p>\r\n
<p>Hinkle, D.A. & Connor, C.E. (2001) Disparity tuning in macaque area V4. <em>NeuroReport<\/em> <strong>12<\/strong>: 365\u2013369.<\/p>\r\n
<p>Connor, C.E. (2001) Visual perception: sunny side up. <em>Current Biology<\/em> <strong>11<\/strong>: R776\u2013R778.<\/p>\r\n\r\n