{"id":59856,"date":"2024-05-01T13:04:47","date_gmt":"2024-05-01T17:04:47","guid":{"rendered":"https:\/\/krieger.jhu.edu\/humanities-institute\/event\/bodian-seminar-hiroyuki-kato\/"},"modified":"2024-05-08T13:19:23","modified_gmt":"2024-05-08T17:19:23","slug":"bodian-seminar-hiroyuki-kato","status":"publish","type":"tribe_events","link":"https:\/\/krieger.jhu.edu\/humanities-institute\/event\/bodian-seminar-hiroyuki-kato\/","title":{"rendered":"Bodian Seminar: Hiroyuki Kato"},"content":{"rendered":"
\n

\n\t\t
\n\t\t\t\t\t<\/span><\/p>\n

\t\t\t\t\t
\n\t\t\t\t @ \t\t\t<\/span>
\n\t\t\t
\n\t\t\t\t\t\t\t<\/span><\/p>\n<\/h2>\n<\/div>\n

Hiroyuki Kato, Ph.D.
Associate Professor
Department of Psychiatry & Neuroscience Center
University of North Carolina at Chapel Hill

Sensory Integration along the Auditory Cortical Hierarchy

Our brain’s ability to parse overlapping sounds and reconstruct individual perceptual sound objects is essential in navigating acoustically complex environments. Despite ample evidence suggesting the critical roles of higher-order auditory cortices in integrating complex acoustic features, how they interact with primary areas for specialized computations remains unclear. Recently, using in vivo two-photon calcium imaging and unit recording in mice, we have reported a division of labor in sound feature extraction between two auditory cortical regions, A1 and A2. Specifically, A1 neurons preferentially encode temporal changes in frequencies (frequency modulations, or FMs), while A2 neurons function as spectral integrators of concurrent frequency stacks (Aponte et al., 2021; Kline et al., 2021, 2023). In this presentation, I will discuss our latest work to understand the interplay between A1 and A2, particularly in the face of combinations of sound features like stacks of FM tones, which are prevalent in vocalizations. Detailed quantification of A2 responses revealed their dependence on temporal coincidence, frequency co-modulation, and spectral proximity between two FM tones. These results align with the “common fate” and “proximity” Gestalt principles for perceptual integration, underscoring A2’s critical role in the spectrotemporal binding of sound features. This finding opens new avenues for in-depth explorations into the neuronal interactions between A1 and A2 underlying this computation. Finally, I will introduce our ongoing research exploring the more distal inter-areal interactions between the auditory and frontal cortices, focusing on their role in predictive sensory coding.

Faculty Host: Xiaoqin Wang
