Abstract
Many animals depend upon vocal and facial communication signals for survival and social interactions, but it remains unclear how voices and faces are integrated by the brain. Most studies have evaluated the unisensory processing of either vocal or facial information in brain regions thought to be voice- or face-sensitive. Other studies have described multisensory interactions between voices and faces in the brain, but only for a few regions, such as those close to the primary auditory cortex or in the prefrontal cortex. This work addresses whether the responses of neurons in a voice-sensitive brain region, recently identified in monkeys with functional MRI, are influenced by faces. Extracellular recordings were conducted in two awake rhesus macaques. We targeted the anterior voice-sensitive cluster on the superior temporal plane, which was first localized in each animal with fMRI (see the linked presentation) and resides ~5 mm anterior to the tonotopically organized field RT. For stimulation we used movies of vocalizing monkeys and humans that were matched in their low-level auditory and visual features. These dynamic face and voice stimuli were presented in auditory-only, visual-only, or audio-visual stimulation conditions. The recordings yielded a total of 318 local field potential (LFP) sites and 208 single- and multi-units. Significant multisensory interactions were observed in 70% of the LFP sites and in 33% of the single- and multi-unit responses. We observed both suppression and enhancement of the neuronal