Abstract
Combining auditory and visual information about the same external event enhances perception and behavioural performance. Numerous factors have been shown to contribute to the integration of visual and auditory stimuli, such as the spatial or semantic relationship between the two stimuli. We studied the influence of spatial disparity between the auditory and the visual stimuli on bimodal object recognition in a go/no-go task set in a realistic virtual environment. Participants were asked to react as fast as possible to a target object, presented in the visual and/or the auditory modality, and to inhibit responses to a distractor object. Reaction times to semantically congruent bimodal stimuli were significantly shorter than would be predicted by independent processing of the auditory and the visual target information. Moreover, reaction times were significantly shorter for semantically congruent bimodal stimuli (i.e. auditory and visual targets) than for semantically incongruent bimodal stimuli (i.e. the target presented in only one sensory modality). Importantly, these results were not altered by a large spatial disparity between the auditory and the visual targets. Altogether, our findings suggest that the rules governing multisensory integration vary according to the purpose for which auditory and visual stimuli are combined.
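The comparison against an "independent processing" prediction is typically formalized with the race model inequality; the statement below is a minimal sketch of that criterion, with the notation assumed here rather than taken from the abstract ($F_A$, $F_V$ and $F_{AV}$ denote the cumulative reaction-time distributions for auditory-only, visual-only and bimodal targets):

\[
F_{AV}(t) \;\le\; F_A(t) + F_V(t) \quad \text{for all } t,
\qquad \text{where } F_X(t) = P(\mathrm{RT}_X \le t).
\]

A violation of this bound at some $t$ means that bimodal responses are faster than any race between independently processed unimodal signals could produce; this is the sense in which a reaction-time gain for congruent bimodal stimuli exceeds the independent-processing prediction.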