Abstract
Future generations of audio content will promote immersion and presence, two notions strongly related to the spatial attributes of the auditory scene. Presence is particularly stimulated in active situations where the listener may navigate through the soundscape or interact with the sound objects. In such situations, enabled by Wave Field Synthesis (WFS) or binaural rendering, the continuous update of the acoustic cues and their congruence with the listener's actions have a strong impact on the sensation of presence. Among these cues, this paper focuses on the directional properties of sound source radiation. The paper first reviews the perceptual and cognitive aspects associated with the radiation properties of sound sources and illustrates them in the context of future audio applications. These applications range from the design of audio devices or active materials with controllable radiation patterns to sound field reproduction with accurate rendering of directivity. In the latter case, the challenge is to render convincing 3D sound objects rather than 2D sound pictures, allowing the listener to experience coherent direct sound and room effect while wandering through the soundscape. The paper then describes several methods dedicated to the reproduction of sound source directivity; they rely on physical modeling of the radiation properties and on array signal processing. In a first approach, a 3D array of transducers located on stage is controlled so as to radiate sound in a way equivalent to the simulated source. In a dual approach, the transducer array is distributed around the audience and WFS is used to create a sound field similar to the one that the virtual source would have generated. The two approaches are compared, within their respective application domains, in terms of accuracy of the reproduced direct sound and associated room effect, the corresponding recording techniques, practical implementation, and flexibility.
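
As a minimal illustration of the first approach (a sketch under stated assumptions, not the method studied in the paper), the driving gains of a compact 3D transducer array can be fitted at each frequency to a target directivity pattern by regularized least squares, given a measured or simulated transfer matrix between the drivers and a set of far-field directions. All names, dimensions, and data below are hypothetical placeholders.

import numpy as np

# Sketch: fit complex driver gains so that the compact array's far-field
# radiation approximates a target directivity pattern at one frequency.
# H[d, l]   : pressure radiated by driver l toward direction d (placeholder data).
# target[d] : far-field pattern of the simulated source in direction d.
rng = np.random.default_rng(0)
n_directions, n_drivers = 72, 12
H = (rng.standard_normal((n_directions, n_drivers))
     + 1j * rng.standard_normal((n_directions, n_drivers)))
target = rng.standard_normal(n_directions) + 1j * rng.standard_normal(n_directions)

def directivity_gains(H, target, reg=1e-2):
    """Regularized least-squares fit of driver gains to a target pattern."""
    A = H.conj().T @ H + reg * np.eye(H.shape[1])
    b = H.conj().T @ target
    return np.linalg.solve(A, b)

g = directivity_gains(H, target)
error = np.linalg.norm(H @ g - target) / np.linalg.norm(target)
print(f"relative pattern mismatch: {error:.2f}")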
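
The dual approach can be sketched in a similarly simplified way (again an illustration, not the paper's formulation): at a single frequency, each secondary source of a planar WFS distribution is driven by the normal derivative of the modeled field of a directive virtual source on the array surface (Rayleigh I integral). The toy source model, the geometry, and all numerical values are assumptions, and truncation, discretization, and the usual corrections for practical line arrays are ignored.

import numpy as np

c, f = 343.0, 500.0
k = 2 * np.pi * f / c

def green(r):
    """Free-field 3D Green's function exp(-jkr) / (4*pi*r)."""
    return np.exp(-1j * k * r) / (4 * np.pi * r)

def virtual_source(x, y, z):
    """Toy directive virtual source behind the array: two out-of-phase monopoles."""
    r1 = np.sqrt((x - 0.1) ** 2 + (y + 2.0) ** 2 + z ** 2)
    r2 = np.sqrt((x + 0.1) ** 2 + (y + 2.0) ** 2 + z ** 2)
    return green(r1) - green(r2)

# Planar secondary-source distribution in the plane y = 0, radiating toward y > 0.
xs = np.linspace(-4.0, 4.0, 161)
zs = np.linspace(-4.0, 4.0, 161)
X0, Z0 = np.meshgrid(xs, zs)
dS = (xs[1] - xs[0]) * (zs[1] - zs[0])

# Rayleigh I driving function: D = -2 * dP/dn, with the normal n pointing
# into the listening area (y > 0); dP/dn estimated by finite differences.
dy = 1e-3
dP_dn = (virtual_source(X0, dy, Z0) - virtual_source(X0, -dy, Z0)) / (2 * dy)
drive = -2.0 * dP_dn

# Pressure reproduced at a listening position, compared with the source model.
xl, yl, zl = 0.3, 1.5, 0.0
r = np.sqrt((xl - X0) ** 2 + yl ** 2 + (zl - Z0) ** 2)
p_rep = np.sum(drive * green(r)) * dS
print(abs(p_rep), abs(virtual_source(xl, yl, zl)))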