Sound synthesis generally focuses on reproducing the spectral characteristics of the source or on simulating its physical behavior. Less attention is paid to the playback step, which is usually reduced to simple diffusion over a conventional loudspeaker set-up. Given the perceptual importance of faithfully reproducing the radiation properties of the source, this paper presents a method that combines a synthesis engine, based on physical modeling, with a rendering system allowing accurate control of the produced sound field. Two sound-field synthesis models are considered. In the first, a local 3D array of transducers is controlled by signal processing to create elementary directivity patterns that can then be combined to shape a more complex radiation. The dual approach consists in surrounding the audience with transducer arrays driven by Wave Field Synthesis in order to simulate the sound field associated with these elementary directivity patterns. In both cases, the different radiating modes of a given instrument are synthesized separately, each with its associated radiation pattern, and then superimposed in the spatial domain, i.e. during propagation in air. This approach, referred to as "Spatial Additive Synthesis", is illustrated with examples from several musical instruments.
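To illustrate how elementary directivity patterns could be combined to shape a more complex radiation, the following sketch (a hypothetical example for illustration only, not the paper's implementation) superimposes a monopole with first-order dipole patterns; the weight vector plays the role of the per-mode radiation control described above:

```python
import math

# Hypothetical illustration: a complex radiation pattern approximated as a
# weighted sum of elementary directivity patterns -- here a monopole and the
# three first-order dipoles (along x, y, z). Angles are in radians.

def elementary_patterns(azimuth, elevation):
    """Gains of the monopole and x/y/z dipoles for a given direction."""
    return [
        1.0,                                         # monopole (omnidirectional)
        math.cos(elevation) * math.cos(azimuth),     # dipole along x
        math.cos(elevation) * math.sin(azimuth),     # dipole along y
        math.sin(elevation),                         # dipole along z
    ]

def combined_directivity(weights, azimuth, elevation):
    """Weighted superposition of elementary patterns -> shaped radiation."""
    return sum(w * p for w, p in
               zip(weights, elementary_patterns(azimuth, elevation)))

# Example: equal monopole/x-dipole weights give a cardioid-like pattern
# in the horizontal plane (unit gain toward +x, a null toward -x).
weights = [0.5, 0.5, 0.0, 0.0]
front = combined_directivity(weights, 0.0, 0.0)      # toward +x -> 1.0
back = combined_directivity(weights, math.pi, 0.0)   # toward -x -> 0.0
```

In the spatial additive scheme sketched here, each radiating mode of the instrument would carry its own weight vector, and the resulting patterns are superimposed acoustically during propagation.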