Tamonontamo (2012) is a piece for amplified vocal quartet, a choir of 24 singers, and live electronics. This article focuses on the work carried out in collaboration with the Real-Time Music Interaction Team at Ircam. Augustin Muller, the computer music designer working with me at Ircam on this project, extended the CataRT software by adding a Spat~ module that links the corpus-based synthesis to the spatialization.

In most works, the spatialization of sounds depends on the aesthetic choices of the composer. The Spat~ software allows the user, through a user-friendly graphical interface, to draw linear movements of a source in space among a pre-selected number of loudspeakers. In this patch, we used Spat~ to spatialize the sounds in a non-linear way: the logic of spatialization depends on the pair of audio descriptors chosen in the set-up of the CataRT graphical display. In this sense, the aesthetic ideas of the composer apply not locally to each sound but to the choice of the audio descriptors assigned to the x and y axes.

A further Spat~-related display has been implemented to separate the space of analysis/playing from the space of sound diffusion. The implementation of the Unispring algorithm in the mnm.distribute external Max/MSP object allows the grains to be redistributed, rescaling their positions inside a pre-drawn sub-space. The interpolation between different shapes, or a change of the audio descriptors on the x and y axes of either display, can be programmed and performed in real time. Storing the synthesis as a database (in a text file) makes it possible to recall the analysis and recover previously drawn shapes. Selected corpora in the database can also be hidden or permanently deleted. Using colour to work on a z axis is possible and is among the next steps of this work.
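The core mapping described above, from a grain's pair of audio descriptor values to a position inside a pre-drawn sub-space of the diffusion area, can be sketched as follows. This is an illustrative reconstruction in Python, not the actual Max/MSP/CataRT implementation: the function names, the choice of descriptor pair, and the rectangular sub-space are assumptions, and the real patch uses the Unispring algorithm (mnm.distribute) rather than a plain linear rescaling.

```python
def rescale(value, lo, hi, new_lo, new_hi):
    """Linearly map value from the range [lo, hi] to [new_lo, new_hi]."""
    if hi == lo:  # degenerate descriptor range: place in the middle
        return (new_lo + new_hi) / 2.0
    return new_lo + (value - lo) * (new_hi - new_lo) / (hi - lo)

def grain_position(grain, corpus, subspace, dx="centroid", dy="loudness"):
    """Map a grain's (x, y) descriptor pair into a pre-drawn rectangular
    sub-space of the diffusion area.

    grain    : dict of descriptor values for one sound unit
    corpus   : list of such dicts (used to find the descriptor ranges)
    subspace : ((x0, x1), (y0, y1)) target rectangle
    dx, dy   : descriptors assigned to the x and y axes (hypothetical names)
    """
    xs = [g[dx] for g in corpus]
    ys = [g[dy] for g in corpus]
    (x0, x1), (y0, y1) = subspace
    return (rescale(grain[dx], min(xs), max(xs), x0, x1),
            rescale(grain[dy], min(ys), max(ys), y0, y1))

# Example: a toy corpus of three grains mapped into the
# sub-space [0.2, 0.8] x [0.1, 0.5] of the diffusion area.
corpus = [{"centroid": 500.0, "loudness": -30.0},
          {"centroid": 2000.0, "loudness": -10.0},
          {"centroid": 3500.0, "loudness": -20.0}]
for g in corpus:
    print(grain_position(g, corpus, ((0.2, 0.8), (0.1, 0.5))))
```

Changing `dx` and `dy` here corresponds to reassigning the descriptors on the axes of the CataRT display, and changing `subspace` corresponds to drawing a new diffusion shape; both operations, as noted above, can be performed in real time in the actual patch.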
Upgrading the use of Spat~ to an Ambisonics system for sound diffusion opens the possibility of achieving, through concatenative synthesis, a genuine work of 3D sound sculpting.