Abstract
Although sound visualization and image sonification have been used extensively for scientific and artistic purposes, their combined effect is rarely considered. In this paper, we propose an iterative visualization/sonification approach as a sound generation mechanism. In particular, we visualize sounds using a textural self-similarity representation, which is then analysed to generate control data for a granular synthesizer. Following an eco-systemic approach, the output of the synthesizer is fed back into the system, creating a loop designed to produce novel, time-evolving sounds. The whole process runs in real time, implemented in Max/MSP using the FTM library. A qualitative analysis of the approach is presented and complemented by a discussion of visualization and sonification issues in the context of sound design.
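Concretely, one pass of this analysis/synthesis loop can be sketched as follows. This is a minimal Python illustration, not the paper's Max/MSP/FTM implementation: the short-time spectral features, the mapping from self-similarity statistics to grain parameters, and the toy granular_synth stub are all assumptions introduced here purely for illustration.

import numpy as np

def self_similarity(signal, frame_len=1024, hop=512):
    # Cosine self-similarity matrix of short-time magnitude spectra.
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([np.abs(np.fft.rfft(signal[i * hop : i * hop + frame_len]))
                       for i in range(n_frames)])
    frames /= np.linalg.norm(frames, axis=1, keepdims=True) + 1e-12
    return frames @ frames.T  # (n_frames, n_frames), values in [0, 1]

def grain_controls(ssm):
    # Hypothetical mapping: more frame-to-frame change -> denser grains;
    # more overall repetition -> longer grains.
    novelty = 1.0 - np.diag(ssm, k=1)
    return {"density": float(np.mean(novelty)),
            "grain_ms": 20.0 + 180.0 * float(np.mean(ssm))}

def granular_synth(source, density, grain_ms, sr=44100, out_len=44100):
    # Toy granulator: overlap-add Hann-windowed grains at random positions.
    rng = np.random.default_rng()
    grain_len = int(sr * grain_ms / 1000.0)
    out = np.zeros(out_len)
    for _ in range(max(1, int(density * 200))):
        src = rng.integers(0, len(source) - grain_len)
        dst = rng.integers(0, out_len - grain_len)
        out[dst:dst + grain_len] += source[src:src + grain_len] * np.hanning(grain_len)
    return out / (np.max(np.abs(out)) + 1e-12)

sound = np.random.default_rng(1).standard_normal(44100)  # seed sound: white noise
for _ in range(10):                                      # iterate the feedback loop
    sound = granular_synth(sound, **grain_controls(self_similarity(sound)))

Because each iteration re-analyses the synthesizer's own output, the self-similarity matrix, and hence the grain parameters, drift from pass to pass; this closed analysis/synthesis loop is the eco-systemic behaviour the abstract refers to.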