Record view
%0 Book Section
%A Schwarz, Diemo
%A Caramiaux, Baptiste
%T Interactive Sound Texture Synthesis Through Semi-Automatic User Annotations
%D 2014
%E Aramaki, M., Derrien, O., Kronland-Martinet, R., Ystad, S.
%B Sound, Music, and Motion: Lecture Notes in Computer Science, Vol. 8905
%C Marseille
%I Springer International Publishing
%P 372-392
%F Schwarz14a
%K sound textures
%K audio descriptors
%K corpus-based synthesis
%K canonical correlation analysis
%K canonical time warping
%X We present a way to make environmental recordings controllable again by the use of continuous annotations of the high-level semantic parameter one wishes to control, e.g. wind strength or crowd excitation level. A partial annotation can be propagated to cover the entire recording via cross-modal analysis between gesture and sound by canonical time warping (CTW). The annotations serve as a descriptor for lookup in corpus-based concatenative synthesis in order to invert the sound/annotation relationship. The workflow has been evaluated by a preliminary subject test and results on canonical correlation analysis (CCA) show high consistency between annotations and a small set of audio descriptors being well correlated with them. An experiment of the propagation of annotations shows the superior performance of CTW over CCA with as little as 20 s of annotated material.
%1 4
%2 3
%U http://architexte.ircam.fr/textes/Schwarz14a/
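
Note: the abstract above mentions measuring consistency between continuous annotations and a small set of audio descriptors via canonical correlation analysis (CCA). The sketch below is not the authors' implementation; it only illustrates, under assumed inputs (a synthetic annotation curve and a random matrix standing in for real per-frame audio descriptors), how such a canonical correlation could be computed with scikit-learn.

    # Minimal CCA sketch: correlate an annotation curve with descriptors.
    # Inputs are synthetic stand-ins, not data from the paper.
    import numpy as np
    from sklearn.cross_decomposition import CCA

    rng = np.random.default_rng(0)
    n_frames = 500
    # Hypothetical descriptor matrix (n_frames x n_descriptors).
    descriptors = rng.standard_normal((n_frames, 4))
    # Hypothetical annotation curve, loosely driven by the descriptors.
    annotation = descriptors @ np.array([0.8, 0.1, 0.0, 0.1]) \
        + 0.1 * rng.standard_normal(n_frames)

    cca = CCA(n_components=1)
    ann_c, desc_c = cca.fit_transform(annotation.reshape(-1, 1), descriptors)

    # Canonical correlation: how well the descriptor combination tracks
    # the annotation over time.
    r = np.corrcoef(ann_c[:, 0], desc_c[:, 0])[0, 1]
    print(f"canonical correlation: {r:.3f}")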