Ircam-Centre Pompidou





    %0 Conference Proceedings
    %A Einbond, Aaron
    %A Trapani, Christopher
    %A Schwarz, Diemo
    %T Precise Pitch Control in Real Time Corpus-Based Concatenative Synthesis
    %D 2012
    %B International Computer Music Conference (ICMC)
    %C Ljubljana
    %F Einbond12a
    %K concatenative synthesis
    %K feature modulation synthesis
    %K CataRT
    %K bach
    %K Max/MSP
    %K microtonality
    %K audio mosaicing
    %X The need for fine-tuned microtonal pitch combined with the timbral richness of corpus-based concatenative synthesis has led to the development of a new tool for corpus-based pitch and loudness control in real time with CataRT. Drawing on recent research in feature modulation synthesis (FMS) as well as the bach library for Max/MSP, we have implemented a set of new modules for CataRT that permit the user to define microtonal harmonies graphically and combine them with other audio descriptors to trigger concatenative synthesis in real or deferred time. Pitch information is generated from a pitch analysis or extracted from soundfile metadata, and loudness may be controlled independently for different sound sets. Musical implementations already suggest promising results as well as future goals to generalize this approach to further timbral features for corpus-based FMS.
    %1 6
    %2 2
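
    The selection mechanism the abstract alludes to can be sketched as descriptor-based unit selection: each corpus unit carries audio descriptors (here, only a fractional MIDI pitch), and synthesis picks the unit nearest a microtonal target. This is a minimal illustration of the general idea, not CataRT's actual implementation; the `select_unit` helper and unit dictionaries are hypothetical.

    ```python
    def select_unit(corpus, target_pitch):
        """Return the corpus unit whose pitch descriptor is nearest the target.

        corpus: list of dicts with a 'pitch' key holding fractional MIDI
        note numbers, so microtonal targets like 60.5 (C4 + 50 cents)
        are expressible directly.
        """
        return min(corpus, key=lambda unit: abs(unit["pitch"] - target_pitch))

    # A toy corpus of three analyzed sound units.
    corpus = [
        {"file": "a.wav", "pitch": 60.0},
        {"file": "b.wav", "pitch": 60.5},
        {"file": "c.wav", "pitch": 61.0},
    ]

    # Request C4 + 40 cents; the 60.5 unit is the closest match.
    print(select_unit(corpus, 60.4)["file"])  # -> b.wav
    ```

    In a real corpus-based system the distance would be computed over several weighted descriptors (pitch, loudness, timbral features) rather than pitch alone.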

    © Ircam - Centre Pompidou 2005.