Ircam-Centre Pompidou


    %0 Generic
    %A Machart, Pierre
    %T Morphological Segmentation
    %D 2009
    %I UPMC / IRCAM
    %F Machart09a
    %K segmentation
    %K temporal modeling
    %K segmental models
    %K morphology
    %K semi-supervised interactive learning
%X Many applications and practices of working with recorded sounds are based on the segmentation and concatenation of fragments of audio streams. In collaborations with composers and sound artists we have observed that a recurrent musical event or sonic shape is often identified by the temporal evolution of its sound features. We aim to contribute to the development of a novel segmentation method, based on the evolution of audio features, that can be adapted to a given audio material in interaction with the user. As a first step, a prototype of a semi-supervised, interactive segmentation tool was implemented. With this prototype, the user provides a partial annotation of the stream to be segmented. In an interactive loop, the system builds models of the morphological classes the user defines. These models are then used to produce an exhaustive segmentation of the stream, generalizing the user's annotation. This relies on Segmental Models, which have been adapted and implemented for sound streams represented by a set of audio descriptors (MFCCs). The main novelty of this study is that the models of the morphological classes are built from real data drawn from various audio materials. A dedicated method for building the global model is defined, combining learning paradigms with the integration of user knowledge. The overall approach is validated through experiments with both synthesized streams and real-world materials (environmental sounds and music pieces). A qualitative, less formal validation also emerges from the feedback given by composers who worked with us throughout the internship.
    %1 8
    %2 1
    %U http://articles.ircam.fr/textes/Machart09a/
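    The abstract above outlines a pipeline: extract frame-wise audio descriptors (MFCCs), learn models of user-defined morphological classes from a partial annotation, and decode a full segmentation of the stream with Segmental Models. The sketch below is only a rough illustration of that idea, not the method from the report: it assumes librosa for MFCC extraction, stands in simple diagonal-Gaussian class models for the report's morphological models, and uses a basic segmental (semi-Markov) Viterbi decoding with a fixed per-segment penalty.

        # Illustrative sketch of the pipeline described in the abstract.
        # Everything here (Gaussian class models, the length cap, the
        # per-segment penalty, librosa for MFCCs) is an assumption made
        # for the sake of a short, runnable example, not the report's
        # actual Segmental Models.

        import numpy as np
        import librosa


        def mfcc_frames(path, n_mfcc=13):
            """Load an audio file and return its MFCC frames, shape (T, n_mfcc)."""
            y, sr = librosa.load(path, sr=None, mono=True)
            return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T


        def fit_class_models(X, annotation):
            """Fit one diagonal Gaussian per annotated morphological class.

            `annotation` maps a class name to a list of (start_frame, end_frame)
            intervals taken from the user's partial labelling of the stream.
            """
            models = {}
            for label, intervals in annotation.items():
                frames = np.vstack([X[s:e] for s, e in intervals])
                models[label] = (frames.mean(axis=0), frames.var(axis=0) + 1e-6)
            return models


        def frame_loglik(X, models):
            """Per-frame log-likelihood under each class model, shape (T, C)."""
            lls = []
            for mean, var in models.values():
                ll = -0.5 * (((X - mean) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
                lls.append(ll)
            return np.stack(lls, axis=1)


        def segmental_viterbi(loglik, max_len=200, seg_penalty=5.0):
            """Best segmentation by dynamic programming over segment end points.

            A segment [s, e) labelled c scores the sum of its frame log-likelihoods
            minus a fixed per-segment penalty that discourages over-segmentation.
            Returns a list of (start_frame, end_frame, class_index) tuples.
            """
            T, C = loglik.shape
            cum = np.vstack([np.zeros(C), np.cumsum(loglik, axis=0)])  # (T+1, C)
            best = np.full(T + 1, -np.inf)
            best[0] = 0.0
            back = [None] * (T + 1)
            for e in range(1, T + 1):
                for s in range(max(0, e - max_len), e):
                    seg_scores = cum[e] - cum[s]          # score of [s, e) per class
                    c = int(np.argmax(seg_scores))
                    score = best[s] + seg_scores[c] - seg_penalty
                    if score > best[e]:
                        best[e] = score
                        back[e] = (s, c)
            segments, e = [], T
            while e > 0:
                s, c = back[e]
                segments.append((s, e, c))
                e = s
            return segments[::-1]

    Given a partial annotation such as {"impulse": [(0, 40)], "texture": [(200, 320)]} (frame indices and class names are hypothetical), one would call fit_class_models, frame_loglik and segmental_viterbi in sequence to label every frame of the stream; in the interactive setting described in the abstract, the user would then correct the result and the loop would repeat.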
