Ircam-Centre Pompidou



  • Record view (Refer format)

    %0 Conference Proceedings
    %A Schwarz, Diemo
    %A Schnell, Norbert
    %T Sound Search by Content-Based Navigation in Large Databases
    %D 2009
    %B Sound and Music Computing (SMC)
    %C Porto
    %F Schwarz09b
    %K corpus-based synthesis
    %K content-based retrieval
    %K search algorithms
    %K dimensionality reduction
    %K visualisation
    %K databases
    %K interaction
    %K navigation
    %X We propose to apply the principle of interactive real-time corpus-based concatenative synthesis to search in effects or instrument sound databases, which becomes content-based navigation in a space of descriptors and categories. This surpasses existing approaches of presenting the sound database first in a hierarchy given by metadata, and then letting the user listen to the remaining list of responses. It is based on three scalable algorithms and novel concepts for efficient visualisation and interaction: Fast similarity-based search by a kD-Tree in the high-dimensional descriptor space, a mass-spring model for layout, efficient dimensionality reduction for visualisation by hybrid multi-dimensional scaling, and novel modes for interaction in a 2D representation of the descriptor space such as filtering, tiling, and fluent navigation by zoom and pan, supported by an efficient 3-tier visualisation architecture. The algorithms are implemented and tested as C-libraries and Max/MSP externals within a prototype sound exploration application.
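The fast similarity-based search mentioned in the abstract uses a k-d tree over the high-dimensional descriptor space. A minimal sketch of that general idea in pure Python follows; the two-dimensional "descriptor" values below are illustrative only and are not taken from the paper or its implementation:

```python
# Hedged sketch: nearest-neighbour search with a k-d tree over a toy
# descriptor space. Descriptor names and values are hypothetical.
import math
from typing import List, Optional, Tuple

class KDNode:
    def __init__(self, point, axis, left=None, right=None):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points: List[Tuple[float, ...]], depth: int = 0) -> Optional[KDNode]:
    """Recursively split on the median, cycling through axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return KDNode(points[mid], axis,
                  build(points[:mid], depth + 1),
                  build(points[mid + 1:], depth + 1))

def nearest(node, target, best=None):
    """Return (distance, point) of the closest stored point to target."""
    if node is None:
        return best
    d = math.dist(node.point, target)
    if best is None or d < best[0]:
        best = (d, node.point)
    diff = target[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, target, best)
    # Descend the far side only if the splitting plane is closer than the
    # best match found so far (this pruning is what makes the search fast).
    if abs(diff) < best[0]:
        best = nearest(far, target, best)
    return best

# Toy (spectral centroid, loudness) descriptors for four sounds:
sounds = [(0.2, 0.5), (0.9, 0.1), (0.4, 0.7), (0.6, 0.3)]
tree = build(sounds)
print(nearest(tree, (0.5, 0.6)))  # closest sound is (0.4, 0.7)
```

In the real system the tree indexes many more descriptors per unit, but the pruning principle is the same: whole subtrees are skipped whenever their splitting plane lies farther away than the current best match.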

    © Ircam - Centre Pompidou 2005.