Ircam-Centre Pompidou

    %0 Conference Proceedings
    %A Noisternig, Markus
    %A Katz, Brian F. G.
    %A D'Alessandro, Christophe
    %T Spatial rendering of audio-visual synthetic speech use for immersive environments
    %D 2008
    %B 155th ASA, 5th Forum Acusticum, and 2nd ASA-EAA Joint Conference (Acoustics'08)
    %C Paris
    %V 123
    %P 3939-3939
    %F Noisternig08c
    %K Perception of voice and talker characteristics
    %K computer simulation of acoustics in enclosures
    %X Synthetic speech is usually delivered as a mono audio signal. In this project, audio-visual speech synthesis is attributed to a virtual agent moving in a virtual 3-dimensional scene. More realistic acoustic rendering is achieved by taking into account the position of the agent in the scene, the acoustics of the room depicted in the scene, and the orientation of the virtual character's head relative to the listener. 3D phoneme-dependent radiation patterns have been measured for two speakers and a singer. These data are integrated into a Text-To-Speech system using a phoneme-to-directivity-pattern transcription module which also includes a phoneme-to-viseme model for the agent. In addition to the effects related to the agent's head orientation for the direct sound, a room acoustics model allows for realistic rendering of the room effect as well as the apparent distance as depicted in the virtual scene. Real-time synthesis is implemented in a 3D audio rendering system.
    %1 7
    %2 3
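
    The abstract describes mapping phonemes to measured 3D radiation patterns and applying them, together with head orientation and distance, in a real-time rendering chain. The following is a minimal illustrative sketch of that idea only; it is not the authors' implementation, and the DIRECTIVITY table, radiation_gain, and render_frame names, as well as all pattern values, are hypothetical placeholders.

    import numpy as np

    # Hypothetical phoneme-dependent radiation patterns: relative gain versus
    # angle off the talker's facing direction (degrees). Placeholder values,
    # not the measured data described in the abstract.
    DIRECTIVITY = {
        "a": {0: 1.00, 45: 0.85, 90: 0.60, 135: 0.45, 180: 0.40},
        "s": {0: 1.00, 45: 0.70, 90: 0.35, 135: 0.20, 180: 0.15},
    }

    def radiation_gain(phoneme, angle_deg):
        # Fold the angle into [0, 180] and interpolate the stored pattern.
        a = abs(angle_deg) % 360.0
        a = min(a, 360.0 - a)
        pattern = DIRECTIVITY.get(phoneme, DIRECTIVITY["a"])
        angles = sorted(pattern)
        gains = [pattern[k] for k in angles]
        return float(np.interp(a, angles, gains))

    def render_frame(frame, phoneme, head_azimuth_deg, distance_m):
        # Scale one mono TTS frame by orientation-dependent directivity and a
        # simple 1/r distance law; the room-acoustics model is omitted here.
        gain = radiation_gain(phoneme, head_azimuth_deg) / max(distance_m, 0.1)
        return frame * gain

    # Example: one 10 ms frame (44.1 kHz) of /s/ with the agent facing 90 degrees away.
    frame = np.random.randn(441).astype(np.float32)
    out = render_frame(frame, "s", head_azimuth_deg=90.0, distance_m=2.5)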
