This paper discusses the use of ‘Pogany’, an affective anthropomorphic interface, for expressive music performance. To this end, the interface is equipped with a gesture-analysis module that operates on two levels: (a) a direct level, which computes measures capable of driving continuous musical parameters, and (b) an indirect level, which captures high-level information arising from ‘meaningful’ gestures. The real-time recognition module for hand gestures and postures is based on Hidden Markov Models (HMMs). After an overview of the interface, we describe the techniques used for gesture recognition and the decisions taken in mapping gestures to sound-synthesis parameters. To evaluate the system as an interface for musical expression, we conducted an experiment with real subjects; the results of this experiment are presented and analyzed.
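As a rough illustration of the HMM-based recognition scheme mentioned above (not the paper's actual implementation), one common setup trains one HMM per gesture class and assigns an observed feature sequence to the class whose model yields the highest likelihood, computed with the forward algorithm. All model names, parameters, and the discrete observation alphabet below are illustrative placeholders.

```python
import numpy as np

def logsumexp_cols(m):
    """Numerically stable log-sum-exp over the rows (axis 0) of a matrix."""
    mx = m.max(axis=0)
    return mx + np.log(np.exp(m - mx).sum(axis=0))

def log_forward(obs, log_pi, log_A, log_B):
    """Log-likelihood of a discrete observation sequence under an HMM,
    via the forward algorithm carried out in the log domain."""
    alpha = log_pi + log_B[:, obs[0]]          # initialize with first symbol
    for o in obs[1:]:
        # alpha[i] + log_A[i, j]: probability mass flowing from state i to j
        alpha = logsumexp_cols(alpha[:, None] + log_A) + log_B[:, o]
    return float(np.logaddexp.reduce(alpha))

def classify(obs, models):
    """Return the gesture label whose HMM best explains the sequence."""
    return max(models, key=lambda g: log_forward(obs, *models[g]))

# Two toy 2-state gesture models over a binary observation alphabet:
# 'wave' favours alternating symbols, 'hold' favours a constant symbol.
models = {
    "wave": (np.log([0.5, 0.5]),
             np.log([[0.1, 0.9], [0.9, 0.1]]),   # transitions: switch states
             np.log([[0.9, 0.1], [0.1, 0.9]])),  # emissions: state-dependent
    "hold": (np.log([0.5, 0.5]),
             np.log([[0.9, 0.1], [0.1, 0.9]]),   # transitions: stay put
             np.log([[0.9, 0.1], [0.9, 0.1]])),  # emissions: mostly symbol 0
}

print(classify([0, 1, 0, 1], models))  # alternating sequence -> "wave"
print(classify([0, 0, 0, 0], models))  # constant sequence    -> "hold"
```

In a real-time setting such as the one described here, the feature sequences would come from continuous hand-tracking measurements (typically vector-quantized or modelled with Gaussian emissions) rather than a binary alphabet, but the per-class likelihood comparison is the same.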