In this paper, we aim to better understand how human mental representations are structured in the specific case of the perception of urban soundscapes. This question is traditionally studied using questionnaires, surveys, or categorization tasks followed by a lexical analysis. In contrast, we propose a new experimental approach: the subject is asked to manipulate sound events and textures within a dedicated computer environment in order to recreate two complex urban soundscapes, one ideal and the other not. Subjects have access to a sound data set designed and structured on perceptual grounds, and may alter the physical parameters of the selected sound samples. To achieve this, we use a digital audio environment together with a web audio interface for sound mining developed for the purposes of this study; the latter allows subjects to explore a sound database without resorting to text. By focusing on the auditory modality during the experimental process, this new paradigm potentially places the subject in a more realistic context and yields a more detailed description of the underlying mental representations. In light of the results presented in this paper, it also appears to reduce the bias introduced by relying solely on verbalization during the experiment. After presenting our experimental protocol and the computer environment on which it depends, we detail the results of a pilot study conducted with ten subjects, and stress the differences between this paradigm and one based on questionnaires.