Corpus-based concatenative synthesis (CBCS) is a recent sound synthesis method based on descriptor analysis of any number of existing or live-recorded sounds, and on synthesis by selecting from the database the sound segments that best match sound characteristics given by the user. While previous research focused mainly on the analysis and synthesis methods and on handling the corpus, current research increasingly turns towards how expert musicians, designers, the general public, or children can interact efficiently and creatively with a corpus of sounds. We first look at gestural control of navigation through the sound space, and at how it can be driven by audio input to transform sound in surprising ways, or to transcribe and re-orchestrate environmental sound. We then consider its use in live performance, especially in an improvisation setting between an instrumental performer and a CBCS performer, where recording the corpus live from the instrument creates a stronger and more direct coupling between them. Finally, we discuss sound installations that open up discovery of, and interaction with, rich sound corpora to the general public and children.
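The core selection step described above can be illustrated by a minimal sketch: each corpus unit carries a small descriptor vector, and the unit closest to a user-given target is selected. All names, values, and descriptor choices here are hypothetical, and the distance metric is a plain Euclidean nearest-neighbour search rather than any particular CBCS implementation.

```python
import math

# Hypothetical corpus: each unit is a (segment_id, descriptor_vector) pair,
# where the descriptors might be e.g. [pitch (Hz), loudness (dB), centroid (Hz)].
corpus = [
    ("unit_a", [220.0, -12.0, 1500.0]),
    ("unit_b", [440.0, -6.0, 3000.0]),
    ("unit_c", [330.0, -9.0, 2200.0]),
]

def select_unit(target, corpus):
    """Return the corpus unit whose descriptor vector is closest
    to the target descriptors (Euclidean distance)."""
    def dist(desc):
        return math.sqrt(sum((d - t) ** 2 for d, t in zip(desc, target)))
    return min(corpus, key=lambda unit: dist(unit[1]))

# A target position in descriptor space, e.g. mapped from a gesture:
best_id, best_desc = select_unit([300.0, -10.0, 2000.0], corpus)
print(best_id)  # the selected segment would then be played back
```

In practice the descriptor dimensions would be normalised (and possibly weighted) before computing distances, so that a large-range dimension such as spectral centroid does not dominate the selection.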