This paper presents methods for defining and rendering Sound Level of Detail (SLOD) in audiographic scenes using corpus-based granular synthesis. We introduce three levels of detail for sound (individual events, statistical texture, background din) that together define a proximity profile around the listener. Smooth transitions between levels are ensured either by statistical modeling or by audio impostors. The activation of the three levels is controlled by invisible, editable profile objects mapped to presets of audio process parameters; these objects also serve to balance the CPU load across the different audio processes. We have tested this method on various virtual scenes such as crowds, rain, foliage, and traffic.
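The distance-based activation of the three levels can be illustrated with a minimal sketch. The function name, distance thresholds, and linear crossfade below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of distance-based SLOD selection: a sound source is
# rendered as individual events when near, statistical texture at mid range,
# and background din when far, with linear crossfades between levels.
# Thresholds and fade width are illustrative assumptions.

def slod_weights(distance, near=10.0, far=50.0, fade=5.0):
    """Return crossfade weights (events, texture, din) for a source at
    `distance` from the listener; weights are non-negative and sum to 1."""
    def ramp(d, edge):
        # Falls from 1 to 0 as d crosses [edge - fade/2, edge + fade/2].
        return min(1.0, max(0.0, (edge + fade / 2 - d) / fade))

    w_events = ramp(distance, near)            # dominant when close
    w_texture = ramp(distance, far) - w_events  # fills the middle band
    w_din = 1.0 - w_events - w_texture          # remainder goes to the din
    return (w_events, w_texture, w_din)
```

Such weights could drive both the level crossfade and the per-level synthesis budget, reflecting the abstract's point that the profile objects also balance CPU load between audio processes.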