IRCAM - Centre Pompidou Server © IRCAM - CENTRE POMPIDOU 1996-2005.
All rights reserved for all countries.

The Development of Digital Techniques: A Turning Point for Electronic Music?

Jean-Claude Risset

Rapport Ircam 9/78, 1978
Copyright © Ircam - Centre Georges-Pompidou 1978, 1999

Electronic and computer music

Since World War II, the new music has been considerably influenced by the rapid progress of recording technology. Concrete and electronic music have vastly increased the sound material at the disposal of the composer, and they have already had a large impact upon composing ideas (even for composers calling for traditional instruments). But one may notice that electronic music has developed somewhat like a separate branch of music. Certainly a number of composers use skillfully, and with apparent satisfaction, the resources electronic music offers for their creative skills; other composers call upon these resources occasionally. Yet, significantly enough, composers who had placed great hopes in these new sonic resources were soon disillusioned. For instance Ligeti, after working hard at producing Artikulation, an electronic piece with quite rigorous specifications, quit electronic music, which did not offer him the unbounded possibilities he expected.

If I may somewhat simplify the picture here, I think one may describe the limitations of concrete and electronic music in the following way. "Musique concrète" makes any recorded sound available for musical composition: it thus provides a considerable variety of natural sounds with complex structures -- but these sounds can only be transformed in ways that are rudimentary by comparison with the richness of the material; this brings the danger of capitalizing on sound effects and privileging an aesthetics of collage. Electronic music, on the other hand, affords a precise control of the structure of electronic sounds -- of very simple and rather dull sounds, which can be enriched, but through manipulations which to a large extent ruin the control the composer can exert upon them. Of course, these two processes are often intertwined: natural and synthetic sounds can be mixed together, and live electronic music blends instrumental gestures with recorded or electronic sound material. However, I think the dilemma between richness of sound and refinement of control remains even in these more complex situations.

I would like to contend here that one of the less explored and most promising avenues for music today is opened by the new possibilities of controlling sound structure, and that these possibilities can be extended mostly through digital synthesis and processing of sound, which permits compositional control to be applied in the elaboration of the sound structure. It may look artificial to call for a tool -- the digital computer -- that was not developed for musical purposes. But one should remember that the tape recorder and the electronic oscillator were developed for purposes of recording or measurement, not for the creation of music: the digital computer is a much more general tool, which can function in many different ways depending upon its implementation and programming. In fact many benefits of the computer can be extended to more specific digital systems. The precision and flexibility of the computer make it possible to realize sound structures which electronic music failed to achieve. The use of digital techniques in the realm of sound affords sound material of unprecedented ductility: it permits one to get closer to a dream that has been worded in slightly different ways by Varèse, Stokowski, Cage and Berio: not only composing with sounds, but composing the sounds themselves. Ligeti, who of course pursues this direction, is now very interested in the computer's sonic resources, after having heard Chowning's computer experiments on illusory spaces.

Direct Digital Synthesis and its Problems

The most general process of sound synthesis is direct digital synthesis. In this method, developed by Mathews starting in 1958, the computer, so to speak, directly controls the loudspeaker -- through a digital-to-analog converter. A single converter can produce a rich polyphonic texture, just as a single record groove can render an orchestral polyphony. The computer computes so-called samples of the waveform, i.e. it computes the values of the corresponding time function at closely and equally spaced time intervals. The process potentially permits the production of any waveform: direct digital synthesis is the most general sound synthesis process available.
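In today's terms, the sample computation described above can be sketched in a few lines of code. This is only an illustrative sketch, not a description of the historical systems: the sampling rate, tone parameters and file name are all hypothetical, and a modern sound card plays the role of the digital-to-analog converter.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second (an illustrative modern rate)

def synthesize(frequency, duration, amplitude=0.5):
    """Compute the samples of a sine waveform, i.e. the values of the
    time function at closely and equally spaced time intervals."""
    n_samples = int(SAMPLE_RATE * duration)
    return [amplitude * math.sin(2 * math.pi * frequency * i / SAMPLE_RATE)
            for i in range(n_samples)]

def write_wav(filename, samples):
    """Store the computed samples; on playback, a digital-to-analog
    converter turns them back into a continuous waveform."""
    with wave.open(filename, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)           # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        frames = b"".join(struct.pack("<h", int(s * 32767)) for s in samples)
        f.writeframes(frames)

samples = synthesize(440.0, 1.0)    # one second of a 440 Hz tone
write_wav("tone.wav", samples)
```

Since any sequence of sample values may be emitted, any waveform whatever can in principle be produced this way, which is the sense in which the process is fully general.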

To use it efficiently, however, several problems must be solved. First, one needs a convenient program to instruct the computer to compute the desired sounds. Programs like Music IV and Music V enable the user to produce a wide variety of sounds, even very complex ones, provided their physical structure is thoroughly specified. (A number of variants of these programs exist, like Music 4B, Music 7, Music 10, Music 11, Music 360.) The program does not propose ready-made sound synthesis procedures, but rather provides building blocks which the user may assemble as he wishes to specify such procedures. The user may add new blocks and additional subroutines to tailor the program to his specific compositional needs.

Second, using such a program, the composer must know how to describe the sounds he wishes to generate in terms of these subroutines. By contrast, a composer for conventional orchestras knows the sounds of the instruments from long experience and training, and hence has little need to know how they work physically. But direct digital synthesis acutely raises this problem, which we call the psychoacoustic problem: providing an adequate physical description of interesting timbres.

Direct digital synthesis demands very much from the computer, which for each second of sound must put out tens of thousands of numbers computed according to the prescribed recipes. Hence it usually does not work in "real time", that is, it takes the computer more than one second to generate the samples corresponding to one second of sound. So the physical description of the desired sounds must be provided in advance; the user cannot hear the sound while he is varying parameters and manipulating knobs, as he can in electronic music: he has to resort to some psychoacoustic knowledge relating the physical parameters of a sound to its aural effect.

The first users of direct digital synthesis were immediately confronted with this fundamental problem. Even familiar sounds, such as those of traditional instruments, are not as easy to imitate as one might think: early attempts to imitate sounds using descriptions from classical acoustic treatises failed, pointing out the inadequacy of these descriptions and the need for more detailed and relevant data. One also lacked information about how to give synthetic sounds liveliness, identity, personality. Hence the initial outcome of computer music was somewhat disappointing: the sounds produced were rather dull, and the new possibilities did not seem to live up to expectations. Moreover the first installations were difficult to access. The musician had to find a way to be admitted to a large computer center, where music was often considered a rather eccentric activity. The musician also had to struggle somewhat with the conventions of the programming languages. This proved not to be too problematic: when the musician was motivated, he could in most cases cope very well with these conventions -- after all, it takes a well-organized mind to understand and practice harmony, counterpoint or fugue. But, understandably enough, the delay in getting the sound back and the disappointing sonic results did not help motivate many musicians.

Real-time and Hybrid Sound Synthesis Systems

Real-time operation permits the user to "tune up" the parameters of the sound while listening, and to some extent frees him from the need for psychoacoustic knowledge: so, at first glance, the way out of this apparent dead end seems to be the introduction of real-time operation in computer music. This was, I believe, the ambition of the Stockholm E.M.S. Computer Studio when it was conceived more than ten years ago. To make real-time operation possible, it is necessary to relieve the computer of dealing with all the details of the sound. Instead of generating the sound, the computer can be given the task of controlling analog sound-generating equipment -- if this equipment is stable enough, it can in principle be restored to the same state by just getting the same controlling numbers out of the computer. The control signals vary much more slowly than the wiggles of the sound waveshape, so the burden on the computer is much smaller, and even small computers can cope with the demands of real-time operation. Such systems have been developed in several places, especially, in addition to Stockholm, in Toronto (Gabura and Ciamaga), London (Zinovieff and Grogono), and Murray Hill (Mathews and Moore). In the Murray Hill system, called Groove, special thought has been given to the question of which specifications should (or should not) be made in real time. In particular, the system enables the user to exert over the sound output a control similar to that which a conductor can exert over an orchestra, hence to take advantage of aural feedback for introducing performance nuance in real time -- this is not done through numbers, but through gestures captured through keyboards, knobs and joysticks. This possibility, as Boulez stresses, is essential if the electronic musical work is to be more than a mere film of acoustic events, if electronic music is to be variable rather than fixed, living rather than embalmed.
The implications extend beyond the field of performance: the contributions of composition and performance could merge; the skill of composition could also be applied to the preparation of a set of multiple musical possibilities, out of which a specific sonic realization could be selected only at the time of listening -- decided by the composer, by performers, or by listeners.

Real-time operation brings an immediacy that is hard to resist. Direct digital synthesis of sound has often been painfully slow, and has not produced much output. Moreover, real-time possibilities can be helpful for trial and error, for tuning a parameter by ear to its best musical value, and for acquiring the required psychoacoustic know-how in a stimulating situation. However, composers must beware of becoming addicted to real-time composition, which in fact means improvisation with only those resources that have been set up ahead of the real-time operation. Systems which work only in real time demand skill in instrument-like improvisation and performance, unless they resort to automation; but then, in contrast with the scarcity of output in direct digital synthesis, it becomes a problem to impose meaningful control over the miles of sound going by. For many, the interest of cascading sequences and of controlling random excursions of parameters is ephemeral. If serious compositional work is to be done on the sound structure, it should not be under the pressure of real-time performance. After all, composers do not have orchestras available to experiment with orchestration in near real time.

I am playing the devil's advocate: clearly real-time operation has a lot to offer -- but it can also be a mixed blessing. Manufacturing sounds through empirical manipulation should not keep the musician from doing more enduring sonological and compositional research. And the bride is not in all respects too beautiful. Analog equipment is inferior to digital equipment, and it imposes its limitations on the whole hybrid system. It is quite difficult to build very stable analog equipment, so reproducibility is problematic; hybrid systems can require painful calibration. Moreover the power of analog equipment is limited: if one wants more than 24 voices (which can often be very useful musically), one needs more than 24 physical oscillators -- and this is already an expensive battery of oscillators.

Micro-electronics and Digital Synthesizers

So, despite their promise, real-time systems are still behind direct digital synthesis in generality and reproducibility: this makes them inappropriate for certain types of research or composition. The sound synthesizer, rather than the computer, is mainly responsible for these limitations. Fortunately, this situation is changing rapidly, owing to the considerable progress of microelectronics and digital special-purpose circuitry. Digital synthesizers can now be built which afford the precision and reproducibility of computer operation. Such synthesizers can also be much more powerful than analog synthesizers, in that a single physical digital oscillator can be time-shared and serve as a number of independent virtual oscillators. Such digital synthesizers relieve the computer of the burden of computing all the details of the sound, while keeping reproducibility and a good deal of generality. The first significant achievement in this field was the digital synthesizer developed at Dartmouth College by Alonso, Appleton and Jones, a quite portable system with its controlling computer, providing 16 independent oscillators, which can be hooked together to perform Chowning's frequency modulation. (This is a powerful technique for producing complex spectra through a special use of frequency modulation -- a technique which can only be satisfactorily implemented with digital synthesis.)
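The principle of Chowning's technique can be sketched as follows: the phase of a carrier oscillator is modulated by a second oscillator, and the modulation index governs the richness of the resulting spectrum. The carrier and modulator frequencies and the index below are illustrative values, not drawn from the Dartmouth system.

```python
import math

SAMPLE_RATE = 44100

def fm_sample(t, fc, fm, index, amplitude=1.0):
    """One sample of simple frequency modulation: a sine carrier of
    frequency fc whose phase is modulated, at frequency fm, with a
    depth set by the modulation index."""
    return amplitude * math.sin(2 * math.pi * fc * t
                                + index * math.sin(2 * math.pi * fm * t))

# One second of a bell-like tone: a non-integer carrier/modulator
# ratio yields an inharmonic spectrum (all values are illustrative).
tone = [fm_sample(i / SAMPLE_RATE, fc=200.0, fm=280.0, index=5.0)
        for i in range(SAMPLE_RATE)]
```

A single formula thus produces a complex spectrum from only a few parameters, which is why the technique maps so economically onto time-shared digital oscillators.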

The field of digital synthesizers is expanding rapidly and considerably. At IRCAM, Di Giugno has built prototypes of powerful digital synthesizers, the so-called 4A and 4B machines (the 4B machine in collaboration with Alles of Bell Laboratories). These synthesizers are controlled by a minicomputer, so that various parameters of the sound can be either specified in advance or changed in real time through the motion of keys, knobs, joysticks or lightpens. The synthesizers can even be controlled by an "LSI" (large scale integration) microcomputer, so that the whole system can be easily transportable. (In fact there is a plan to demonstrate the system in Stockholm at the Computer Music Meeting of May 1978.) The so-called 4A synthesizer provides up to 256 oscillators in real time -- which permits the production of rich textures or powerful chorus effects. Actually the computer has a hard time delivering data fast enough to control changes in amplitude or frequency independently for many of these voices: in the 4B machine, additional hardware has been designed to help the computer with amplitude and envelope control. (The 4B synthesizer also has richer connection possibilities; in particular, it can perform Chowning's frequency modulation.)

These developments are extremely promising: they help bring together the generality and power of direct digital synthesis with appealing real-time possibilities. From the economic standpoint, one can envision that powerful music synthesis systems will soon be available at a low price, perhaps cheaper than a single 4-track professional tape recorder. So far it has only been possible to use the computer for music in large -- and now not so large -- institutions: but this new economic situation will completely change the status of digital electronic music, by making digital systems much more accessible, to the extent that they will even be private tools for the independent composer.

In the long range, this expansion of digital techniques in music will probably have far-reaching consequences beyond the professional music scene. Mathews' experiments with Groove indicate that it is possible to develop digital systems which can be used in a variety of ways, ranging from a record-player situation (where the "performer" has minimal control) to an instrumental situation (where the performer has total control but is also subject to considerable demands). Between these two extremes one may have many different types of control -- like the situation of the conductor, who does not produce all the notes, but who hopefully exerts significant control. Such a system could offer genuine musical responsibilities to the user without necessarily demanding a professional technique from him. This is still utopia, but making such systems available to the public (which is already economically conceivable) might revive contemporary musical practice; it would help fill the gap between amateur instrumentalists, who cannot respond to the technical demands of contemporary music, and this music, which they presently do not relate to their musical practice. In this utopia, professional musicians could propose pieces to be played as such, or to be completed or assembled in a variety of ways, and there would be a continuous gamut of degrees of initiative or responsibility which the listener-performer could take. Needless to say, the design and implementation of such systems will take considerable ingenuity and know-how -- in fact it depends upon electronic, psychoacoustic and musical research.

Some results

It has been said that contemporary music has achieved more in terms of ideas than in terms of masterworks, and even though one may argue and cite pieces that can indeed be considered masterworks, there is some truth in it. (As Bennett has shown, this has also happened in the past, at certain stages of the history of music.) Perhaps the same could be said of computer music. Although computer pieces are less numerous than ordinary electronic pieces, there is now a number of them; most of them probably have little conceptual or musical value, as is the case for instrumental or electronic pieces. But some computer pieces significantly point toward new directions -- either obviously, or insidiously; sometimes, new domains are more clearly revealed through experiments than through pieces. I do not intend to make a critical review here: I shall rather try to indicate some processes, implemented either in actual computer pieces or in computer experiments, which appear to be of musical significance; the following enumeration is certainly incomplete -- and influenced by my own biases.

Compositional procedures can be directly realized in sound. Tenney was probably the first to use the computer to make random selections of sound parameters within ranges prescribed by the composer: the score thus indicates only the outline, and the computer fills in the details and yields the sonic result immediately; with a different "seed", one will get different (quasi-)random selections obeying the same constraints, hence pieces that are globally similar but completely different in detail. (Similar processes have been used by Strang and Koenig -- and of course by Xenakis in instrumental pieces like ST/10.)
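This procedure can be sketched as follows; the parameter names and ranges are hypothetical stand-ins for a composer's prescriptions, and the role of the seed is exactly as described above.

```python
import random

# Composer-prescribed ranges for each parameter (illustrative values).
RANGES = {
    "frequency": (220.0, 880.0),   # Hz
    "duration":  (0.1, 1.0),       # seconds
    "amplitude": (0.2, 0.8),
}

def fill_in_details(n_notes, seed):
    """The score gives only the outline (the ranges); the computer
    fills in the details with (quasi-)random selections. The same
    seed reproduces the same result; a different seed yields a piece
    globally similar but different in every detail."""
    rng = random.Random(seed)
    return [{name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}
            for _ in range(n_notes)]

piece_a = fill_in_details(20, seed=1)
piece_b = fill_in_details(20, seed=1)   # identical to piece_a
piece_c = fill_in_details(20, seed=2)   # same constraints, new details
```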

Mathews has developed a number of compositional algorithms: some of these achieve intriguing transitions between two themes, including fascinating rhythmical transformations that could hardly be rendered properly through instrumental performance. Other algorithms, such as the multiplication of one theme by another, point toward nesting processes. In my piece Mutations, I use chords of chords: a given proportion of frequencies is used to define a chord, but also to define the (in)harmonic content of the tones from which the chord is made up. In his piece Stria, Chowning uses more complex recursive, self-nested procedures. The previous examples seem to relate to composition rather than to sound manipulation: but they can intervene at the level of microcomposition as well. In fact any language for the description of sounds (e.g. Music V) privileges certain types of manipulation and suggests trying them first. Additional facilities can be added in user front-end languages, such as Smith's Score, which in particular provides nice ways to manipulate motives (repetitions, transpositions, inversions...). Advanced musical input languages might bring an extension of the structural role of notation in music, which should interact in a profitable way with the new and highly ductile sonic resources.

The imitation of musical instruments has been an important step in musical psychoacoustic research: it has helped in understanding what makes instrumental sounds identifiable and interesting. Very realistic syntheses have been accomplished, like that of trumpet sounds by Morrill; a most impressive synthesis of the singing voice has been performed by Sundberg in Stockholm. The work of Morrill, incidentally, has drawn upon the work of several people (especially Mathews, Chowning and myself), which indicates that the computer facilitates cooperation between researchers, even working thousands of miles -- or years -- apart. Imitating instruments shows that a timbre is often characterized by a law of variation, some specific correlation between several physical parameters, rather than by a fixed spectrum or some immediate invariant of that sort; it also demonstrates the aural importance of "accidents", deviations from a too accurate mathematical synthesis: such accidents can be computer-simulated, which indicates that computer synthesis is not necessarily cursed with ice-cold perfection.

Imitation of instruments is not only for research's sake. For instance, there is interest in mixed works combining live performers and tape sounds -- on account of the visual interest of the presentation, but also for good acoustical and musical reasons. Now mastering the computer synthesis of instrument-like sounds permits the composer to develop subtle relationships between the tape and the instruments: the synthetic and the instrumental sounds can be controlled with comparable refinement, even though the types of control used can be different. This intertwining of synthetic and instrumental sounds occurs in my piece Dialogues, and in Morrill's Studies for trumpet and computer -- where the computer sound extrapolates the trumpet beyond its range. As mentioned below, the computer also permits interpolation between instrumental timbres.

Paradoxical effects can be obtained thanks to the precision and flexibility inherent in computer synthesis. Shepard produced a sequence of 12 tones in chromatic succession which seem to rise indefinitely in pitch when they are repeated. I extended this paradox and generated, e.g., ever-ascending or descending glissandi, and sounds going down the scale while at the same time getting shriller. These paradoxes are not merely "truquages", artificial curiosities: they reflect the structure of our pitch judgments. Pitch appears to comprise a focalized aspect, related to pitch class, and a distributed aspect, related to spectrum, hence to timbre -- and the paradoxes are obtained by independently controlling the physical counterparts of these attributes, which are normally correlated. I have even manufactured a sound which goes down in pitch for most listeners when its frequencies are doubled, i.e. when one doubles the speed of the tape recorder on which it is played: this shows how misleading mere intuition can be in predicting the effect of simple transformations on unusual sounds. Similarly, Knowlton synthesized beats which apparently speed up forever. I could extend this paradox to produce, e.g., a sound going down the scale while getting shriller, and also slowing down its beat while at the same time increasing the number of beats per second. Needless to say, such oddities can be of musical relevance.
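The construction behind the Shepard paradox -- octave-spaced components (fixing the focalized aspect, pitch class) under a fixed spectral envelope (fixing the distributed aspect) -- can be sketched as follows; the frequency region and the bell-shaped envelope are illustrative choices.

```python
import math

LOW, OCTAVES = 27.5, 8   # frequency region covered (illustrative)

def shepard_components(pitch_class):
    """Components of one Shepard tone: octave-spaced frequencies under
    a fixed bell-shaped amplitude envelope. pitch_class is a fraction
    of an octave in [0, 1)."""
    components = []
    for octave in range(OCTAVES):
        freq = LOW * 2 ** (octave + pitch_class)
        # position of this component within the whole region, 0..1
        x = (octave + pitch_class) / OCTAVES
        amp = math.sin(math.pi * x) ** 2   # fades out at both edges
        components.append((freq, amp))
    return components

# Stepping the pitch class through 12 chromatic steps and repeating:
# each step is heard as a rise, yet after 12 steps the set of
# components is (nearly) the same as at the start.
scale = [shepard_components(step / 12) for step in range(12)]
```

Because the envelope stays fixed while the pitch classes circulate, the focalized aspect keeps rising while the distributed aspect never moves -- the independent control of normally correlated attributes described above.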

Chowning has been highly successful in producing quadraphonic sounds that give a powerful illusion of elaborate movements of sounds through space. This is done through careful control of auditory cues for angle, distance and speed -- a technique for projecting sounds into space that is more refined, and far more economical, than the use of an array of multiple speakers. The illusory space thus created is striking, as exemplified in pieces such as Chowning's Turenas and Rush's Traveling Music.

Digital synthesis permits the manufacture of precisely controlled inharmonic tones. Sustained instrumental tones are usually harmonic -- apart from the ill-defined multiphonics of wind instruments. Now inharmonic components may fuse into tones which may have a fairly clear dominant pitch -- or they may instead split into fluid textures, if the temporal envelopes of the components are desynchronized (I used both instances in my piece Inharmonique). When fusion occurs, the interaction of the components of two (or more) such tones can give rise to privileged "consonant" intervals that are not the octave and the fifth: as indicated by Pierce in 1966 and exemplified by Chowning in his piece Stria, an intriguing relation exists between the inner structure of inharmonic sounds -- which can be arbitrarily composed -- and the melodic and harmonic relations between such sounds. With digital synthesis, one is free to compose such sounds as one desires: thus a wide and promising field is open for exploration.
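The fusion-versus-splitting behavior can be sketched as follows; the partial ratios, envelope shape and onset times are illustrative, not those of Inharmonique. Synchronized envelopes make the components fuse into one tone; staggered onsets desynchronize them into a texture.

```python
import math

SR = 44100
# An arbitrarily composed inharmonic spectrum: (frequency ratio, amplitude).
PARTIALS = [(1.0, 1.0), (2.1, 0.7), (3.7, 0.5), (5.3, 0.3)]

def envelope(t, onset, length):
    """Simple linear decay starting at `onset`, lasting `length`."""
    if t < onset or t > onset + length:
        return 0.0
    return 1.0 - (t - onset) / length

def inharmonic_tone(f0, duration, onsets):
    """Sum of inharmonic partials. With identical onsets the envelopes
    are synchronized and the components fuse; with staggered onsets
    they split into a fluid texture."""
    out = []
    for i in range(int(SR * duration)):
        t = i / SR
        s = sum(a * envelope(t, onset, duration - onset)
                * math.sin(2 * math.pi * f0 * r * t)
                for (r, a), onset in zip(PARTIALS, onsets))
        out.append(s)
    return out

fused = inharmonic_tone(200.0, 2.0, onsets=[0.0] * 4)          # one tone
texture = inharmonic_tone(200.0, 2.0, onsets=[0.0, 0.3, 0.6, 0.9])
```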

Another exciting field is the digital processing of real sounds. This possibility has so far been restrained by its greed for computer time and storage space (the latter difficulty vanishes if the processing can be performed in real time, which at this point can still be a problem). The challenge here will be gaining mastery of highly complex sounds, controlling them in subtle, "internal" ways. This should often be attainable through refined analysis and resynthesis techniques, already exemplified by vocoders and predictive coding. Although much of this field still lies in the future, Moorer has realized, through digital recording and processing, interesting interactions between flute and speech sounds -- almost a talking flute; Decoust, in his piece Interphone, has used time functions extracted from a recorded voice to control synthetic sounds in a supple fashion; and Dodge has produced several musical compositions using speech synthesis programs.

In fact, digital synthesis and processing open unlimited timbral resources to the composer, to the extent that the composer is often disoriented when he faces this unbounded horizon. As Wessel puts it, "a timbre space that adequately represented the perceptual dissimilarities could conceivably serve as a kind of map that would provide navigational advice to the composer interested in structuring aspects of timbre". One can indeed propose geometric models of subjective timbral space, such that individual sounds are represented as points in this space: sounds judged very dissimilar are distant, and sounds judged similar are close. The models are not constructed arbitrarily, but elaborated by computer programs applying the so-called multidimensional scaling technique to the subjective judgments of similarity provided by listeners. The concept of a timbral map is important: Grey and Wessel have already shown that such models can indeed help explore timbral space and make predictions about the perception of timbral relations. Grey has been able to perform fascinating interpolations between instrumental timbres, e.g. between violin and oboe: one can imagine the games of timbral metamorphosis which this makes possible for the composer using instrument-like computer sounds. Incidentally, trajectories through timbral space are facilitated by synthesis techniques like Chowning's frequency modulation, whereby only a few significant parameters suffice to control the timbre. Wessel and his collaborators have shown that timbral maps permit one to predict which transition between two different timbres will be judged to provide the best analogy with a given timbral transition: this seems a little intricate, but what is involved is the counterpart of melodic transposition in Klangfarbenmelodies. Wessel has also shown that timbral maps help to predict when rhythms determined by timbral recurrence take precedence over conflicting rhythms determined by melodic recurrence.
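The navigational use of such a map can be sketched as follows. The two-dimensional coordinates below are hypothetical stand-ins for an actual multidimensional-scaling solution; the point is only to show how "the best analogy with a given timbral transition" becomes a geometric question once timbres are points.

```python
import math

# Hypothetical 2-D coordinates, standing in for the output of a
# multidimensional-scaling analysis of listeners' similarity judgments.
TIMBRE_MAP = {
    "violin":   (0.0, 0.0),
    "oboe":     (1.0, 0.2),
    "trumpet":  (0.9, 1.1),
    "flute":    (-0.1, 0.9),
    "clarinet": (0.5, 0.6),
}

def vector(a, b):
    """Displacement from timbre a to timbre b in the map."""
    (ax, ay), (bx, by) = TIMBRE_MAP[a], TIMBRE_MAP[b]
    return (bx - ax, by - ay)

def best_analogy(t1, t2, t3):
    """Find t4 so that the transition t3 -> t4 best parallels t1 -> t2:
    the timbral counterpart of transposing a melodic interval."""
    vx, vy = vector(t1, t2)
    x3, y3 = TIMBRE_MAP[t3]
    target = (x3 + vx, y3 + vy)

    def dist(name):
        x, y = TIMBRE_MAP[name]
        return math.hypot(x - target[0], y - target[1])

    return min((n for n in TIMBRE_MAP if n != t3), key=dist)
```

With these illustrative coordinates, the transition violin-to-oboe "transposed" to start from the flute lands on the trumpet; interpolation between two timbres corresponds simply to moving along the segment joining their points.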

The role of Research

It should be clear from the previous enumeration that even the advent of digital real-time possibilities does not solve all the problems: it does not dispense with the need for research. Digital sound synthesis initially grew rather slowly: it has come a long way thanks to research done in a few centers, especially Bell Laboratories and Stanford University.

We are now at a turning point, where the musical resources of digital sound processing have reached a useful critical mass, and where microelectronics can make these resources available to a much larger community of users. However, to take full advantage of the new possibilities opened by the computer for sound production and organization, one still needs a significant amount of research -- not only the personal research of individual composers, but a widespread effort to continue to conquer these possibilities and to understand them better. One must undertake a fundamental reconsideration of the relationship between material and composition. One must understand more deeply the actual virtualities of this seemingly neutral sonic material, which may in fact ultimately be constrained by perceptual and cognitive human characteristics. One must explore the relations of this intrinsic ductility to the possibilities of human control -- through notational systems as well as through body gestures. One must reconsider the conditions of presentation -- and even of production -- of music. One cannot be spared a fundamental reconsideration of inherited music theories and practices; it is essential to separate, in previous music systems, what is contingent from what is universal by revealing their explicit and implicit foundations within the realm of sound production, organization and perception, if one wants to fully assume the implications of the changes that have occurred in this realm.

This is an enormously ambitious program of research. In fact the above-mentioned reconsideration has already begun; it is being pursued in many places, with many different points of view. The "Institut de Recherche et de Coordination Acoustique/Musique" (I.R.C.A.M.) has recently begun its activity in Paris under the direction of Pierre Boulez: it can play an important part here. Certainly IRCAM is not concerned only with the digital processing of sound, as the names of its five departments indicate*: but it is clear that the activities implied by these names relate to the program sketched above. IRCAM is basically a research institute, not an institute for the production of music. Hopefully, a lot of music will come out of the research: yet Boulez has made it clear that, while research at IRCAM should keep in close contact with creators and even with the public, it should not be constrained by the pressure of production needs. Similarly, in this stage of reexamination of music theories and practices, it is vital not to be burdened with large-scale teaching responsibilities -- even though it is hoped that the work will eventually strongly influence musical curricula and pedagogical techniques.

Other institutions have different orientations, but their contributions to bringing about what may be a new era of music are also needed. In Stockholm, for example, while important basic research in musical acoustics has been accomplished at the Royal Institute of Technology, the E.M.S. electronic studio has raised considerable interest. The E.M.S. equipment is impressive, and minor additions would bring it up to date with the powerful new possibilities mentioned above. E.M.S. is more oriented than IRCAM toward the production of music, and this is certainly an essential and complementary aspect. It is hoped that many composers will engage in research and contribute to the design of their future tools -- but they should of course not stop composing. The keen musician's ear, his motivation, his ideas are invaluable, and his struggle with the new means is essential to shape them, if the research under way is to fulfill the dictates of musical necessity and imagination.

Music is not created in a vacuum. Here I have focused on a particular technology: but more generally the social context permeates every aspect of music. In particular it influences the possible situations for musical research, creation and diffusion -- but in a dialectical way. At present, most musical syntactic acquisitions are put in question; the new media have completely changed the conditions in which most people listen to music; and digital technology offers new and powerful potential. It is an important challenge to help avoid misusing this potential and to channel it for music's sake: it may be an historic opportunity for music to redefine itself and take an active part in shaping its own future.


* Instruments and voice: Vinko Globokar; electronic: Luciano Berio; computer: Jean-Claude Risset; diagonal: Gerald Bennett; pedagogy: Michel Decoust. Max Mathews is scientific advisor, Nicholas Snowman artistic manager, and Brigitte Marger public relations manager. It might be helpful to specify that the diagonal department will coordinate the work of the other departments and reexamine it with a critical and theoretical concern. Also, there will not be any large-scale educational activity, but rather research on new pedagogical possibilities, including the new kind of training that would best prepare musicians to use the new tools which science and technology will make available.
