Journées d'informatique musicale, June 1997, Lyon (France)
Copyright © JIM 1997
As mentioned above, a wide variety of environments already exists; they have proven their usefulness and have their particular characteristics and user groups. We want to continue using these environments. Implementing yet another environment for CAC or synthesis is therefore beyond the scope of this project.
Every environment has some good tool or feature that is not found in any other, yet there is rarely a way to make two environments work together and benefit from both. For example, one environment may offer a well-designed breakpoint function editor, but it might be impossible to use that editor when working in another environment. Making these environments communicate extends the possibilities found in either of them.
One might wonder how the control of sound synthesis is related to composition. The answer is: closely. The composer Marco Stroppa, in one of our conversations, said that the control of sound synthesis is an act of composition, because a sound has a meaning only if it is imagined within a composition. Since the control of sound synthesis is closely related to the use of timbre as a musical element in the composition [Lerdahl 1987], tools for the control of sound synthesis should be intimately related to CAC environments; to the extent that they cooperate closely, but do not completely depend upon each other. PatchWork, for example, includes a library to prepare data for Csound. But the data structures used by PatchWork are conceived uniquely for Csound. Converting this library for use with another synthesis kernel takes more than a hard day's work: it would require re-designing the library.
A last argument concerns all large, monolithic applications in general. Extending the application or replacing an existing functionality is, in most cases, impossible for the user: no replacing the breakpoint function editor with a better one, no adding a new signal processing function to the synthesis kernel. The user is forced to work with the application as it was designed by its author, even if it is desirable to add new features.
So what can we conclude from these observations? First, we need a strategy that allows our tools to be used from within the available environments; we then take advantage of existing software and guarantee our tools greater usability. Second, the architecture of our solution should be modular, and its interfaces public, so that users and programmers can add new tools or replace existing modules of the environment. Last, we attempt to make the environment independent of the computer platform. This urges us to design a clean architecture independent of platform-specific features, and thus to run less risk of our work becoming obsolete.
We have developed a framework, with the project name JavaMusic, that tries to fulfill these requirements. This framework can be viewed as a crossroads where different applications/modules meet and share data, and whose architecture allows the dynamic addition and removal of synthesis kernels, CAC environments and other tools.
We hope to achieve this goal with the introduction of several elements, presented below.
For the development of JavaMusic we have chosen the programming language Java. Java has the advantages of being a high-level, dynamic language which is freely available and widely used. The main reason for choosing Java, however, is its portability. Indeed, Java is an interpreted language and its specifications are independent of the local CPU. Furthermore, Java provides classes that abstract machine-dependent system components such as the file system, graphics, and networking.
In the next section we discuss the data structures, then, in section 3, we will describe the client-server architecture.
A Number represents a numerical value. Its interface defines two methods, one of which returns its value as an integer, the other as a floating point number.

public interface Number {
    public int intValue();
    public float floatValue();
}
A Nuplet is an abstraction of an array of Numbers. Its interface is defined as follows:

public interface Nuplet {
    public float value(int i);
    public int dimension();
}
The method dimension returns the length of the array. The method value returns the element with index i of the array. This structure can be used, for example, to store the waveform of an oscillator.
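As an illustration, a Nuplet might be implemented as a thin wrapper around a float array; the class name ArrayNuplet below is our own and does not appear in the framework.

public class ArrayNuplet implements Nuplet {
    // The wrapped array, e.g. one period of an oscillator's waveform.
    private float mValues[];

    public ArrayNuplet(float values[]) {
        mValues = values;
    }

    public float value(int i) {
        return mValues[i];
    }

    public int dimension() {
        return mValues.length;
    }
}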
A ControlSignal implements two methods:

public interface ControlSignal {
    public Nuplet value(float time);
    public int dimension();
}
ControlSignals are data structures which have a time dimension. They provide the necessary input values during the synthesis. The value of a ControlSignal, at any given time, is a Nuplet. Because the Nuplet has a fixed dimension greater than or equal to zero, we can use ControlSignals as multi-dimensional control structures; the Nuplet groups the value of every dimension into one object. Some examples of ControlSignals are the ConstantSignal and the BreakPointFunction.
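To give an idea of the simplest case, here is a possible sketch of the ConstantSignal: a ControlSignal whose value is the same Nuplet at every instant. The constructor and field names are our own assumptions.

public class ConstantSignal implements ControlSignal {
    private Nuplet mConstant;

    public ConstantSignal(Nuplet constant) {
        mConstant = constant;
    }

    // The value does not depend on time.
    public Nuplet value(float time) {
        return mConstant;
    }

    // The dimension of the signal is that of its Nuplet.
    public int dimension() {
        return mConstant.dimension();
    }
}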
public class Module {
    String mValue;
    byte mModuleType;
    byte mInputType[];
    byte mOutputType[];
}
A Module abstracts a function which takes a number of inputs, performs some calculation and outputs a number of results.
Connections will be used to link Modules and are directed from an output to an input.
public class Connection {
    Module mInModule;
    int mInputNum;
    Module mOutModule;
    int mOutputNum;
}
A Patch consists of a set of Modules and a set of Connections:
public class Patch {
    Vector mModules;
    Vector mConnections;
}
We want to present a formal description of a synthesis technique. We call such a description a VirtualInstrument [Battier 1995]. We assume that synthesis techniques can be established using unit building blocks [Mathews 1963, Risset 1993]. This leads us to define a VirtualInstrument as a Patch of Modules and Connections. Connections link Modules together to form a networked instrument.
A VirtualInstrument only describes a synthesis technique. The actual synthesis will be performed by a synthesis kernel. How the VirtualInstrument and its Modules will be converted to a set of signal processing functions within the kernel is briefly discussed in section 3.2.3.
We currently accept two basic types of Modules: param-Modules and mod-Modules. The mod-Module represents a primitive building block of the synthesis technique. When the VirtualInstrument is implemented by the synthesis kernel, the mod-Module is mapped onto one of the signal processing functions of the kernel. The value of the mod-Module indicates the name of this function.
The param-Module is used to hand over data to the synthesis kernel, both for the initialization and for the control of the synthesis. Its value is an index in an array of Parameters (see section 2.3.2).
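The following sketch shows how such a VirtualInstrument might be assembled from the Module, Connection and Patch classes listed above: a sine wave oscillator (a mod-Module) fed by two param-Modules for amplitude and frequency. The module type codes, the function name oscil and the direct field access (which assumes the example lives in the same package) are assumptions made for the illustration.

import java.util.Vector;

public class ExamplePatch {
    static final byte PARAM = 0;   // assumed code for a param-Module
    static final byte MOD   = 1;   // assumed code for a mod-Module

    public static Patch makeSinePatch() {
        Module amp = new Module();          // param-Module: index 0 in the Parameter array
        amp.mValue = "0";
        amp.mModuleType = PARAM;

        Module freq = new Module();         // param-Module: index 1 in the Parameter array
        freq.mValue = "1";
        freq.mModuleType = PARAM;

        Module osc = new Module();          // mod-Module mapped onto the kernel's oscillator
        osc.mValue = "oscil";
        osc.mModuleType = MOD;

        Connection c1 = new Connection();   // amp output 0 -> osc input 0
        c1.mOutModule = amp; c1.mOutputNum = 0;
        c1.mInModule = osc;  c1.mInputNum = 0;

        Connection c2 = new Connection();   // freq output 0 -> osc input 1
        c2.mOutModule = freq; c2.mOutputNum = 0;
        c2.mInModule = osc;   c2.mInputNum = 1;

        Patch patch = new Patch();
        patch.mModules = new Vector();
        patch.mConnections = new Vector();
        patch.mModules.addElement(amp);
        patch.mModules.addElement(freq);
        patch.mModules.addElement(osc);
        patch.mConnections.addElement(c1);
        patch.mConnections.addElement(c2);
        return patch;
    }
}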
A couple of remarks. First, a VirtualInstrument must satisfy a number of constraints. For example, some synthesis environments only manage acyclic structures or tree structures. We will thus need additional functions to test these conditions.
Second, the current definition of a VirtualInstrument is well suited to describe synthesis techniques that model the analog studio (signal models) [De Poli 1993]. It can also represent physical models that use a waveguide description. Physical models that use a modal description, however, are not well represented in this formalism: in these models a two-way interaction between the Modules is necessary, and the connections are expressed in terms of accesses at a certain location on the Module. Describing this interaction between Modules is not possible without complicating the current formalism, and we will leave the issue as is for now.
A third remark concerns the multidimensional ControlSignals. This concept is not new [Eckel and Gonzalez-Arroyo 1994], but we would like to underline its usefulness again. Consider a VirtualInstrument for additive synthesis. Using the unit generators currently found in most synthesis kernels, we can construct a VirtualInstrument that synthesizes one component using a sine wave oscillator controlled in frequency and in amplitude. If we now want additive synthesis with two components, we have to use two sine wave oscillators, and thus alter the initial description of the VirtualInstrument. What we need is a description of additive synthesis that is independent of the number of components. For this problem we propose the use of multidimensional ControlSignals: if the ControlSignals input to the sine wave oscillator have a dimension larger than one, we sum the resulting signals.
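As a sketch of such a multidimensional ControlSignal, the hypothetical class below carries the frequencies of n harmonic partials derived from a one-dimensional fundamental signal; the same oscillator Module can then serve for any number of components. Class and field names are ours.

public class HarmonicFrequencies implements ControlSignal {
    private ControlSignal mFundamental;   // one-dimensional signal: f0(t)
    private int mPartials;                // number of components n

    public HarmonicFrequencies(ControlSignal fundamental, int partials) {
        mFundamental = fundamental;
        mPartials = partials;
    }

    // At every instant the value is the Nuplet (f0, 2*f0, ..., n*f0).
    public Nuplet value(float time) {
        final float f0 = mFundamental.value(time).value(0);
        final int n = mPartials;
        return new Nuplet() {
            public float value(int i) { return (i + 1) * f0; }
            public int dimension() { return n; }
        };
    }

    public int dimension() {
        return mPartials;
    }
}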
Lastly, the value of a mod-Module currently depends on the synthesis kernel that will realize the VirtualInstrument. Most kernels, however, offer similar kinds of signal processing functions; modules for adding two sound signals, for example, can be found in every kernel. If we can determine the functions common to most kernels, and associate one value with every group of similar functions, we can construct VirtualInstruments independent of the underlying kernel. This project, which we have not started yet, will be of importance when we create our tools for the control of sound synthesis.
public class SoundObject {
    float mStart;
    float mDuration;
    Object mProcess;
}
A SoundObject is one element of the composition. It can represent a single note as well as a complex sound that evolves in time - in essence ``a single sound which might last several minutes with an inner life, and ... [has] the same function in the composition as an individual note once had.'' [Cott 1974]. SoundObjects can be seen as ``cells, with a birth, life and death'' [Grisey 1987].
The Process is a structure that determines the content of the SoundObject. It is the life, evolution, or force of a SoundObject. The Process can be one of two different kinds: it can be a SoundProcess or a Texture.
public class SoundProcess {
    VirtualInstrument mVirtualInstrument;
    Parameter mParameter[];
}
A SoundProcess is an object which represents a synthesis process, and which contains the recipe and the ingredients for this synthesis. A SoundProcess is the combination of a VirtualInstrument and an array of Parameters. The VirtualInstrument describes the synthesis algorithm. The Parameters serve as control structures for the synthesis, or as initialization values during the creation of the VirtualInstrument.
public class Texture {
    SoundObject mSoundObject[];
}
A Texture is a composed Process and contains a number of SoundObjects. The definitions of Textures and SoundObjects refer to each other: a SoundObject can refer to a Texture that itself can refer to a number of SoundObjects. However, we do not allow cyclic paths: a Texture cannot contain a SoundObject referring to the initial Texture. The composition can thus be organized in a tree structure [Rodet and Cointe 1984].
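A small sketch of this tree structure, with illustrative values only (the helper methods and direct field access, assuming the same package, are ours): a SoundObject whose Process is a nested Texture containing two further SoundObjects.

public class ExampleTexture {
    // Builds a SoundObject whose Process is a SoundProcess (a leaf of the tree).
    static SoundObject makeNote(float start, float duration, SoundProcess process) {
        SoundObject note = new SoundObject();
        note.mStart = start;
        note.mDuration = duration;
        note.mProcess = process;
        return note;
    }

    // Builds a SoundObject whose Process is a Texture (an interior node).
    static SoundObject makePhrase(SoundProcess p) {
        Texture inner = new Texture();
        inner.mSoundObject = new SoundObject[] {
            makeNote(0.0f, 1.0f, p),
            makeNote(1.0f, 2.0f, p)
        };
        SoundObject phrase = new SoundObject();
        phrase.mStart = 5.0f;
        phrase.mDuration = 3.0f;
        phrase.mProcess = inner;
        return phrase;
    }
}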
In the following sections, we will call the editor the client and the application the server.
A Resource has a name, a value, and an identification number. This identification number is unique such that at any given time there exists a bijection between the set of identification numbers and the set of Resources.
The Resources are organized hierarchically into a tree: a Resource can have any number of children, and one parent.
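The Resource class itself is not listed in this paper; a minimal sketch consistent with the description above could look as follows.

import java.util.Vector;

public class Resource {
    String mName;
    Object mValue;
    int mId;             // unique: one identification number per Resource at any given time
    Resource mParent;    // a single parent ...
    Vector mChildren;    // ... and any number of children
}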
We have adopted the following solution. Clients can create a Resource which includes a reference to themselves. This Resource can have children describing the services the client offers. We have defined a class Service for this description. This class holds the name of the service as well as the types of the arguments needed and the type of the result returned by the service.
We call a client that installs itself as a Resource and publishes a number of Services a Provider. A client can now inspect the available Resources and search for the appropriate Provider and Service. To make use of a service, the client sends the Provider a Request. A Request is an object containing the name and arguments for the service. On completion, the Provider returns the Request, including the result of the service, to the client.
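A hedged sketch of this exchange, with field and interface names of our own choosing (the text above only describes their roles):

public class Request {
    String mName;         // name of the requested Service
    Object mArguments[];  // arguments for the service
    Object mResult;       // filled in by the Provider on completion
}

public interface Provider {
    // Performs the named service and returns the Request with its result set.
    Request handle(Request request);
}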
This modular architecture provides the means of communication between different parts of an application without the need of knowing each other's interfaces beforehand. New Providers and new Services can be added easily using the dynamic instantiation Java offers. Which Providers are inserted into the environment, and which Services they implement, is open to the user. The environment becomes a collection of specialized Providers, each dealing with one specific aspect of composition [Oppenheim 1996].
Providers which we need in particular are editors for Textures, SoundProcesses, and VirtualInstruments, CAC environments, and synthesis kernels. Some of these, such as the editor for Textures, we will need to create ourselves. Others, such as the synthesis kernels, can be based on preexisting software. In the next paragraphs we comment on some of the currently existing Providers.
Communication between JavaMusic and Common Lisp is established over a TCP/IP connection and is enabled in both directions: Services in the JavaMusic environment can be called from within the Common Lisp environment, and clients, in their turn, can request the evaluation of a Lisp expression.
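As an illustration only, a client-side request for the evaluation of a Lisp expression might look like the sketch below; the host, port and line-oriented wire format are assumptions made for the example, since the paper does not specify the actual protocol.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

public class LispLink {
    public static String eval(String expression) throws IOException {
        Socket socket = new Socket("localhost", 9000);  // assumed address of the Lisp server
        try {
            PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(socket.getInputStream()));
            out.println(expression);   // send the expression, e.g. "(+ 1 2)"
            return in.readLine();      // read back its printed result
        } finally {
            socket.close();
        }
    }
}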
[Assayag 1996]
G. Assayag.
OpenMusic.
In Proceedings of the Int. Computer Music Conference, Hong
Kong, 1996. Int. Computer Music Association.
[Battier 1995]
M. Battier.
In Les Cahiers de l'Ircam: Instruments, Paris, France, 1995.
Editions Ircam - Centre Georges-Pompidou.
[Baird-Smith 1995]
A. Baird-Smith.
Distribution et Interprétation dans les Interfaces Homme-Machine.
PhD thesis, Université Paris VI, Paris, France, 1995.
[Cott 1974]
J. Cott.
Stockhausen: Conversations with the Composer.
Picador (Pan Books Ltd.), London, 1974.
ISBN 0-33024165-6.
[De Poli 1993]
G. De Poli.
Audio signal processing by computer.
In G. Haus, editor, Music Processing. Oxford University Press,
1993.
ISBN 0-19-816372-X.
[Eckel and Gonzalez-Arroyo 1994]
G. Eckel and R. Gonzalez-Arroyo.
Musically salient control abstractions for sound synthesis.
In Proceedings of the Int. Computer Music Conference, Aarhus,
Danmark, 1994. Int. Computer Music Association.
[Garton 1995]
B. Garton.
The CMIX Home Page, 1995.
[Grisey 1987]
G. Grisey.
Tempus ex machina: A composer's reflection on musical time.
Contemporary Music Review, 2:239-275, 1987.
Harwood Academic Publishers.
[Lerdahl 1987]
F. Lerdahl.
Timbral hierarchies.
Contemporary Music Review, 2:135-160, 1987.
Harwood Academic Publishers.
[Morrison and Adrien 1993]
J.D. Morrison and J.-M. Adrien.
Mosaic: A framework for modal synthesis.
Computer Music Journal, 17(1), Spring 1993.
MIT Press.
[Mathews 1963]
M.V. Mathews.
The digital computer as a musical instrument.
Science, 142:553-557, November 1963.
Am. Ass. for the Advancement of Science.
[McAdams 1994]
S. McAdams.
Audition: physiologie, perception et cognition.
In Traité de psychologie expérimentale 1, ed. Richelle,
Requin and Robert. Presses Universitaires de France, 1994.
[Miranda 1993]
E.R. Miranda.
From symbols to sound: Artificial intelligence investigation of sound
synthesis.
DAI Research Paper 640, Dept. of AI, University of Edinburgh, UK,
1993.
[Mathews, Moore and Risset 1974]
M.V. Mathews, F.R. Moore, and J.-C. Risset.
Computers and future music.
Science, 183:263-268, January 1974.
Am. Ass. for the Advancement of Science.
[McAdams, Winsberg, Donnadieu and De Soete 1995]
S. McAdams, S. Winsberg, S. Donnadieu, G. De Soete, and J. Krimphoff.
Perceptual scaling of synthesized musical timbres: Common dimensions,
specificities, and latent subject classes.
Psychol. Res., 58:117-192, 1995.
Springer-Verlag.
[Oppenheim 1996]
D. Oppenheim.
DMIX: A multi faceted environment for composing and performing
computer music.
Mathematics and Computers, 1996.
[Puckette 1991]
M. Puckette.
Combining events and signal processing in the Max graphical
programming environment.
Computer Music Journal, 15(3), Fall 1991.
MIT Press.
[Rodet and Cointe 1984]
X. Rodet and P. Cointe.
FORMES: Composition and scheduling of processes.
In C. Roads, editor, The Music Machine, Cambridge
Massachusetts, 1984. MIT Press.
[Risset 1993]
J.-C. Risset.
Synthèse et matériau sonore.
In Les Cahiers de l'Ircam: La Synthèse Sonore, Paris,
France, 1993. Editions Ircam - Centre Georges-Pompidou.
[Rolland Pachet 1995]
P.Y. Rolland and F. Pachet.
A framework for representing knowledge about synthesizer programming.
Publication LAFORIA, LAFORIA-IBP, Univ. Paris 6, France, 1995.
[Scaletti 1989]
C. Scaletti.
The Kyma/Platypus computer music workstation.
Computer Music Journal, 13(2):23-38, Summer 1989.
MIT Press.
[Taube 1991]
H. Taube.
Common music: A music composition language in Common Lisp and
CLOS.
Computer Music Journal, 15(2), Summer 1991.
MIT Press.
[Vercoe 1986]
B. Vercoe.
Csound: A Manual for the Audio Processing System and
Supporting Programs with Tutorials.
Media Lab, MIT, 1986.
[Wessel 1979]
D. Wessel.
Timbre space as a musical control structure.
Computer Music Journal, 3(2):45-52, 1979.
MIT Press.
[Wessel 1992]
D. Wessel.
Connectionist models for real-time control of synthesis and
compositional algorithms.
In Proceedings of the Int. Computer Music Conference, San Jose,
California, 1992. Int. Computer Music Association.