1. SYSTEMIC THINKING IN ELECTRONIC MUSIC
The idea of composition as a system has long been present in musical thought, and the artistic and scientific revolutions of the 20th century introduced a more precise notion of systemic thinking. With the advent of cybernetics and complexity science, feedback systems gained significant attention in music, taking different forms depending on how composers conceptualized musical structures and the organization of sound. Cybernetic approaches, in particular, became deeply intertwined with early electronic music: Here, the system is defined by formalized rules yet conceived as an interactive model generating emergent relationships within the sonic material.
In Europe, Werner Meyer-Eppler, Robert Beyer, and Herbert Eimert, key participants in the establishment of the WDR Electronic Music Studio in Cologne, were receptive to emerging scientific ideas in systems theory and organizational theory. Their approach to music theory emphasized treating sound as a structured set of parameters (total serialism) rather than following conventional compositional methods, investigating how musical form could emerge from the interactions and relationships between these parameters. The Cologne Studio quickly became the most influential in the world during the 1950s and 1960s, hosting many of the most important contemporary composers such as Franco Evangelisti, Karlheinz Stockhausen, György Ligeti, Roland Kayn, Herbert Brün, and Cornelius Cardew.
Electronic music encompassed the composition of sound itself and the exploration of its internal relationships. At the Cologne Studio, Karlheinz Stockhausen and Franco Evangelisti developed methods in which musical form arose naturally from the properties of the generated sound materials. Their pioneering works, Studie II (1954) and Incontri di Fasce Sonore (1956), are central for their early exploration of music as a system, where form emerges from the relationships and interactions among sound parameters.
In this context, the work of composer Roland Kayn is particularly significant. His cybernetic music project received its initial impulse in 1953 when, as a young student, he came into contact with the philosopher Max Bense 1. Immediately after that first encounter with Bense, Kayn met Herbert Eimert at the Cologne studio. Kayn was fascinated by the sonic possibilities of the new technologies, but considered the serialist aesthetic dominant at the studio at the time to be too restrictive. After this experience, over the next decade he focused mainly on instrumental composition and the formal application of cybernetic theories (Hernandez 2017).
In 1964, Franco Evangelisti, also inspired by the theories of cybernetics, established the Gruppo di Improvvisazione Nuova Consonanza (GINC), a collective of musicians oriented towards timbral and formal experimentation. In the same year Roland Kayn moved to Rome and joined the GINC. Kayn began to experiment intensively with self-regulated cybernetic systems based on feedback loops, no longer simply as formal models for instrumental compositions but also as signal-generating networks of analogue devices. Kayn left in 1968, dissatisfied with the group’s reliance upon clichés and inability to properly integrate cybernetic concepts into their improvisatory framework (Hernandez 2017).
In 1968, Walter Branchi joined the GINC, and the following year he composed Thin for cymbal and amplified tam-tam. In this work the system was no longer an abstract model but derived directly from the behavior of the instrument itself, extending Stockhausen’s ideas of live electronic music, as developed with Mikrophonie I (1964).
In Branchi's notion of sistema sonoro, for the first time, composition is no longer a structure imposed from the outside but arises from the nature of the system that produces it 2. In Thin, each sound manifests the physical system with its specific spectral and morphological characteristics. The notation itself is not conceived a priori but emerges from the analysis of the instrument’s behavior. The cymbal and the tam-tam, with their spectral peculiarities, determine the rules of musical writing, creating a conceptual feedback loop between the score and the properties of the instruments themselves.
It is important to note that in those same years, a similar conception of the instrument-space relationship can also be found in the first self-built instruments of David Tudor and Hugh Davies, where the system’s internal behavior is the work itself, shaping both the form of the piece and the sonic outcome. This approach could be understood in terms of musical instruments as epistemic tools (Magnusson 2019): instruments and technologies are not merely sound-producing tools but embody knowledge and ways of thinking about musical processes. The acoustical instruments of Thin themselves function in this manner, structuring compositional decisions and shaping the emergent musical outcome.
In Thin, Walter Branchi conceives the spatial setup as an extension of the instruments themselves. In this excerpt from the score, the correspondence between microphones and loudspeakers functions as a conceptual mapping: the diffusion points mirror the listening points, creating the impression of listening from within the cymbal's acoustic space. In this way, the instrument expands centrifugally, projecting its internal space into the room and physically incorporating the listeners into the instrument.
Branchi prescribes a minimum of four microphones placed around the suspended cymbal, each corresponding to one loudspeaker located at the corners of a square. The space acts as both resonant body and reflective medium, situating the listener inside the system’s dynamics. 3
While European composers in this period explored systemic relationships through formalized rules and compositional models, American composers such as John Cage developed approaches in which scores are themselves generative systems, emphasizing open processes rather than fixed outcomes.
In his Imaginary Landscape No. 4 (1951), Cage employed twelve radios, exploring the live electronic generation of sound. Earlier works, like Imaginary Landscape No. 1 (1939), combined recordings of constant and variable pitch frequencies with conventional percussion instruments. In some of Cage’s works from the early 1960s, the score explicitly mentions the term feedback, making the generation of feedback an integral part of live performance, as in Cartridge Music (1960) and Electronic Music for Piano (1964).
Similarly, David Tudor’s performance of Cage’s Variations II (1961) shows how contact microphones and the free vibration of strings create a complex system of feedback and resonance that cannot be fully predicted (Perloff 2001).
During the 1960s, numerous live electronic performance groups formed, often employing extensive use of feedback. Among these were the Sonic Arts Union (SAU, founded in 1966) with Gordon Mumma, Robert Ashley, Alvin Lucier, and David Behrman; Musica Elettronica Viva (founded in 1967 in Italy) with Frederic Rzewski, Allan Bryant, and Alvin Curran; and the ONCE Group (Michigan) with Robert Ashley, Gordon Mumma, and others.
In SAU works, particularly those by Gordon Mumma, feedback circuits in live electronic music became central to a musical practice where performance and system architecture are inseparable from the compositional act itself. Mumma describes his compositional process as follows:
My own electronic music equipment is designed as part of the process of composing my music. I am really like the composer who builds his own instruments, though most of my ‘instruments’ are inseparable from the compositions themselves [...] My decisions about electronic procedures, circuitry, and configurations are strongly influenced by the requirements of my profession as a music maker. This is one reason why I consider that my designing and building of circuits is really ‘composing.’ I am simply employing electronic technology in the achievement of my art.
This approach shifts the focus from the written score to performance and situatedness: performers become active agents within a complex network of living relations. For example, in Gordon Mumma’s Hornpipe (1967) for horn and Cybersonic console, the acoustic behavior of the space—its resonances, reflections, and frequency responses—is a compositional parameter, transforming the environment into part of the musical performance and dissolving the boundaries between notation, instrument, space, and performer.
With the availability of computers, new frontiers opened up for composers interested in systemic approaches. Particularly with real-time computing and human-computer interaction, the composer is no longer just an organizer of musical events, but now becomes a designer of processes that evolve in real time. This transformation established systemic thinking as a fundamental paradigm in contemporary music.
1.1 THE DEVELOPMENT OF CHAOTIC SYNTHESIS
Although early insights into chaotic behavior emerged in the early twentieth century, chaos theory was formally developed in the 1960s and 1970s, particularly through the work of Edward Norton Lorenz (Gleick 1987). Subsequently, chaos theory emerged as a framework for understanding deterministic yet unpredictable dynamics.
As in electronic music, a significant catalyst for this field was the advent of electronic computers. The ability to computationally model and analyze nonlinear dynamics provided new tools for exploring unpredictable behaviors in innumerable domains, which also proved useful for artistic exploration. As a result, scientific inquiry shifted from simply predicting and controlling systems to understanding their dynamics and relationships, allowing for a wide variety of practical applications.
Towards the late 1980s, the increasing power of computers enabled real-time sound synthesis and processing, initiating a new era in electronic music. This advancement made it possible to use computers not only for offline sound synthesis but also for live electronic music performance. Composers began experimenting with sound synthesis techniques based on iterated nonlinear functions (Di Scipio 1990, Truax 1990, Di Scipio 2001).
Between 1990 and the early 2000s, Agostino Di Scipio and others (Degazio 1993, Choi 1994a, Choi 1994b, Yadegari 2003) explored in depth the application of chaos theory to sound synthesis, numerically solving chaotic equations while also enabling real-time control of these processes. More recent research (Pirrò 2017, Mudd 2019, Sanfilippo 2021a) has focused on complex sound generation based on modified chaotic differential equations.
What distinguishes the latter contributions from earlier work is a focus on the autonomy of the system over large time scales. Unlike earlier approaches, which often involved manual or predefined controls to guide signal-generating chaotic behaviors, the works presented in the following sections aim at giving systems unsupervised means of evolving over longer time spans. In the context of the RITI project, the approach benefits from specially implemented control signal processing units (CSPUs), which allow the system to dynamically and coherently adapt to sound signals. On this basis, the RITI process shows a unique capacity to generate emergent sound behaviors while remaining responsive to a partner performer's input.
1.2 RITI (ROOM IS THE INSTRUMENT) AS A RESEARCH PLATFORM
RITI (Room Is The Instrument) is a live electronics composition that builds upon the real-time simulation of chaotic differential equations. At the core of RITI is a feedback delay network (FDN) that enables the coupling of complex sound generators (CSGs). The latter (discussed below in more detail) are further conditioned by filter banks modelled after specific sound materials (musical instruments or other), thus allowing the occurrence of modal resonances in the synthesized sound. During performance, control signal processing units extract parameters by resampling the oscillators' output and use these data to change the interaction of the coupled oscillators, thus modifying their behavior.
Since the composition relies on chaotic equations, it is inherently sensitive to initial conditions. In a performative situation, even the smallest deviations, whether introduced by the performer or by the physical environment, can drastically alter the system's trajectory and future behavior, with the flow of subsequent interactions producing unique outcomes based on minute variations. The interaction between the performer and the system defines the performance process and, ultimately, the global form of the piece. By adjusting the weights in the network, the performer shapes the chaotic dynamics, thus engaging with a fine balance of process determinacy and emergent sonic byproducts. As Pirrò notes (Pirrò 2017), the use of chaotic dynamical systems leads to a fundamentally different process than typically found in traditional sound synthesis: the sonic outcome is largely unknown a priori, and the composer formulates rules of evolution rather than specifying perceptual appearance. This approach resonates with the concept of non-standard synthesis techniques, as discussed by Döbereiner (Pirrò 2017), in which sound is defined by the process generating it rather than by perceptual expectations. An exploration of the behavior space generated by this process is necessary in order to construct the piece.
All signal processing operations in the RITI project are implemented in the FAUST programming language (Orlarey 2004, GRAME FAUST), which compiles into C++ and supports various platforms. This ensures cross-platform compatibility while maintaining the same software infrastructure.
2. COMPLEX SOUND GENERATORS
The concept of complex sound generators refers to sound synthesis modules that numerically solve chaotic equations in a controlled manner. This idea is elaborated and further developed in recent works, particularly by Dario Sanfilippo (Sanfilippo 2021a). Sanfilippo also introduces mathematical constraints that allow chaotic equations to be explored even in unstable regions. This is done, for example, with a mix of DC blockers and waveshapers, centering the signal on the x-axis and limiting it to the [-1, +1] range (most chaotic models would, by themselves, produce signal values outside this range). Sanfilippo terms a single module that meets such requirements a Complex Sound Generator (CSG).
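As an illustration of this constraint chain, a minimal sketch in Python (rather than FAUST) is given below; the pole radius and the choice of tanh as waveshaper are illustrative assumptions, not the exact design from (Sanfilippo 2021a).

```python
import math

# Minimal sketch of the constraint chain described above (Python rather
# than FAUST). The pole radius R and the use of tanh as the waveshaper
# are illustrative assumptions, not the paper's exact design.

def dc_blocker(x, R=0.995):
    """One-zero/one-pole DC blocker: y[n] = x[n] - x[n-1] + R*y[n-1]."""
    y = []
    x_prev = y_prev = 0.0
    for s in x:
        out = s - x_prev + R * y_prev
        x_prev, y_prev = s, out
        y.append(out)
    return y

def constrain(x):
    """Center the signal on the x-axis, then limit it to [-1, +1]."""
    return [math.tanh(s) for s in dc_blocker(x)]

# A signal with a large DC offset ends up centered and bounded.
raw = [2.0 + math.sin(0.1 * n) for n in range(10000)]
safe = constrain(raw)
```

The same two stages appear, in various forms, wherever a chaotic map must be kept inside the numerical range of an audio signal.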
In RITI, CSGs are built using Duffing oscillators. In addition to the constraints proposed by Sanfilippo, and following an approach outlined by Tom Mudd (Mudd 2019), bandpass filtering is introduced in the Duffing differential equation. As previously noted, this allows the system to manifest modal resonances reminiscent of a musical instrument's behavior. Overall, this is a consistent approach, as both physical modeling synthesis and chaotic oscillators require simulating the equations of nonlinear complex systems (Rodet 1999, Mudd 2019).
I started by solving the Duffing equation, as in (Mudd 2019). Other methods for the numerical solution of the Duffing oscillator can be found in (Degazio 1993, Yadegari 2003). The latter shows how to numerically solve various chaotic oscillators for sound synthesis using the fexpr~ audio-rate object that he and Miller Puckette designed for the Pure Data programming environment. Besides the Duffing oscillator, one may consider the numerical solutions of many other chaotic equations, some of which have already been implemented in the FAUST programming language (Sanfilippo 2021a).
2.1 THE MODIFIED DUFFING OSCILLATOR
The Duffing equation is a second-order nonlinear differential equation that describes the behavior of an oscillator with a cubic nonlinearity. Specifically, it is an ordinary differential equation (ODE), i.e., an equation involving derivatives of a function of a single variable (time):
\begin{equation}
\ddot{x} + \delta \dot{x} + \alpha x + \beta x^3 = \gamma \cos(\omega t) \tag{1}
\end{equation}
The Duffing equation was first studied by Georg Duffing in the 1910s, as a model for the motion of a mechanical system with a nonlinear spring. It can be found in his original book (Duffing, 1918).
Here the function \( x(t) \) in equation (1) represents the displacement of the oscillator as a function of time. The derivatives are:
\begin{equation}
\dot{x}(t) = \frac{dx}{dt}, \quad \ddot{x}(t) = \frac{d^2x}{dt^2} \tag{2}
\end{equation}
where \( \dot{x}(t) \) is the velocity and \( \ddot{x}(t) \) is the acceleration. The parameters in equation (1) are:
\begin{equation}
\delta, \quad \alpha, \quad \beta, \quad \gamma, \quad \omega \tag{3}
\end{equation}
where \( \delta \) is the damping coefficient, \( \alpha \) the linear stiffness, \( \beta \) the coefficient of the cubic nonlinearity, \( \gamma \) the amplitude of the periodic forcing, and \( \omega \) its angular frequency.
This system exhibits various behaviors, including harmonic oscillations and chaotic motion, depending on parameter values. The term \( \beta x^3 \) makes it a nonlinear system, while the presence of \( \gamma \cos(\omega t) \) indicates external forcing. A discrete version of equation (1), as found in (Mudd 2019), is given by:
\begin{equation}
X_{n+1} = Y_n \tag{4}
\end{equation}
\begin{equation}
Y_{n+1} = K Y_n - \alpha f (X_n^3) - B \cos(\omega T) \tag{5}
\end{equation}
Taken together, equations (4) and (5) represent a discrete mapping of the continuous-time Duffing equation. In FAUST, an implementation with fixed parameters follows the same structure (see Listing 1), solved numerically via Euler’s method 4.
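For reference, applying Euler's method directly to equation (1), with \( y = \dot{x} \) and time step \( \Delta t \), gives the standard update below; the mapping in equations (4) and (5) can be read as a compact variant of this scheme, with the step size and damping folded into the coefficients \( K \), \( \alpha \), and \( B \):
\begin{equation}
x_{n+1} = x_n + y_n \, \Delta t
\end{equation}
\begin{equation}
y_{n+1} = y_n + \left( -\delta y_n - \alpha x_n - \beta x_n^3 + \gamma \cos(\omega t_n) \right) \Delta t
\end{equation}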
Figures 1 and 2 illustrate the outputs of the Duffing oscillator generated by the FAUST implementation over the first 250 milliseconds at a sampling frequency of 192 kHz. Figure 1 shows the 2D phase-space trajectory, in which the position of the oscillator is plotted against its velocity, highlighting the nonlinear dynamics and the chaotic behavior of the system. The trajectory in this plane reveals the structure of the underlying chaotic attractor. Figure 2 displays the corresponding waveforms of the oscillator, one below the other, showing the amplitude variations over time. Together, these visualizations provide insight into both the temporal and phase-space behavior of the oscillator under the chosen parameters (in Listing 1).
Using FAUST's functional grammar, we can rewrite the Duffing oscillator code (as in Listing 2), producing the block diagram of the oscillator shown in Figure 3. The linear coefficient alpha = -1.0 is algebraically simplified by incorporating the negative sign into the subtraction operation, effectively transforming the linear term into a positive contribution that yields the simplified form x - x³. Negative signs are resolved within the signal flow, consolidating the mathematical expression into its essential form. Parameters are renamed for semantic clarity: delta becomes damping (visible in the feedback multiplication from y), gamma becomes forcing, and omega becomes forcing_frequency, while dt (the integration time step) remains explicit. This algebraic simplification eliminates redundant constant multiplications and sign operations, allowing the code structure to directly mirror the signal flow topology: in Figure 3, each processing operation corresponds to a distinct circuit block without unnecessary terms.
The implementation then introduces the arc tangent function, in order to prevent the signal from exceeding the numerical range and to allow the exploration of unstable, extremely chaotic regions. Finally, we add a set of bandpass filters to further constrain the oscillator and enable it to resonate at specific frequencies of the spectrum (see Listing 3). This coupling is achieved by adding 24 parallel bandpass filters, as shown in equation (6):
\begin{equation}
X_{n+1} = \sum_{i=1}^{24} BP_i(\arctan(Y_n)) \tag{6}
\end{equation}
\begin{equation}
Y_{n+1} = K Y_n - \alpha f (X_n^3) - B \cos(\omega T) \tag{7}
\end{equation}
These filters function as constraints on the oscillator, each filter having its own frequency. The set of filters is designed using Robert Bristow-Johnson's biquad filter coefficients (Bristow Johnson Audio EQ, Smith Digital Filters).
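To make the constrained structure of equations (6) and (7) concrete, the following Python sketch (the actual implementation is in FAUST) iterates the arctan-constrained map through a single RBJ bandpass filter instead of the 24 parallel filters; all parameter values here are illustrative assumptions.

```python
import math

# Sketch of equations (6)-(7): an arctan-constrained Duffing map whose
# output is shaped by one RBJ bandpass biquad (24 parallel filters in
# the paper; one here for brevity). Parameter values are illustrative.

def rbj_bandpass(fc, q, fs):
    """RBJ 'constant 0 dB peak gain' bandpass biquad coefficients."""
    w0 = 2.0 * math.pi * fc / fs
    a = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + a
    return (a / a0, 0.0, -a / a0,       # b0, b1, b2
            -2.0 * math.cos(w0) / a0,   # a1
            (1.0 - a) / a0)             # a2

def constrained_duffing(n_samples, K=0.9, alpha=1.0, B=0.7,
                        omega=2.0 * math.pi * 328.0, fs=48000):
    b0, b1, b2, a1, a2 = rbj_bandpass(fc=328.0, q=10.0, fs=fs)
    x = y = 0.0
    x1 = x2 = yf1 = yf2 = 0.0           # biquad state (direct form I)
    out = []
    for n in range(n_samples):
        yc = math.atan(y)               # arctan constraint on Y_n
        xf = b0 * yc + b1 * x1 + b2 * x2 - a1 * yf1 - a2 * yf2
        x1, x2, yf1, yf2 = yc, x1, xf, yf1
        x_next = xf                                              # eq. (6)
        y = K * y - alpha * x ** 3 - B * math.cos(omega * n / fs)  # eq. (7)
        x = x_next
        out.append(x)
    return out

sig = constrained_duffing(20000)
```

Because the arctan term is bounded and K < 1, the recursion stays finite even in otherwise unstable regions; the bandpass stage then imposes the modal resonance on the output.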
Some CSG parameters define the initial conditions for the system (in the code Listing 3 these include gain=0.5, damping=0.15, forcing=0.7, forcing_frequency=328.0). These parameters, along with others available in the GUI, can be modified in real-time by the performer. Notice, in Listing 3, the influence and external_Mic variables: these enable coupling between multiple signal generating processes in the network, as well as the coupling between those and the external environment. The former represents the interaction between multiple CSGs affecting the damping parameter, while the latter is a microphone input signal fed directly into the sin function of the Duffing oscillator, allowing the system to fold into itself through the external environment via loudspeakers and microphones. The FAUST diagram of the complete CSG is shown in Figure 4.
Within the network, the CSGs are coupled in a cross-feed configuration. Each CSG affects the damping factor of the other. The system equations describing this interaction are as follows:
\begin{equation}
X_1^{n+1} = \sum_{i=1}^{24} BP_i(\arctan(Y_1^n)) \tag{8}
\end{equation}
\begin{equation}
Y_1^{n+1} = \left(K + X_2^{n-m}\right) Y_1^n - \alpha f(X_1^n)^3 - B \cos(\omega T) \tag{9}
\end{equation}
\begin{equation}
X_2^{n+1} = \sum_{i=1}^{24} BP_i(\arctan(Y_2^n)) \tag{10}
\end{equation}
\begin{equation}
Y_2^{n+1} = \left(K + X_1^{n-m}\right) Y_2^n - \alpha f(X_2^n)^3 - B \cos(\omega T) \tag{11}
\end{equation}
where the variable m represents a delay (in samples) in the signal exchanges between the CSGs, with x₁ and x₂ affecting each other in a feedback loop. Equations (8) to (11) are implemented in FAUST using a configuration with 8 CSGs, as shown in code Listing 4. As discussed below, the mutual influence of CSGs incorporating nonlinear controls can be extended over larger time scales.
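The cross-feed structure of equations (8) to (11) can be sketched as follows (Python rather than FAUST; the bandpass stage is omitted for brevity, so X_{n+1} = arctan(Y_n), and the coupling gain c together with the other values are illustrative assumptions chosen so that the effective damping K + cX stays below 1 in magnitude).

```python
import math

# Sketch of the cross-feed coupling of equations (8)-(11): two
# arctan-constrained Duffing maps, each modulating the other's damping
# through an m-sample delay line. All parameter values are illustrative.

def coupled_pair(n_samples, K=0.5, c=0.25, alpha=1.0, B=0.4,
                 omega=0.05, m=64):
    x1 = x2 = 0.0
    y1, y2 = 0.1, -0.1              # slightly different initial conditions
    h1 = [0.0] * m                  # delayed X1 values (X1^{n-m})
    h2 = [0.0] * m                  # delayed X2 values (X2^{n-m})
    out1, out2 = [], []
    for n in range(n_samples):
        d1, d2 = h1[n % m], h2[n % m]          # partner's X^{n-m}
        x1n, x2n = math.atan(y1), math.atan(y2)            # eqs (8), (10)
        y1 = (K + c * d2) * y1 - alpha * x1 ** 3 - B * math.cos(omega * n)  # eq (9)
        y2 = (K + c * d1) * y2 - alpha * x2 ** 3 - B * math.cos(omega * n)  # eq (11)
        h1[n % m], h2[n % m] = x1n, x2n        # write after reading
        x1, x2 = x1n, x2n
        out1.append(x1)
        out2.append(x2)
    return out1, out2

s1, s2 = coupled_pair(20000)
```

Even with identical forcing, the two trajectories diverge because of the asymmetric initial conditions and the delayed mutual modulation of the damping term.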
3. CONTROL SIGNAL PROCESSING UNITS
In the context of feature extraction, a distinction is made between low-level and high-level audio information processing. Low-level algorithms enable continuous feature measurement and operate with short analysis frames. In contrast, high-level algorithms are original designs informed by both perceptual principles and complexity theory, designed to analyze musically meaningful information (Sanfilippo 2021b).
From this perspective, control signals should be seen as the result of processing extracted audio features; mapping them at low frequencies generates autonomous dynamical behaviors directly derived from the audio signals of the network. Control signal processing functions as a kind of self-regulating mechanism in the network system itself, allowing it to evolve autonomously over time (Di Scipio 2003). Di Scipio and Sanfilippo characterize such an approach as follows:
"autonomous" is not to be confused with "automated." Automation implies centralized control. In typical computer music designs, sound events are “automatically” scheduled, or driven, by some formal rules (either a deterministic or indeterministic process), which shape the musical flow in a domain entirely independent of – and fundamentally (in)different to – the medium of sound (be it understood as signal or as a physical and perceptual phenomenon). In our design, [...] we develop larger musical articulations out of the material acoustical environment and [...] situated acoustical events
In RITI, control signal processing units (CSPUs) rely neither purely on audio feature extraction from audio signals nor on automated controls. Instead, they act as nonlinear controllers meant to project the system’s dynamics over larger temporal and gestural frames, enabling complex dynamics spanning the network 5. Hence, the CSG network operates as a fully interconnected, large-scale chaotic system, where control signals are intertwined with audio signals.
3.1 CONTROL SIGNALS
The CSPUs in RITI are composed of two main elements: the trigger section and the sampling section. The triggers generate a low-frequency stream of single-sample pulses at uneven time intervals, as shown in Figure 5.
These triggers (one for each parameter controlling the CSGs) are activated by a Dirac impulse at run time. The Dirac impulse triggers a sample-and-hold module that receives an external signal, unique to each trigger module. The sample-and-hold output controls the impulse train frequency, and the pulses from the impulse train are fed back into the sample-and-hold to update the train frequency, ensuring that each pulse is output at a different time frame. This creates an aperiodic pulse train mechanism. The input signal to the sample-and-hold is remapped so as to produce a pulse period in the range of 4 to 20 seconds. The whole process is illustrated in Figure 6; Listing 5 shows a code sample where triggers are started from a noise signal (generated with a linear congruential generator).
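The trigger mechanism described above can be sketched as follows (Python rather than FAUST; the LCG constants and the event-based formulation are illustrative assumptions): a sample-and-hold of a noise source sets the period until the next pulse, and each pulse re-samples the noise, so successive periods differ.

```python
# Sketch of the aperiodic pulse train mechanism. Each pulse re-samples
# a noise source (linear congruential generator), and the held value is
# remapped to a pulse period between 4 and 20 seconds. The LCG constants
# are illustrative assumptions.

def lcg(seed=12345, a=1103515245, c=12345, m=2 ** 31):
    """Linear congruential generator yielding values in [0, 1)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

def aperiodic_triggers(duration_s, t_min=4.0, t_max=20.0):
    """Return pulse times (in seconds) of an aperiodic pulse train."""
    noise = lcg()
    pulses = []
    t = 0.0
    while t < duration_s:
        pulses.append(t)                                 # emit a pulse
        period = t_min + next(noise) * (t_max - t_min)   # re-sample on pulse
        t += period
    return pulses

pulse_times = aperiodic_triggers(300.0)
intervals = [b - a for a, b in zip(pulse_times, pulse_times[1:])]
```

Because the period is re-sampled at every pulse, no two consecutive inter-pulse intervals need coincide, which is the aperiodicity the CSPUs rely on.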
The multiple triggers activate other sample-and-hold modules, each controlling specific CSG parameters. Various methods can be employed to derive the final six signals (as shown in Figure 6), such as linear congruential generators or other techniques. With this design, an aperiodically sampled sound source (a CSG or a microphone signal) affects the behavior of another CSG. The signals from the network are dynamically normalized with a lookahead limiter to ensure a consistent signal range. For reference, the implementation of the lookahead limiter can be found in (Sanfilippo 2022). Other methods for dynamically normalizing the input signals to the CSPUs could be applied.
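A crude sketch of such dynamic normalization is given below (Python; the window length and threshold are assumptions, and no gain smoothing or delay compensation is applied; see (Sanfilippo 2022) for a complete design).

```python
# Crude sketch of lookahead peak limiting: the gain at each sample is
# computed from the peak over the current sample plus a lookahead
# window, guaranteeing that the output never exceeds the threshold.
# Window length and threshold are illustrative assumptions.

def lookahead_limit(x, lookahead=64, threshold=0.8):
    out = []
    n = len(x)
    for i in range(n):
        # Peak over the current sample and the next `lookahead` samples.
        peak = max(abs(v) for v in x[i:min(i + lookahead + 1, n)])
        gain = threshold / peak if peak > threshold else 1.0
        out.append(x[i] * gain)
    return out

# An over-range square wave is brought down to the threshold.
limited = lookahead_limit([1.5 * (-1) ** n for n in range(256)])
```

In a real-time design, the gain would be smoothed and the signal delayed by the lookahead, so that attenuation ramps in before a peak arrives rather than switching instantaneously.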
3.2 LARGE TIME-SCALES INTERACTIONS BETWEEN CSGS
The coupling between the CSGs can be extended to larger time scales through the use of CSPUs. This allows both the oscillators’ interactions and the parameters they control to evolve in time, introducing greater variety in the system's overall behavior. The complete network is shown in Figure 7. Note that additional simple delay lines have been added in the feedback loop (with delay times from n₁ up to n₈). This extends the time scale at which CSG signals are fed back into the network, while the CSPU control pathways, acting across multiple temporal resolutions (from seconds to several minutes), enable the system to achieve long-term autonomy while remaining sensitive to performer interventions. As a result, sound morphologies emerge which may retain memory of previous states or behaviors. The FAUST code for the full RITI network is given in Listing 6.
4. CONCLUSIONS
In continuity with the systemic approaches outlined in the first part of this paper, RITI positions itself within a lineage where musical form emerges from the system dynamics rather than from external structures. This lineage spans from the early cybernetic theoretical models of Roland Kayn and Walter Branchi, through scores as generative systems in the work of John Cage and circuits as scores in the work of Gordon Mumma and David Tudor, to the more recent complex adaptive networks developed by Agostino Di Scipio and Dario Sanfilippo. RITI reexamines the boundaries between instrument, environment, performer, and performance in continuity with this tradition, integrating the study of chaotic dynamics at the signal-processing level and drawing on contemporary advances in chaos theory and complex adaptive systems studies.
This work was conceived as a preliminary investigation laying the groundwork for developing a compositional methodology and strategy for building complex adaptive systems from scratch in live electronics music. The project, exploring how chaotic and adaptive behaviors can emerge within self-regulating networks, provided initial insights into structuring future research methodologies and goals.
Future research will formalize the process of defining primitives for creating such systems. This includes designing novel models to obtain emergent behaviors in first-order feedback delay networks (FDNs), comb-like structures, and novel chaotic oscillators, and ultimately developing methods for designing complex feedback delay networks while documenting step by step how emergent behaviors can be obtained. Part of this program will involve a systematic analysis of methodologies developed by other composers in their works. In parallel, further development includes the study and documentation of feature-extraction methods enabling the systems to self-regulate via cybernetic strategies and to learn and adapt to their environment.
While documenting these strategies and methodologies, new transduction methods beyond loudspeakers and microphones, such as actuators and contact microphones on resonant mechanical materials or other forms of physical systems (such as augmented autonomous instruments via human-machine interaction), will be investigated to extend the performative possibilities of Complex Adaptive Systems. These approaches may open new types of spatial interactions, studies on sound diffusion, and propagation of local systems.
By advancing these ideas, this research contributes to the growing field of complex adaptive systems in music and to an understanding of chaos, emergent behaviors, and unpredictability not as obstacles to be eliminated, but as fundamental principles of sonic creativity to be embraced. Beyond technical considerations, RITI invites a reconsideration of what it means to compose within a system, suggesting that composition today may also involve designing the conditions under which sound systems evolve, adapt, and self-organize in relation to their environments.
This research is part of the author's doctoral research programme: DREAM (Dottorato di Ricerca in Estetica Artistica Musicale), Conservatory of Music ‘Alfredo Casella’, L’Aquila, Italy (DOT24BX87A, DM 629 Università/Afam; CUP: D11I24001070006).