Feedback is a cybernetic concept and a ubiquitous causal criterion involved in innumerable physical, biological, psychic, social and cultural phenomena (Calimani and Lepschy 1990). The history of electroacoustic technologies has been imbued with it since its inception – recall Lee De Forest’s audion valve (essentially an electronic signal amplifier that could also be used as an oscillator) in the 1910s, or Harold Black’s negative-feedback circuits (used as stable, non-oscillating amplifiers) in the 1920s, to mention only two fundamental achievements. Less known, perhaps, is that already in 1895 Alfred Graham, a telephone manufacturer based in London, placed a transmitter (telephone mouthpiece) and a receiver (earphone) at the two ends of a cardboard pipe, managing to assemble what we might call a ‘feedback-flute’ (the pitch of the generated tones could be determined by closing and opening the side holes of the pipe). For the Scottish physicist John McKendrick (Hermann von Helmholtz’s earliest biographer), Graham’s was “a musical instrument completely independent of the normal mechanical methods”, one that in fact involved a feedback phenomenon later to be called (especially in Germany and other European countries) the ‘Larsen effect’.1
Readers are certainly aware of the centrality of audio feedback in the work of several musicians and composers, in a line of artistic explorations ranging from the most radical fringes of experimental music in the 1960s and 1970s to more recent and very recent endeavors. Since the ingenious artistic attempts pursued in the 1950s and 1960s by cyberneticians such as Gordon Pask and Nicolas Schöffer, generative and manipulative functions of feedback have never stopped having a strong appeal for sound artists of multiple generations. Numerous publications have documented and discussed the variety of approaches that have emerged across the decades, among them (Aufermann 2002, Sanfilippo and Valle 2013, Surges 2015, Gottschalk 2016, Van Eck 2017).
This paper describes simple but peculiar (and perhaps idiosyncratic) feedback mechanisms and related operative strategies that I have developed in my personal live electronics music and sound art practice. In order to evidence compositional and system-theoretical implications of larger interest and purport, my discussion is loosely inspired by constructivist epistemologies and ‘neo-cybernetic’ theoretical perspectives, providing a phenomenological and post-computational understanding of living systems and their embodied involvement with machines (Clarke and Hansen 2009). I’m interested in the dual constitutive function – ontological and relational at once – that feedback plays in musical ‘performance ecosystems’ (Waters 2007, 2011, 2021) and, in general, in hybrid performance infrastructures manifesting nonlinear dynamical sonic behaviors (Di Scipio 2003, Green 2013, Sanfilippo and Di Scipio 2017, Sanfilippo 2019, Melbye 2021). I’m interested in the agentive potential of complex feedback systems, and in revealing that potential in sound.
2. EMERGENT SOUND IN THE AUDIBLE ECOSYSTEMICS PROJECT
For my discussion to be grounded in the solid (yet uneven) terrain of personal practice, I’d like to refer to some of the sound-making strategies involved in a series of works I have composed and repeatedly revised across the last twenty years, titled Audible EcoSystemics. In that context, feedback provides the conditions of existence and the general criteria by which a live electronics performance might occasion some sound and might develop a larger, polyphonic dynamical sonic process. How does it happen, in the first place, that sound events emerge from (rather limited) technical resources operating in variable, contingent site-specific conditions? How might feedback sounds be further developed, then, in a process heard as the unfolding of an articulated musical flow?
To answer, I will overview sound-making strategies involving different resources: analog (electroacoustic transducers, amplifiers, etc.), digital (signal processing software) and mechanical (generic environments of arbitrary shape, size, and acoustical properties – e.g. concert halls, courtyards, flute pipes, the inside of a piano, a mouth's resonant cavities, glass bottles or vessels…). As evidenced later on, here ‘software’ primarily involves very simple algorithms operating not just in ‘real-time’ but in ‘real-space’ too, i.e. driven by control signals themselves born of the particular performance space, hence reflecting the site-specific acoustics. 2
At risk of appearing pedantic, in the context of this publication, I’ll start with the most basic instance of electroacoustic feedback. After all, that is precisely the most fundamental sound-making mechanism involved in several of my works.
2.1 The LAR mechanism: Audio Feedback with Self-regulated Gain
A condenser microphone (M1) and a dynamic loudspeaker (L1) stand in the performance place (S), a few or several meters apart, maybe not too far from walls (or curtains, or other large surfaces). They are connected (through one or more amplification stages) to realize a very basic electroacoustic chain: M1→L1→S. There’s no sound M1 should capture, though, no sound source save the minimal, barely audible turbulence of the background noise, in a situation of ‘silence’. This ‘sound-of-nothing’ is amplified and heard through L1, whence it comes back into S.
If amplification suffices, the L1 sound feeds back into M1 and the chain closes onto itself, making a ‘reinjection’ circuit – a feedback loop. The amplitude level, the transductive technical features of M1 and L1, their relative distance, the distance from walls, etc. – all of that (and much more) sets the actual feedback loop gain. With not-too-high gain levels, what is engendered is an audible nuisance, a kind of ‘halo’: the sound reinjection decays more or less rapidly, in a kind of bad-sounding, spectrally uneven reverb effect. With higher gain levels, the loop eventually enters a self-oscillatory regime; it may ‘ring’ or ‘howl’, as is often said. Because of the iterated reinjection, the barely audible but spectrally wide background noise accumulates in the loop and finally (quickly) yields an increasingly louder sustained sound of narrower spectrum – often heard as a peaking tone of definite pitch, or a tone cluster. That’s the Larsen effect: a self-sustaining feedback resonance occasioned by a positive feedback loop (FB+) (‘positive’ here meaning a loop gain greater than unity). This is illustrated with a simple diagram in figure 1.
The sound onset curve and the actual spectrum width will depend not only on the gain but on several factors (including the mentioned relative distance and position between M1 and L1, several acoustical properties of S, not to mention the technical features of transducer systems such as M1 and L1, e.g. their frequency response, sensitivity, directivity, etc.). Even the minimal non-linear transfer in any sound-related component will in some way affect the feedback process, shaping the timbre and the dynamics of the occasioned sounds. In extreme cases, limitations and non-linearities (especially in amplifiers and transducers) may cause signal saturation: the spectrum gets wider and increasingly more varied or complex (harmonic distortion); amplitude, on the other hand, won’t really increase indefinitely: it can’t grow beyond inherent limits (relative to the maximum power of the amplification as well as to the sensitivity of transducers and the elasticity of the membranes involved). 3
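To make this build-up and saturation dynamic concrete, here is a minimal numerical sketch (in no way the implementation used in the works discussed here): a one-sample feedback loop in which faint noise is reinjected with a loop gain greater than unity and soft-limited by a tanh() nonlinearity standing in for amplifier and transducer limits.

```python
import math
import random

def simulate_larsen(loop_gain=1.1, steps=600, noise_level=1e-4, seed=1):
    """Toy one-sample feedback loop: barely audible noise is reinjected
    with a gain > 1 and soft-limited by tanh(), a stand-in for the
    inherent limits of amplifiers and transducers (hypothetical model)."""
    rng = random.Random(seed)
    y = 0.0
    peak_early = peak_late = 0.0
    for n in range(steps):
        noise = rng.uniform(-noise_level, noise_level)  # the 'sound-of-nothing'
        y = math.tanh(loop_gain * y + noise)            # reinjection + saturation
        if n < 50:
            peak_early = max(peak_early, abs(y))
        elif n >= steps // 2:
            peak_late = max(peak_late, abs(y))
    return peak_early, peak_late

early, late = simulate_larsen()
# the noise accumulates into a loud, sustained level near the saturation
# ceiling, while the amplitude can never grow beyond it (|tanh| < 1)
```

With a loop gain at or below unity, the same sketch would instead decay into the ‘halo’ regime described above.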
In common sound engineering practice, audible feedback phenomena are a nuisance, a problem one should get rid of or substantially minimize. When direct level manipulation is not enough, one resorts to hard-limiting circuits, ‘feedback killers’ and similar devices (for an overview, see Waterschoot and Moonen 2011). In a different spirit, one may instead consider feedback as a resource, a deliberately designed sound-making mechanism one can play with.
To that aim, let’s pull in a real-time processing computer and make it calculate the Root Mean Square (RMS) over short, windowed segments of the microphone signal (that’s like an averaging filter, i.e. a simple finite-impulse-response filter with cutoff frequency close to, but higher than, 0 Hz). This provides a continuous estimate of the input signal amplitude, allowing us to implement a basic ‘amplitude following’ technique – the simplest among ‘feature extraction’ techniques (Widmer et al. 2005). We then complement the RMS values and use the complemented signal to drive the FB+’s gain. In the context of my discussion, this low-frequency signal stands for a typical control signal (CNTRL). As illustrated in figure 2, the operation requires a second feedback loop embedded in the first one.
The particular CNTRL signal introduces a ‘negative’ feedback (FB-): any increase in the sound level will cause a decrease in feedback gain; any decrease in the sound level will cause an increase in feedback gain. There’s a mutual balance. Here, we call that a self-limiting or, better, self-regulating feedback loop – a kind of adaptive level regulator (or changement de niveau adaptative (Verfaille 2003: 162)).
The FB- loop will itself be affected by contingent factors of the kind already mentioned (relative to the acoustics of the performance space, the technical details of the equipment, etc.). It may also vary depending on the ‘window’ length in the amplitude-following operation, i.e. on its promptness to follow up on changes in the incoming signal level. In theory, all factors being balanced, the mechanism would work as an oscillator, producing a sustained and (almost) stable sine tone! In actuality, though, it materializes as a kind of hybrid oscillator (it comprises analog, digital and mechanical components), and a rather hypersensitive one, hardly isolated from external conditions, always affected by acoustic turbulences and incidental sound events occasioned by site-specific factors.
As illustrated in figure 3, FB+ and FB- nurture each other and integrate a double feedback loop, a ‘self-regulating Larsen tone’ generator mechanism (LAR).
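The gist of the FB- stage can be sketched in a few lines – a hypothetical illustration of the principle, not the software actually used in the works: the RMS of a short window is complemented and scaled to the loop gain, so that louder input lowers the gain and near-silence raises it.

```python
import math

def rms(window):
    """Root Mean Square over a short, windowed signal segment
    (the basic 'amplitude following' operation)."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def lar_gain(window, max_gain=1.2):
    """Self-regulated loop gain: complement the tracked amplitude
    (signals assumed normalized to [-1, 1]) and scale it.
    Louder room sound -> lower feedback gain, and vice versa (FB-).
    max_gain is an illustrative placeholder value."""
    cntrl = 1.0 - min(1.0, rms(window))   # complemented CNTRL signal
    return max_gain * cntrl

quiet = [0.001 * math.sin(0.1 * n) for n in range(256)]  # near-silence
loud = [0.9 * math.sin(0.1 * n) for n in range(256)]     # ringing loop
# near-silence keeps the gain close to max_gain, letting a Larsen tone
# emerge; a loud loop pulls the gain down, damping the resonance
```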
The performance space (S) is certainly a constitutive factor of such a feedback mechanism, not merely a ‘sound diffusion’ space. It is indeed generative (it provides the minimal acoustic energy necessary for sounds to happen) and transformative (it acts as a kind of band-pass filter of uneven frequency response, reinforcing some frequencies, dampening others). At the same time, being the very source for the gain control, it is crucial to the adaptive, self-regulatory dynamics of the loop. Any wanted or unwanted perturbation in S may affect the mechanism’s operation (this might have non-marginal consequences on larger-scale features of the complete performance, as will become clear later on).
Notice (in figure 2) that microphone M1 serves both as a source of energy (to make the feedback loop actually resound) and as a channel bearing the information necessary for the self-regulatory mechanism. That arrangement establishes a deterministic relationship in the way LAR functions: any change in the FB+ loop will have a ‘parallel’ change in the FB- loop. We may introduce a second microphone, M2 (standing away from M1) only feeding the negative feedback loop, and keep M1 as the only source for the positive. The decoupling will allow the total mechanism to be more responsive to small changes in the surrounding space, and to respond in different ways even to the same change, depending on current differences between M1 and M2.
The LAR mechanism is essential to the live performance of Feedback Study (Audible EcoSystemics #2a, 2004). In that context, the engendered feedback sounds are not only heard as such but also passed on to further sonic transformations (mentioned in a later section). Moreover, the actual performance equipment includes several microphones (four condenser mics) and a minimum of six loudspeakers (near-field active speakers preferred), plus of course a mixer (understood here as a veritable performance tool).
For the performance of Feedback Study, mics and speakers of the same type and/or brand are to be preferred. However, in a number of sound installations – including Feedback Study Installation (Audible EcoSystemics #2b, 2004) – speakers of different brands and qualities are involved, not all meant for professional use. In such cases, each transducer contributes its own unique ‘voice’ to the resultant polyphonic texture. In Private rooms (a sound installation from 2009), a number of miniature mics are used, coupled to several ear-buds (viewed here as ‘miniature speakers’); they are placed inside glass bottles and vessels, the latter playing the role of space S (a very small but strongly resonating enclosed environment).
With several microphones and speakers, a larger number of positive and negative feedback loops can be established, and multiple LAR instances are interlaced and entwined, materializing in a feedback network whose interactional dynamics is continually mediated by the local sound environment. The performance often consists ‘simply’ in exploring the audible phenomena emergent in such a network of sonic interactions, according to some rather loose general orientations (which I offer in the form of scripts, to be viewed as musical scores open to interpretation). These usually include manipulating the mic levels, individually or in combinations, and selecting different mic combinations as either sound sources or control sources. For simplicity, in the next sections, only sound-making mechanisms involving just one mic and one speaker are described, even when referring to performance contexts where several are actually involved.
Detour / 1
In theory, the negative feedback mechanism (FB-) can be likened to a usual limiter circuit. A typical limiter attenuates the signal according to detected amplitude peaks in the input signal (Dutilleux and Zolzer 2002: 99-102). However, non-marginal differences can be pointed out. The FB- mechanism tracks the average amplitude of portions of the signal, not the amplitude peaks (in this regard, it is more consistent with the perceived loudness than with the actual level). More relevant is that, in a typical limiter just as in other ‘dynamics processors’ (compressors, expanders), the input (the signal prior to limiting) and the output (the limited signal) are two different signals, whereas in our case they are in fact one and the same: because of the feedback loop, the input is exactly what has just been output by the limiting mechanism (save for a brief initial moment, due to the integration time of the RMS computation). In addition, the signal whose level we want to limit (M1) might not necessarily be identical with the one driving the limiter (M2): the different positions in the room may result in significant phase- and amplitude-related differences. That’s not at all what happens in usual limiters.
Detour / 2
Since the 1960s, several musicians and composers have explored creative ways to handle and play with electroacoustic feedback phenomena. Probably the earliest examples include Robert Ashley’s The Wolfman (1964) and Hugh Davies’ Quintet (1968). In such pioneering works, feedback was manually controlled (through direct manipulation of input and output levels), and was in any case further constrained by the limitations inherent in the available electroacoustic equipment. Early examples of controlled feedback mechanisms are found in the work of Gordon Mumma, David Tudor, Alvin Lucier, Eliane Radigue, among others. The first specially designed self-regulating feedback system is probably the one involved in Nicolas Collins’ sound installation Pea Soup (1974-1976). In 2001, Collins’s installation was re-implemented using computer-operated limiting processes. For an overview of more recent and very recent examples of automated feedback mechanisms in creative sonic endeavours, see (Sanfilippo and Valle 2013).
Let’s also remind ourselves that several ‘feedback instruments’ have recently been developed, ranging from Michelangelo Lupone’s feed-drum (Lupone and Seno 2005) to Adam Pultz Melbye’s feedback-actuated augmented bass (Melbye 2021) – to name just two. Of the utmost relevance, in my view, is the circumstance that such instruments are usually integral to particular compositions and/or to the peculiar performance practice of their author – much as in the case of Gordon Mumma’s cybersonic console in the mid-1960s (Mumma 2015).
2.2 Delayed Feedback (LAR-Del)
Let’s go back to the LAR mechanism, and slightly extend its basic design. This time, we route the microphone signal into a computer-operated delay line (DEL), before passing it to the loudspeaker (figure 4). The operation is of course just a linear time shift of the signal samples, a ‘single-tap delay’. However, in this context, we will only consider delay times in the order of several seconds to several tens of seconds.
Again, there’s nothing that the microphone is to capture in particular, no sound source except the background noise in the performance place, audible only in quiet situations. Again, if gain suffices, the loudspeaker sound will feed back into the microphone. This time the signal will accumulate in the loop at a slow rate, increasing stepwise at subsequent time spans equaling the delay time. After some iterations, the loop will start reinforcing the most prominent acoustic resonances of the performance place (yet always relative to the local background noise and other factors, such as the mic-to-speaker distance, their technical features, etc.). That provides a kind of belated Larsen phenomenon, whose gain is kept under the control of the negative feedback mechanism. We have a self-regulating delayed feedback mechanism (LAR-Del). Clearly, any (deliberate or aleatoric) sound events in the surroundings will eventually enter and affect the ongoing process, at least to the extent that their spectra match the main resonant frequencies of the local space.
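A toy rendition of LAR-Del, under simplifying assumptions stated in the comments (a mean-of-absolute-values follower instead of RMS, a hard clipping ceiling instead of real transducer limits, and faint synthetic clicks standing in for background noise):

```python
from collections import deque

def lar_del(input_signal, delay_samples, max_gain=1.2, window=64):
    """Toy LAR-Del: a single-tap delay inside a feedback loop whose gain
    is the complement of a running amplitude estimate (a mean of absolute
    values here, instead of RMS, for brevity). Hypothetical sketch."""
    buf = deque([0.0] * delay_samples, maxlen=delay_samples)   # delay line
    recent = deque([0.0] * window, maxlen=window)              # follower window
    out = []
    for x in input_signal:
        delayed = buf[0]                             # oldest stored sample
        level = sum(abs(v) for v in recent) / window
        gain = max_gain * (1.0 - min(1.0, level))    # FB-: self-regulation
        y = x + gain * delayed                       # FB+: reinjection
        y = max(-1.0, min(1.0, y))                   # hard ceiling stand-in
        buf.append(y)
        recent.append(y)
        out.append(y)
    return out

# stand-in for faint background noise: tiny clicks every 7 samples
noise = [1e-3 if n % 7 == 0 else 0.0 for n in range(2000)]
out = lar_del(noise, delay_samples=100)
# the faint input slowly accumulates, one delay-time span after another,
# while the self-regulated gain keeps the level bounded
```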
Is LAR-Del just a linear time shift of LAR? That would be so if the FB- loop had the same time delay as FB+; if the ‘window’ in the amplitude follower were proportionally enlarged; and if the actual background noise and all acoustical events in the surroundings were themselves proportionally ‘slowed-down’. The latter condition remains of course in the realm of imagination: we are rooted in the real-time and real-space conditions of direct experience.
The LAR-Del mechanism is crucial in Background Noise Study (Audible EcoSystemics #3a, 2005) and its variants (Background Noise Study, in the Vocal Tract, 2005, and Background Noise Study with own Silent Actions, 2014).4 In such works the main delay time is typically fixed at twenty seconds, yet it’s not always easy – upon hearing – to perceive a regular pace and to grasp a sense of cyclic repetition. Even less so when other sonic processes are made to overlap the main feedback loop (as discussed later). However, the long delay line does provide the performance with a kind of slower ‘breath’. It makes sure that – even on the verge of failing and resuming into silence – some new sound event will eventually (re)emerge from the ambient noise. Waiting, longing for that to happen, is an integral part of the performance.
The presentation of Background Noise Study requires two or more mics placed close to possible noise sources, however weak or fleeting their sonorous presence might be (doors, windows, light system, audience and chairs – or even the mixer and other equipment!). Aspects of this radical commitment to ‘making music with no sound save the feeblest, site-specific background noise’ are discussed elsewhere (Di Scipio 2011). I should only add that the whole Audible EcoSystemics project has been directly or indirectly inspired by the order-from-noise principle that Heinz von Foerster introduced, in the late 1960s, in order “to account for the complex behavior of living systems […] in terms of self-organizing [= autonomous] systems” (Atlan 1972: 23). Noise, as related to random perturbations (sometimes even to patterned events) in a specific niche, is essential to the living (Hutchinson 1952, Maturana and Varela 1980, Von Foerster 2003).
Detour / 3
The LAR-Del mechanism is very reminiscent of Alvin Lucier’s pioneering I am sitting in a room (1970).5 However, Sitting takes off from a certain kind of sound, i.e., a voice reading a text (not necessarily the text the composer offers in his prose score). Albeit under-determined prior to actual performance, a speech is a sound event not only very different from, but of much more peculiar spectral and dynamical shape than, a quiet room’s background noise. Furthermore, Sitting demands that one always keep an eye on the total sound level and manually adjust the gain, as appropriate, to make sure the process neither builds up excessively nor fades into silence.6
As briefly illustrated above, the delayed feedback loop designed for the Background Noise Study provides not only a self-sustaining but also a self-regulating dynamic. Sounds emerging in the delayed feedback are not only heard as such but also provide a basic sounding layer subject to further elaborations, whose sounding results in turn enter and affect the continuing delayed loop – thus creating a network of second- (and greater-…) order loops. I will return to the latter point in a later section.
2.3 Background Noise Down-sampled: Self-regulating Pulse Code Modulation (SRPCM)
Let’s pick up again the basic positive feedback mechanism (FB+). Let’s multiply the microphone signal by a real-time computer-synthesized pulse train signal (IMP) (figure 6). The term ‘pulse train’ here means a sequence of clicks – more precisely, a sequence of digital samples (non-zero, positive values) alternating with shorter or longer silent segments (zero-valued samples). The individual pulse duration stands in inverse proportion to the sampling rate (Δt = 1/SR).7 Pulse rates considered here lie in the very reduced range of 0 to 20 Hz.
Basically, that represents a particular instance of amplitude modulation, or better a ring modulation (the pulse signal being only positive). It is indeed equivalent to pulse code modulation (PCM), i.e. the most ubiquitous sampling method in the world (of mainstream digital audio).8 Nonetheless, because only sub-audio pulse frequencies are used here (lower than 20 Hz), we rather have a very poor ‘down-sampling’ process, so poor that the sounding results are only heard as rhythmic patterns of clicks.
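A sketch of this ‘down-sampling’ (function names and parameter values are illustrative only, not drawn from the actual software): the microphone signal is multiplied by a sub-audio train of single-sample pulses, leaving a rhythm of clicks.

```python
import math

def pulse_train(num_samples, rate_hz, sr=44100):
    """Sub-audio pulse train: one non-zero sample every sr/rate_hz
    samples, zeros elsewhere (each pulse lasts 1/SR seconds)."""
    period = int(sr / rate_hz)
    return [1.0 if n % period == 0 else 0.0 for n in range(num_samples)]

def downsample_pcm(mic, rate_hz, sr=44100):
    """Multiply the microphone signal by the pulse train: a crude
    down-sampling heard only as a rhythmic pattern of clicks."""
    imp = pulse_train(len(mic), rate_hz, sr)
    return [m * p for m, p in zip(mic, imp)]

# stand-in for the mic signal: a faint 221 Hz room resonance
mic = [0.5 * math.sin(2 * math.pi * 221 * n / 44100) for n in range(44100)]
clicks = downsample_pcm(mic, rate_hz=10)   # ten clicks per second
nonzero = sum(1 for v in clicks if v != 0.0)
```

Each surviving click carries the instantaneous value of the room sound at that moment, which is why the resultant ‘sonic dust’ is colored by the site-specific acoustics.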
With a fixed pulse rate, we get a deterministic, periodic pattern of differently loud clicks (IMPd). With random rate variations, we get a kind of statistically varying texture of clicks (IMPs). In the latter case, it seems more appropriate to speak of the density of the resultant pulse sequence, instead of the rate.
Now, because the sound captured by the microphone is itself a somewhat random signal (background noise and ambience sounds), we finally obtain either a periodic pattern with random accentuations, or a cloud of statistically scattered clicks (IMPs) (figure 7). The character of the resultant texture can be likened to a kind of ‘sonic dust’, a texture of tiny ‘sound dots’ whose timbral coloration is typically affected by the site-specific acoustics and other contingent factors.
Of course, we want to turn that into a self-regulating mechanism. First, let’s have the pulse amplitude change in direct proportion to the sound level tracked down in the performance place (amplitude-following of the microphone signal): the louder the room sound, the higher the pulse train amplitude. Second, let’s have the pulse rate also change, but in inverse proportion to the room sound level: the louder the latter, the lower the pulse rate – or, the sparser the statistical density of clicks. By overlapping the two (figure 8), we end up with a self-regulating Pulse Amplitude Modulation (PAM) folded in a self-regulating Pulse Rate Modulation (PRM).
If gain level suffices (and if a rather reverberant room is chosen as the performance space S), this mechanism turns into a peculiar feedback process: however faint, the resultant pulse trains will elicit innumerable acoustical reflections from walls and other surfaces, adding to the background noise and the total ambience sound and causing a greater or smaller increase in energy (PAM, positive feedback); that is soon counterbalanced or compensated by the decrease in pulse rate or density (PRM, negative feedback). The two are antagonistic but mutually feeding processes: the former enhances the background noise and hence increases the loudness of a myriad of sonic micro-events, while the latter reduces the density of micro-events, preventing the indefinite growth of the former. The whole is an intricate Self-Regulating Pulse Code Modulation mechanism (SRPCM) operating within the main positive feedback loop (figure 9).
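The two antagonistic mappings can be illustrated as follows; the linear mapping and its ranges are hypothetical placeholders (the actual works use more elaborate, hand-crafted mappings):

```python
def srpcm_params(room_level, amp_range=(0.0, 1.0), rate_range=(0.5, 20.0)):
    """Map the tracked room level (normalized 0..1) onto pulse parameters:
    amplitude in direct proportion (PAM, positive feedback), rate in
    inverse proportion (PRM, negative feedback). Linear for simplicity."""
    level = max(0.0, min(1.0, room_level))
    amp = amp_range[0] + level * (amp_range[1] - amp_range[0])
    rate = rate_range[1] - level * (rate_range[1] - rate_range[0])
    return amp, rate

quiet_amp, quiet_rate = srpcm_params(0.05)  # quiet room: faint, dense clicks
loud_amp, loud_rate = srpcm_params(0.9)     # loud room: strong, sparse clicks
```

The opposed directions of the two mappings are what keeps the process from growing indefinitely: louder pulses come at the cost of fewer of them.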
Because deterministic (IMPd) and random (IMPs) pulse trains are overlapped, and because different combinations of amp and rate modulation can be established, the actual mechanism becomes an intricate network. Depending on the mapping of control signals onto frequency and density ranges, audible correlations and interdependencies may characterize both the micro-level details and the larger-scale, global properties in the resultant sonic texture. The accurate mapping of real-time generated control signals onto perceptually relevant correlates of signal processing variables is integral to an approach of ‘composing sonic interactions’ (Di Scipio 2003, 2008). In actual work with the Audible EcoSystemics project, several mapping strategies are elaborated and a whole network of control signals is shaped up, based on two or more microphone inputs (Di Scipio 2020: 135-163).
The SRPCM mechanism is crucial to the performance of Impulse Response Study (Audible EcoSystemics #1, 2003) and Untitled (a sound installation meant for reverberant rooms, 2005). It is perhaps worth noting, in this regard, that the performance of Impulse Response Study can be joined by two or more percussionists using little claves or hand-claps to stimulate the room’s – hence the system’s – response. In the ‘performed sound installation’ Silence Study (Audible EcoSystemics #4, 2018), SRPCM mechanisms are modified to include multiple delay lines of several seconds. Therefore, the sound dots, now born of the background noise, are only heard and reinjected into the loop later. That establishes a delayed feedback loop similar to LAR-Del but in the rhythm domain, resulting in porous textures of micro-particulate sound events.
Detour / 4
In signal processing theory, a pulse train is a periodic signal approximating a series of equally spaced Dirac functions – sometimes it is called a ‘Dirac comb’ (Jutten 2009). Ideally, the Dirac function has an infinitely large and flat harmonic spectrum. A single digital audio sample can only provide a very rough approximation of that. Still, the ‘click’ thus approximated can be critical when played through a loudspeaker: its actual sound reveals, to the ear, the non-neutrality, the peculiar ‘voice’ or timbre of the loudspeaker (the ‘impulse response’ of loudspeaker systems is a typical quality criterion in professional loudspeaker design, and correlates with its ‘frequency response’). That is why the timbre coloration of the SRPCM pulse trains is rather different on different loudspeaker systems, across performances.
This ‘merely technical’ circumstance connects well to a more general attitude in creative approaches to electronics: audio technologies are never neutral means of sonic communication, even less so in practices where one inventively appropriates or designs her/his own tools and makes them part of the artistic meaning. In the context of the Audible EcoSystemics project, no technical device or resource involved in the performance is viewed as a neutral, transparent channel – neither of sounds (acoustical, mechanical sources) nor of signals (analog and digital sources). In my view, all components in a ‘performance ecosystem’ (Waters 2007) are active operators that leave audible traces in the resultant fabric of sound. Nothing in the material space of performance, and no piece of equipment, is totally foreign to sound. Even an audience, with its group behavior, is an active force in the performance ecosystem. Sound always emerges in the mutual entanglement and the co-dependent dynamics of material mediators sharing specific time and space conditions.
2.4 Pulse-width Modulated Feedback (LAR-PWM)
A pulse train can be modulated not only in its rate (as in the PRM mechanism above), but also in its pulse ‘width’. In this case, the duration of individual pulses (their ‘width’) is made longer than one sample. Let’s have it driven, again, by a CNTRL signal. That requires an (arbitrary) mapping of amplitude values onto duration values (a few tens to hundreds of samples). We obtain a Self-Regulating Pulse-Width Mechanism (SRPWM), merged in the main, positive feedback loop (figure 10).
Depending on the current pulse width, the feedback loop gain will be either switched ‘on’ (pulse amplitude > 0) or ‘off’ (pulse amplitude = 0). With enough pulse width and feedback gain, the sound feeds back into the loop (gate open), ‘ringing’ for a moment, soon followed by a shorter or longer pause (gate closed). This is repeated 0 to 20 times per second, and hence results in a kind of segmentation or granulation of Larsen tones – a quick ‘chopping’, or better a ‘gate modulation’, of a high-gain feedback loop. However, the squared pulse may not always be wide enough to let the loop actually sound. As roughly sketched in figure 11, with a much reduced pulse width (close to a single digital sample), there’ll be no chance for the feedback loop to ring, and only a sequence of clicks will be heard. Only with (relatively) wider pulses will the loop effectively ring. In a way, this mechanism provides another means to dampen or counterbalance the underlying positive feedback loop (FB+): it functions as a negative feedback mechanism (FB-), but with its own dynamics and peculiar side-effects.
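The gating can be sketched as follows (all parameter values are illustrative): a control value near 0 yields near-single-sample pulses that cannot make the loop ring, while larger values hold the gate open long enough for the resonance to build.

```python
def pwm_gate(control, sr=1000, pulse_rate=5.0, min_width=1, max_width=80):
    """Gate signal over one second at sampling rate sr: pulses at
    pulse_rate Hz, each held 'open' (1.0) for a width, in samples,
    mapped from the control value (0..1). Hypothetical linear mapping."""
    period = int(sr / pulse_rate)
    width = min_width + int(control * (max_width - min_width))
    gate = [1.0 if (n % period) < width else 0.0 for n in range(sr)]
    return gate, width

gate_narrow, w_narrow = pwm_gate(0.0)  # single-sample pulses: clicks only
gate_wide, w_wide = pwm_gate(1.0)      # wide pulses: the loop can ring
open_narrow = sum(gate_narrow)         # total 'gate open' time, in samples
open_wide = sum(gate_wide)
```

Multiplying the main feedback loop’s signal by such a gate realizes the ‘gate modulation’ described above.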
The actual interaction of pulse width, feedback onset and feedback resonant frequency may prove highly nonlinear and hard to manage. To better handle it, one can introduce one more negative feedback mechanism (FB-) at the end of the pulse modulation, driven by the same CNTRL signal used to modulate the pulse width, or by another one (denoted as CNTRL2 in figure 12).
Such an intricate process will eventually manifest second-order behaviors, heard as slower variations in the amplitude and rate (density) of micro-sound events. As a side-effect, it will impact the main feedback loop’s frequency response, varying the resonant frequencies as a function of the changing pulse width (in addition to the several factors having a role in the main feedback loop). The complete mechanism is now a self-regulating pulse-width modulation of Larsen tones (LAR-PWM). In its essentials, it can be schematized as in figure 13.
At least two such mechanisms are involved in the Impulse Response Study performance, each with its own microphone-loudspeaker pair, each with its independent self-regulating mechanisms. When operating concurrently, they affect one another in ways either constructive (the one feeds the other and enhances its sonic behaviour) or destructive (the one inhibits and constrains the other, either to the point of making it a double of itself, or to the point of silencing it).
The present mechanism may occasion peculiar audible artefacts: when the main feedback loop happens to reinforce low-frequency resonances, the pulse signal will cause noticeable ‘tocs’, i.e. glitches typically due to the squared profile of each subsequent pulse (a kind of short rectangular envelope). The point is not so much that such sonic by-products should be considered aesthetically unacceptable – this is not the place for purely aesthetic criteria. The point is that this side-effect may constrain further interactions, narrowing the sonic potential in the long run. (This is a case of ‘downward causation’: emergent phenomena may affect – limit, in this case – the process they are born of.) One can pass the pulse signal through a high-order low-pass filter, smoothing out the pulse’s neatly squared profile and approximating a gentler, quasi-Gaussian envelope shape (one may need to compensate for the filter’s loss of energy). The intended optimal result is illustrated in figure 14. The LAR-PWM mechanism now resembles a more usual form of amplitude modulation, or perhaps a ‘granulator’ embedded into a resonating feedback loop.
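One plausible realization of that smoothing – a cascade of one-pole low-pass filters approximating a high-order filter – is sketched below; it illustrates the principle and is not the filter actually used in the works.

```python
def smooth_pulse(pulse, passes=4, a=0.9):
    """Cascade of one-pole low-pass filters, y[n] = (1-a)*x[n] + a*y[n-1]:
    each pass further rounds the squared edges of a rectangular pulse,
    approaching a quasi-Gaussian bump and removing the 'toc' glitch.
    The number of passes and the coefficient a are illustrative values."""
    y = list(pulse)
    for _ in range(passes):
        prev = 0.0
        smoothed = []
        for x in y:
            prev = (1 - a) * x + a * prev
            smoothed.append(prev)
        y = smoothed
    peak = max(y) or 1.0
    return [v / peak for v in y]  # compensate the filter's energy loss

rect = [1.0 if 20 <= n < 60 else 0.0 for n in range(200)]  # squared pulse
smooth = smooth_pulse(rect)
# the abrupt 0 -> 1 jump of the rectangle becomes a gradual rise
```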
3. HIGHER-LEVEL PROCESSING AND LARGER-SCALE SONIC DEVELOPMENTS
As previously mentioned, in the Audible EcoSystemics works, feedback sounds are usually passed on to further processing. The idea is to expand the basic sonic layer and achieve a certain polyphony, a certain overlap of different but strongly interrelated timbral and dynamical shapes, possibly through a process capable of (relatively) autonomous, non-supervised developments over longer time spans.
The effort is to adopt very simple digital signal processing operations – multiple delay lines, resampling techniques (with related artefacts) and granular techniques (with related statistical dispersion of energy in the frequency and the time domain) – and turn these into mutually dependent component processes in a more encompassing transformational system. In the following, I limit myself to only two such transformational processes, and emphasize the functional strategies they are to operationalize in the larger generative sonic network.
3.1 Resampling and Multiple Delay Lines (RED)
Consider a generic real-time ‘sampler’ (SAMP), with a digital audio buffer (of several seconds) that gets cyclically written (temporary storage of samples) as well as cyclically read (sequential pick-up of samples). Suppose the read cycle proceeds at an arbitrary speed, resulting in the resampling of the temporarily stored signal, with related frequency shifts (heard as pitch transpositions, within certain boundaries). Suppose the input to SAMP comes from the LAR feedback loop (or any other of the above-discussed feedback mechanisms). And suppose that the resampling speed is driven by a control signal (CNTRL) shaped as a (direct or inverse) function of the detected amplitude in the room sound (one has to devise some appropriate mapping from amplitude values to sampling speed ratios). The overall process is schematically illustrated in figure 15.
Let’s now refine it a bit, adding a delay line at the SAMP’s output (figure 16). Only the delayed signal is passed on to the loudspeaker, not the ‘dry’ signal. Further, we want to vary the delay amplitude level as well as the delay time, driving them with a second, appropriately mapped control signal (not the same as the one driving SAMP’s speed). For example, the delay time could vary in direct proportion to the M2 signal level, while the delay amplitude could vary in inverse proportion to that same level. The complete design, illustrated in figure 16, could be called REsample and Delay (RED).
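The RED design can be sketched in plain Python. This is a hedged, offline simplification: the function names, the 0.5x–2x speed range and the `max_delay` value are illustrative assumptions, and the actual work uses continuously written circular buffers with real-time control mapping rather than a one-shot read.

```python
def resample(buffer, speed):
    """Read the stored buffer at an arbitrary speed ratio via linear
    interpolation; speed > 1 transposes up, speed < 1 transposes down."""
    out, pos, n = [], 0.0, len(buffer)
    while pos < n - 1:
        i = int(pos)
        frac = pos - i
        out.append(buffer[i] * (1 - frac) + buffer[i + 1] * frac)
        pos += speed
    return out

def red(buffer, level, max_delay=1000):
    """level in [0, 1] is the detected room amplitude (amplitude follower).
    Illustrative mappings, not the work's actual ones:
    - resampling speed: direct function of level (0.5x .. 2x);
    - delay time: direct proportion; delay amplitude: inverse proportion."""
    speed = 0.5 + 1.5 * level
    resampled = resample(buffer, speed)
    delay_time = int(max_delay * level)   # samples of pure delay
    delay_gain = 1.0 - level              # only the delayed signal is output
    return [0.0] * delay_time + [v * delay_gain for v in resampled]
```

A louder room thus yields a faster (upward-transposed) read, a longer delay and a softer delayed output, which is the kind of counterbalancing the control mappings are meant to institute.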
Implicit in figures 15 and 16 is the fact that microphone M2 and its partner loudspeaker L2 are in the same performance space (S) where microphone M1 and loudspeaker L1 also are, so the main feedback loop (LAR) and the resampling & delay mechanism (RED) operate in the same material environment. The resampled sound may – if gain suffices – enter the fundamental feedback loop. Accordingly, LAR feeds RED but may eventually also be fed by it. One can also say that each feeds back into itself through the other. The two are structurally, permanently coupled: once started, they establish one and the same larger system (figure 17), with unique interactional dynamics whose sounding manifestations range from subtle textural nuances to more dramatic gestures. That’s what they are to accomplish in the performance of both Feedback Study and Background Noise Study.
3.2 Dispersive Granular Processing (GRD)
In this paper, the term ‘granular processing’ (or ‘granulation’) refers to a particular form of real-time resampling. As the incoming signal samples are written and temporarily stored in a (not-so-small) audio buffer, tens or hundreds of sample chunks are read off the buffer and enveloped, yielding tens or hundreds of ‘sound grains’. Signal chunks can be retrieved in linear, sequential fashion (advancing from the first sample to the last) or in some nonlinear fashion (either deterministic or random).
This is not the place to discuss granular synthesis theory and techniques of real-time granulation (several approaches are discussed in (Roads 2001)). Overall, such techniques allow us to adopt compositional strategies at a micro-time scale in the sound (say, on the order of centiseconds, if not smaller). They provide ways to work with variable densities of acoustic energy, shaping smoother or more porous and abrasive textures of sonic droplets – not so different from a ‘sound dust’ of randomly controlled pulse trains. In the present context, the idea is more precisely to foster a dispersion of sonic energies, scattered in innumerable sonic particles.
Let’s consider a polyphonic granulator, i.e. a real-time digital signal processing algorithm with several overlapping granular streams. Let’s call it GRAN. Its task is to ‘granulate’ or crumble the sounds born of the LAR mechanism (or any other mechanism among those illustrated above). If feedback conditions allow, it may eventually process its own output, in a kind of iterated granular processing that vaporizes and dissolves the incoming sound.
Suppose GRAN’s density (amount of sound grains per second) and amplitude level are being driven by some CNTRL signal, the latter reflecting the total sound level in the performance space S (including of course the GRAN output sound). Taken together, density and amplitude are perceptual correlates of loudness. Let’s have them dynamically controlled with two opposite mappings of one and the same CNTRL signal: the higher the density, the weaker the amplitude – or vice versa. We may add a third variable parameter, grain duration, and have it controlled in a way either contrastive or supportive of the other two variables. The complete mechanism will create peculiarly porous sonic textures, whose micro-level articulation is permeated by a mix of counterbalancing forces, either thickening-up or deflating the input signal energy. Let’s call this self-regulating ‘granular dispersion’ mechanism GRD (figure 18).
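The GRD logic can be sketched as follows, under stated assumptions: the names (`grd_params`, `make_grain`), the density ceiling and the raised-cosine grain envelope are illustrative choices, not the work's actual parameters. The key point is the pair of opposite mappings of one and the same control value.

```python
import math, random

def make_grain(buffer, start, dur):
    """Read a chunk off the buffer and apply a raised-cosine envelope."""
    chunk = buffer[start:start + dur]
    n = len(chunk)
    return [v * 0.5 * (1 - math.cos(2 * math.pi * i / max(n - 1, 1)))
            for i, v in enumerate(chunk)]

def grd_params(level, max_density=200.0, max_amp=1.0):
    """Opposite mappings of one and the same control signal (level in [0,1]):
    the higher the density, the weaker the amplitude."""
    density = max_density * level          # grains per second (direct)
    amplitude = max_amp * (1.0 - level)    # grain gain (inverse)
    return density, amplitude

def grd(buffer, level, out_len, sr=44100, grain_dur=441, seed=0):
    """Scatter enveloped grains at random onsets: 'granular dispersion'."""
    rng = random.Random(seed)
    density, amp = grd_params(level)
    n_grains = int(density * out_len / sr)
    out = [0.0] * out_len
    for _ in range(n_grains):
        grain = make_grain(buffer, rng.randrange(len(buffer) - grain_dur),
                           grain_dur)
        onset = rng.randrange(out_len - grain_dur)
        for i, v in enumerate(grain):
            out[onset + i] += v * amp
    return out
```

As the detected room level rises, the texture thickens while each grain gets softer, and vice versa: the two mappings counterbalance each other, which is what keeps the dispersion self-regulating.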
4. A GENERATIVE NETWORK OF INTERDEPENDENT MECHANISMS
In the previous sections, the LAR mechanism has been taken as the only input to higher-level processing (that matches the design particular to the Feedback Study). However, other lower-level mechanisms can be used, such as LAR-Del (as in Background Noise Study) or LAR-PWM (Impulse Response Study). A mix of all these, together with variants of the self-regulating pulse code modulation mechanism (SRPCM), is concurrently involved in Silence Study and other works (especially sound installations).
It is important to add that the higher-level processing mechanisms, while introducing a larger sonic potential, also pursue a more systemic function. The RED task is essentially to repeat, multiply and vary the sonic layer(s) occasioned by the underlying feedback mechanisms. The GRD task aims instead to dissolve, consume, tear apart the incoming feedback sounds. One is an operator of redundancy and growth, the other an operator of dispersion and decay. They provide mutually contrasting tendencies in a larger system of peculiar dynamics. Their sound will interfere in different manners with the lower-level loop(s) that feed(s) them, creating different but interdependent paths in a more overriding recursive logic.
The latter observation implies a need to characterize the higher-order systemic unit that results from interacting low-level mechanisms. For reasons of clarity, I have discussed the single mechanisms as separate, independent designs, each with its basic component parts. But in actual work, they are assembled together, intermingled and interconnected in various ways. For example, in the Feedback Study, three RED samplers are mixed and the mix is further processed by GRD mechanisms (figure 19).
In other words, signal transformations are indeed fed by a lower-level feedback loop, but they also feed one another. Each work in the Audible EcoSystemics project has a unique arrangement of both lower-level feedback mechanisms and higher-level transformational ones. The notable exception is the Silence Study (Audible EcoSystemics#4), which in fact involves a mix of self-regulating feedback loops, with no further processing. In any case, several sound-making and sound-transforming mechanisms are made to operate concurrently and interdependently.
Digital signal processing operations are of course made to work either in parallel (mix of separate processes) or in series (cascaded processes) (figure 20).
However, a clear distinction between ‘parallel’ and ‘cascaded’ is only appropriate with reference to the software implementation of the digital processing methods involved. In the present context, it remains mostly a manner of speaking. The sounding results of both the feedback loop and the transformational mechanisms are heard in the loudspeakers and recirculate through the room into the microphones, feeding not only the audio but also the control signal generation. Therefore, in a very concrete sense, the various mechanisms always function as an intricate network, a hybrid assemblage of cascaded processes.
That circumstance is hardly illustrated with a simple schema or diagram. In figure 21, one can only grasp that, with sufficient gain (which in no way implies loud or deafening sonorities), one or more feedback mechanisms (FBM) and one or more digital signal processing algorithms (DSP) are entwined to make a larger generative and transformative system network, coupled as they are through and by the surrounding space and the electroacoustic terminals involved (microphones and loudspeakers).
Depending on the particular arrangement of several such mechanisms, a range of nonlinear dynamical behaviors is likely to emerge, whose sonic potential is creatively explored across the performance. The performance activity itself may or may not follow a precise plan. Such a plan, in any case, is to be made by the performer(s), and yet its actual proceedings will inevitably remain strictly dependent on the real-time, real-space system dynamics set in place.
5. CONCLUSIONS. FEEDBACK AS A RECURSIVE OPERATIONAL LOGIC
In conclusion, the strategy common to the works included in the Audible EcoSystemics project is threefold: in different ways, the performance process is to (1) accumulate site-specific background noise in one or more electroacoustic feedback loops, turning the latter into idiosyncratic sound generators; (2) subject the feedback sounds to some simple digital signal processing transformations; (3) drive relevant variables in both the feedback loops and the signal processing with control signals reflecting auditory properties of sounds circulating in the room, turning the individual processes and the whole network into self-regulating mechanisms. The latter point has been illustrated leaning on amplitude-following as the only feature extraction method for the control signal generation. However, one can of course resort to other feature extraction methods, creatively mapping the extracted data onto viable control signals.9
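As a minimal sketch of the amplitude-following idea, the control signal generation can be reduced to a one-pole envelope follower on the rectified signal, normalized into a usable control range (the coefficient and the normalization scheme are illustrative assumptions, not the works' actual settings).

```python
def amplitude_follower(signal, coeff=0.999):
    """One-pole envelope follower on the rectified signal: a minimal
    amplitude-following feature extractor for control-signal generation."""
    env, acc = [], 0.0
    for v in signal:
        acc = (1.0 - coeff) * abs(v) + coeff * acc
        env.append(acc)
    return env

def to_control(env, floor=0.0, ceil=1.0):
    """Normalize the followed amplitude into a [floor, ceil] control signal.
    This is the direct mapping; an inverse mapping would flip it as
    ceil - c + floor."""
    peak = max(env) or 1.0
    return [floor + (ceil - floor) * v / peak for v in env]

# a burst followed by silence: the control rises, then decays smoothly
sig = [1.0] * 2000 + [0.0] * 2000
ctl = to_control(amplitude_follower(sig))
```

The same followed value can then be mapped directly onto one parameter (e.g. grain density) and inversely onto another (e.g. grain amplitude), as in the mechanisms above; richer feature extraction methods would simply replace the follower while keeping the mapping stage.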
Once the performance is started, it’s hard to make a distinction between the background noise or any other sound of the particular performance space, and the sounds occasioned by the performance process in that space. Except in very quiet moments, the latter will mask the actual background noise they are born of. More generally, it’s hard to tell what is input and what is output: as they circulate in the performance space, any output sound quickly turns into an input. Also, as control signals continually shape the sonic process, the resultant sound in turn sets the conditions for the ongoing generation of the control signals, in an irreducible entanglement of energy (sound) and information (about sound).
With this recursive logic, feedback is indeed structural across time scales (audio as well as sub-audio, in sound as well as in sonic shapes and gestures). It permeates larger developments and each particular exchange among component parts. This recursive logic of feedback is both ontological (without it, there would be no sound) and relational (it makes music happen as specific to shared conditions, adapting to the different performance sites).
The Audible EcoSystemics project reflects a practice meant less as a way of experiencing sound as a material entity (as if it were a sonic object to be developed and articulated according to independent musical plans) and more as a way of experiencing it as an event emerging from a specific performance environment and forged by related material conditions. Microphones, loudspeakers and places are structural constituents of the musical performance – not just functional, incidental components that might be replaced without audible side-effects. They provide specific affordances and thus exert some kind of agency. The complete performance ecosystem, integrating the work as a material dispositif (Baranski 2009), consists of a hybrid assemblage (of human, mechanical, electronic and software resources) that manifests itself as a situated network of sonic interactions (Di Scipio 2003, Waters 2007, Di Scipio 2020, Waters 2021). The kind of agency proper to such a composite dynamical whole might be called ‘interagency’ or rather ‘ecosystemic agency’ (Sanfilippo and Di Scipio 2017, Di Scipio 2021). Practising feedback as a recursive logic of ontological and relational relevance to performance may help foster a more materialistic and post-computational understanding of multi-agent musical systems.
Info and credits
Impulse Response Study excerpts
(Warsaw 2019) live rec, Novy Teater, Warsaw Autumn Festival
(Rome 2017) live rec, Teatro in Scatola
(Weimar 2008) live rec, Weimar, Kammermusiksaal, Ferenc Liszt
(Salerno 2012) live rec, Salerno, Conservatorio di Musica
(L'Aquila 2015) studio rec, L'Aquila
Background Noise Study
(Berlin 2005) live rec, Tesla/Podewilschen Palais, Inventionen Festival
(Rome 2008) live rec, Rome, Parco della Musica
Silence Study - Installation excerpts
(Aarhus 2016) live rec, Aarhus Kunsthal, SPOR Festival
Agostino Di Scipio: live electronics and computer processing (all tracks), claves (Warsaw 2019, Rome 2017)
Dario Sanfilippo: computer processing (L'Aquila 2015)
Federico Placidi: Serge analogue synthesizer, claves (Rome 2017)
LIST OF REFERENCES
Ashby, W. Ross. 1957. An introduction to Cybernetics, Chapman & Hall.
Atlan, Henri. 1972. “Du bruit comme principe d’auto-organisation”, Communications, n.18.
Aufermann, Knut, ed. 2002. "Feedback". Special issue, Resonance Magazine 9 (2).
Baranski, Sandrine. 2009. “Manières de créer des sons: l’oeuvre musicale versus le dispositif musical (expérimental, cybernétique ou complexe)”. DEMéter. Accessed 8 December 2021. http://demeter.revue.univ-lille3.fr/lodel9/index.php?id=260.
Bullock, Jamie. 2008. Implementing Audio Feature Extraction in Live Electronic Music. PhD, Birmingham City University.
Calimani, Riccardo, and Lepschy, Antonio. 1990. Feedback. Guida ai cicli di retroazione: dal controllo automatico al controllo biologico, Garzanti.
Clarke, Bruce, and Hansen, Mark, eds. 2009. Emergence and Embodiment: New Essays on Second-order Systems Theory, Duke University Press.
Di Scipio, Agostino. 2003. “Sound is the interface. From interactive to ecosystemic signal processing”. Organised Sound 8 (3): 269-277.
Di Scipio, Agostino. 2005. “Per una crisi del ‘live-electronics’. I am sitting in a room di Alvin Lucier”, Rivista di Analisi e Teoria Musicale 2.
Di Scipio, Agostino. 2008. “Émergence du son, son d’émergence”, Intellectica, special issue on Musique et cognition, Revue de l’Association pour la recherche cognitive 48/49.
Di Scipio, Agostino. 2011. “Listening to yourself through the Otherself: on Background Noise Study and other works”, Organised Sound, 16 (2).
Di Scipio, Agostino. 2020. "Qu’est-ce qui est ‘vivant’ dans la performance live electronics? Une perspective écosystémique des pratiques de création sonore et musicale." PhD thesis, EDESTA, University of Paris VIII.
Di Scipio, Agostino. 2021. “Thinking Liveness in Performance with Live Electronics. The Need for an Eco-systemic Notion of Agency”. In Sound Work. Composition as Critical Technical Practice, edited by Jonathan Impett. Orpheus Institut Ghent / Leuven University.
Dutilleux, Pierre, and Zölzer, Udo. 2020. “Nonlinear Processing”. In Digital Audio Effects, edited by Udo Zölzer. John Wiley & Sons.
Green, Owen. 2013. "User Serviceable Parts. Practice, Technology, Sociality and Method in Live Electronic Musicking." PhD thesis, City University.
Gottschalk, Jennie. 2016. Experimental Music Since 1970, Bloomsbury.
Hutchinson, Evelyn. 2003. “Turbulence as random stimulation of sense organs”. In Cybernetics. The Macy Conferences (1946-1953), edited by Claus Pias. Diaphanes. First published 1952 (Proceedings of the Macy Conferences).
Jutten, Christian. 2009. Théorie du signal, Université Joseph Fourier, Grenoble.
Lupone, Michelangelo, and Seno, Lorenzo. 2005. “Gran Cassa and the Adaptive Instrument Feed-Drum”, in Computer Music Modeling and Retrieval, edited by R. Kronland-Martinet, T. Voinier and S. Ystad. Springer.
Mathews, Max. 1969. The Technology of Computer Music. Cambridge, MA., MIT Press.
Maturana, Humberto, and Varela, Francisco. 1980. Autopoiesis and cognition. The realization of the living, Reidel Publ.
Melbye, Adam Pultz. 2021. “Resistance, Mastery, Agency: Improvising with the feedback-actuated augmented bass”, Organised Sound, 26 (1).
Mumma, Gordon. 2015. Cybersonic Arts: Adventures in American New Music, edited by Michelle Fillion. University of Illinois Press.
Oppenheim, Alan, and Schafer, Ronald. 1974. Digital Signal Processing, Prentice-Hall.
Peeters, Geoffroy. 2003. “A large set of audio features for sound description (similarity and classification) in the CUIDADO project”. Accessed 8 December 2021. http://recherche.ircam.fr/anasyn/peeters/ARTICLES/Peeters_2003_cuidadoaudiofeatures.pdf
Roads, Curtis. 1998. Computer Music Tutorial, MIT Press.
Roads, Curtis. 2001. Microsound, MIT Press.
Sanfilippo, Dario. 2019. "Generative Audio Systems: Musical Applications of Time-Varying Feedback Networks and Computational Aesthetics". PhD thesis, University of Edinburgh.
Sanfilippo, Dario, and Di Scipio, Agostino. 2017. “Environment-Mediated Coupling of Autonomous Sound-Generating Systems in Live Performance: An Overview of the Machine Milieu Project”, in Proceedings of the 14th Sound and Music Computing Conference, Espoo, Finland.
Sanfilippo, Dario, and Valle, Andrea. 2013. “Feedback Systems: An Analytical Framework”, Computer Music Journal 37 (2).
Surges, Gregory. 2015. Generative Audio Systems: Musical Applications of Time-Varying Feedback Networks and Computational Aesthetics. PhD thesis, University of California San Diego.
Van Eck, Cathy. 2017. Between Air and Electricity, Bloomsbury.
Verfaille, Vincent. 2003. Effets audionumériques adaptatifs: théorie, mise en oeuvre et usage en création musicale numérique, PhD Dissertation Université de Marseille.
von Foerster, Heinz. 2003. Understanding Understanding. Essays on Cybernetics and Cognition, Springer.
Waters, Simon. 2007. “Performance ecosystems. Ecological approaches to musical interaction”. Proceedings of the symposium Electroacoustic Music Studies Network, Leicester.
Waters, Simon, ed. 2011. "Performance Ecosystemics." Special issue, Organised Sound 16 (2).
Waters, Simon. 2021. “The entanglements which make instruments musical: Rediscovering sociality”. Journal of New Music Research 50 (2).
van Waterschoot, Toon, and Moonen, Marc. 2011. “Fifty Years of Acoustic Feedback Control: State of the Art and Future Challenges”, Proceedings IEEE 99 (2).
Widmer, Gerhard, Simon Dixon, Peter Knees, Elias Pampalk, and Tim Pohle. 2005. “From Sound to Sense via Feature Extraction and Machine Learning: Deriving High-Level Descriptors for Characterising Music”, in Sound to Sense. A State of the Art in Sound and Music Computing, edited by P. Polotti and D. Rocchesso, Logos Verlag.