At its core, Sinew0od could be described as a modular system where the player becomes part of a controlled feedback network. It is also, as proposed by Eldridge et al. , a self-resonating vibrotactile feedback instrument (SRI) that is “intimately sensitive to physical interaction by the player”, wherein the feedback is “the primary sound producing mechanism; the properties of the resonant materials colour the vibrations, influencing the resultant acoustic properties of the instrument” . Navigating the system, both physically, by moving around inside the network, and sonically, by using different playing techniques, the player modulates parameters of other modules, which in turn alters the affordances of the instrument module as well as the system as a whole. The piece has a traditionally written score, describing instrumental actions along a timeline, and incorporates a traditional musical instrument. However, the affordances of the modular system itself—with all its components and agencies—are equally important for the identity of the system. As a musical instrument it is “an epistemic tool: a designed tool with such a high degree of symbolic pertinence that it becomes a system of knowledge and thinking in its own terms” . In this case there are no clear boundaries between player and instrument, nor between system and piece.
Akin to both the hyperinstrument paradigm as described by Machover , cybernetics (see e.g. ) and the discourse within modular synthesis holding that a patch can be inseparable from a composition (see e.g. ), the studied piece shows how musical instruments can be radically altered by such systems. Besides the obvious timbral aspects, the system demands that the performer adapt and explore new playing techniques.
Treating the piece Sinew0od  as a modular system and transcribing it for bass clarinet effectively involves replacing the most complex module, i.e. the human–instrument entity. Consequently, the aim of this study is to gain a deeper understanding of the agencies within the system. By scrutinizing the adaptation process required of the musician performing the piece, and how the non-human agents in this specific system respond to transcription, another aim is to better understand the modular aspects of the feedback network. Are adjustments and rearrangements required in this new setting, and if so, why? How and why are other modules and their respective agencies affected by the new clarinet module?
This paper addresses these questions by describing the design of the two feedback systems and why adaptations were necessary in the transcription. A comparative qualitative study of the two versions of the piece was conducted. The study was built on video and audio recordings from workshops, and on interviews with the two performers. The methods used in this study will be further described in the Design of the study section below.
Traditionally, within western art music, transcription of music has been closely related to the translation of texts. In both cases an act of interpretation is required. However, there are also major differences, partly related to the temporal nature of music. For composers like Luciano Berio, and other practitioners of score-based compositional practices, the act of transcription—whether it is a simple form of copying or a more complex transformation, as in the case of Webern’s intricate transcription of Schönberg’s Chamber Symphony op. 9 for piano quintet—always adds something through the act of interpretation. In Berio’s own words, “[t]ranscription seems to get drawn into the very core of the formative process, taking joint and full responsibility for the structural definition of the work. It is not the sound that is being transcribed, therefore, but the idea”. But if the composer’s idea is not primarily represented by the notation, but is embodied in the resulting sound, or even, in cases where the composition to be transcribed has no notation, as in the case of Richard Karpen’s Strandlines (2008), the making of a transcription becomes a wholly different matter. When reflecting on such a process of transcription,  suggests that Elliptic (transcribed from Strandlines by Karpen, the JACK Quartet and The Six Tones), due to its foundation in performative practices and resulting sound, “was created by re-enacting the original working process” [12, p. 56]. Finally, to widen the notion of musical transcription beyond the field of score-based composition, we may look to related practices such as remix culture and recomposition, sonification studies, and functional translation theory , wherein recontextualization is a central feature. There we may find models for thinking of transcription as making something pre-existent comprehensible and relevant in a new situation.
In essence, most musical instruments incorporate feedback in some capacity. One example is the standing waves bouncing back and forth, reinforcing themselves inside a tube-shaped instrument, excited and modulated by the embouchure of the player . Another example is how the musician adapts to room acoustics , and this implies that a traditional instrument already extends outside of its body as a part of a larger feedback system.
There are several examples of instruments where feedback becomes a more active and enhanced agent in the system. One of the most well-known is the electric guitar, where the pickup’s position in space relative to the speaker cone of the amplifier is a vital affordance of the instrument. The so-called Larsen effect , where a pickup is fed into a speaker and the level is turned up until a positive feedback loop arises and the system starts to self-oscillate, has been exploited by musicians like Jimi Hendrix, Sonic Youth and others. Other famous explorations of feedback systems in music include Alvin Lucier’s I am sitting in a room , Steve Reich’s Pendulum Music  and several works by David Tudor (see e.g. ). In a recent example, Snyder, Erramilli and Mulshine explore the addition of a DSP-controlled feedback system to a trombone . Rather than trying to augment the acoustic trombone, their interest was in “how the resonances of the trombone would shape the feedback sound” . Incorporating both analog and digital technologies, the aim was to create a predictable feedback system with controllable pitch, utilizing a compressor and a bandpass filter in the feedback chain. The filter’s frequency is controlled by means of a sensor detecting the position of the trombone’s slide. Presented as a work in progress, the paper notes that feedback is not easily tamed, and that there are many parameters to control if stability in terms of pitch, dynamics and timbre is to be achieved.
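As a rough illustration, the general technique of a pitch-controlled feedback chain can be sketched digitally as a band-pass filter and a hard limiter (standing in for the compressor) inside a feedback path. This is a hypothetical sketch of the principle, not Snyder et al.’s actual implementation; the function names and all parameter values are illustrative:

```python
import math

def bandpass_coeffs(freq, q, sr):
    """Two-pole band-pass coefficients (constant 0 dB peak gain form)."""
    w0 = 2 * math.pi * freq / sr
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def feedback_loop(excitation, freq, gain=1.2, q=20.0, sr=48000):
    """Self-oscillating loop: band-pass filter plus hard limiter in the chain.

    `freq` stands in for the slide-position-to-frequency mapping; the hard
    limiter stands in for the compressor taming the loop gain.
    """
    b0, b1, b2, a1, a2 = bandpass_coeffs(freq, q, sr)
    x1 = x2 = y1 = y2 = 0.0   # filter state
    fb = 0.0                  # one-sample feedback delay
    out = []
    for x in excitation:
        s = x + gain * fb                       # mic signal + loop feedback
        y = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, s
        y2, y1 = y1, y
        fb = max(-1.0, min(1.0, y))             # hard limiter ("compressor")
        out.append(fb)
    return out

# exciting the loop with a short impulse makes it ring near `freq`
ring = feedback_loop([1.0] + [0.0] * 999, freq=440.0)
```

Because the loop gain at resonance exceeds unity, the filtered signal grows until the limiter clamps it, which is the essence of making feedback pitch-controllable.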
The concept of no-input mixing (see e.g. ) is also interesting in regard to this study, not only because it was used during the early Sinew0od workshops (further described below) but also because it resembles a performance ecosystem as described by Simon Waters (e.g. ). Here, direct connections are made from the mixer’s outputs (e.g. auxiliary, monitor and headphone outputs) back into its inputs, effectively creating oscillations within the system by means of feedback. The user interface, sonic affordances and relative lack of reproducibility of such instruments resemble those of a small modular synthesizer. Interestingly, due to the nonlinearities of cheaper circuits, there is a correlation between lower-quality mixers and wider timbral ranges. Used as a module in a larger system, a no-input mixer allows its affordances to be shared.
Philip Alperson uses the “commonsense view [that] musical instruments are devices that performers use to make music” [22, p. 38] as a starting point in his essay The Instrumentality of Music, but quickly moves on to show that those devices cannot simply be thought of as pure material objects. The complex network of relationships that a musical instrument comprises is of course also part of the musicking as a whole, thoroughly scrutinized by Christopher Small . Everything, from the luthier’s carefully selected pieces of wood to the timbral aesthetics, possibly with certain existing pieces and types of musicking in mind, is to be taken into account. Or, as Thor Magnusson formulates it in his book Sonic Writing: “The musical theory of each musical culture is written into the functional body of the instrument itself. The instrument is concretised music theory.” [24, p. 5].
When discussing the ontology of musical instruments we could also consider applying Bergson’s ideas of knowledge as being either relative, where we study the object from an outside angle, or absolute, where “we enter into it” . Alperson’s notion of instruments as embodied entities could indeed be thought of as objects entered into. Furthermore, this means that a device made with an intention of future musicking is not in itself enough to be considered a musical instrument. We also need the transformation that occurs when someone actually plays it in a specific musicking context. In this moment the material object ceases to be a device and becomes an actual instrument. This often becomes obvious when comparing different performers’ live electronic setups. A unique set of hardware and software modules, interconnected in a very specific way and intended to be performed on by one and the same player only, might not even make sense as an instrument to another musician.
Tying uniquely defined setups, including attitudes to musical instrumentality, to specific musicking situations (e.g. players and pieces) is of course not exclusive to the realm of live electronics. The composer Helmut Lachenmann referred to his compositions as “musique concrète instrumentale” , a way of transforming Pierre Schaeffer’s idea of musique concrète to the western classical instruments. He says:
[T]he sound events are chosen and organized so that the manner in which they are generated is at least as important as the resultant acoustic qualities themselves. Consequently, those qualities, such as timbre, volume, etc., do not produce sounds for their own sake, but describe or denote the concrete situation: listening, you hear the conditions under which a sound- or noise-action is carried out, you hear what materials and energies are involved and what resistance is encountered. 
Indeed, Schaeffer states that ”[e]very device that makes it possible to obtain a varied collection of sound objects—or of varying sound objects—while keeping us aware of the permanence of a cause, is a musical instrument, in the traditional sense of an experience common to all civilizations.” 
Building further upon Philip Alperson’s philosophical organology , Deniz Peters  observes how instrumentality can be distributed among several performers that “may together form a single voice” . This new voice could then “acquire its own instrumental agency” . Utilizing contact microphones and transducer speakers attached to the instruments’ bodies, together with fishing wires tying Peters’s piano and Bennett Hogg’s violin strings together with Sabine Vogel’s flutes, he describes an intricate network, stimulated by both human and non-human agencies. The closed, sensitive circuit between objects, electronics and players described here could be understood as a modular system where the different modules in the patch inform and extend each other. In such systems the patch itself should be regarded as the most essential part, since this is what unlocks the agencies of the respective modules.
On the Internet forum Modwiggler.com there are several threads discussing modular synthesizers and patching techniques deriving from the field of cybernetics (see e.g. ). In the thread “Cybernetics and AI with Serge” , started in September 2020, the user ‘mfaraday’ posted a series of tutorials from his YouTube channel La Synthèse Humaine  on how to work with these concepts, using a small Serge modular system . The channel contains musician Gunnar Haslam’s explorations of simple feedback patches as a basis for cybernetic systems. Introducing the concept of neurons to build neural networks within the modular synth, Haslam points out that the goal here is to explore a cybernetic system rather than to create something predictable, not far from the aims of the complex, dynamically mapped instruments and algorithms described by Palle Dahlstedt (e.g. ) and the Dirty Electronics practice of John Richards (e.g. ). Human agency is crucial as input to the systems in all these cases, or, in Haslam’s words, “[try not] to use the most efficient way to use this, but to use your humanity to guide you”.
One way to conclude the above reasoning would be that all musical instruments embody a set of shared human and non-human agencies, enabled by the activity of musicking. The Sinew0od system is an attempt to manifest these qualities.
To gain a deeper understanding of the agencies within the Sinew0od system and how they affect the musicking from the musicians’ perspective, a series of filmed workshops with discussions and interviews was conducted. For the purpose of this study, and to enable comparison between the first version for Paetzold contrabass recorder  and the transcribed version for bass clarinet , a series of such sessions with Anna Petrini was conducted by Petersson in December 2019 at the Royal College of Music in Stockholm (KMH). These workshops comprised fine-tuning of the system and a soundcheck, similar to the preparations needed before a concert, followed by a couple of run-throughs and discussions. On December 19, 2019, an interview with Petrini  was conducted. In parallel, filmed workshops, a presentation and a laboratory demo of the system with Robert Ek were held as part of the transcription process. For Petrini this amounted to a rediscovery of a piece she had performed many times and knew well, while for Ek these early workshops were part of his initial explorations of the system. Several additional meetings and rehearsals with Ek were held during the transcription process until the first performance of the clarinet version on April 26, 2021 . To enable a more fine-grained comparison between the two musicians' experiences of working with the system, an additional filmed workshop and interview with Ek was conducted on January 3, 2022 .
The entire body of collected data was used for a comparative, observational analysis of both audio and video, in order to obtain a basic understanding of the similarities and differences between the affordances of the two interactive systems.
The first version of Sinew0od was composed for Paetzold contrabass recorder in 2008 and premiered by Anna Petrini at Casa da Musica in Porto, Portugal, in December the same year. Due to an intermittent bug in the audio interface used, discovered during the (much too short) line check just before the concert, the system did not work as intended during this performance. Even if this obviously was considered a failure at the time, several important lessons were learned about the fragility and complexity of the system. Although this initial experience left Petrini hesitant to perform inside such an apparently very risky performance ecosystem again, other more successful performances followed, and in 2011 the piece was recorded for her album Crepusculo .
The composition process involved several workshops, and started out with experiments, attaching small speakers and microphones to the Paetzold and creating feedback through a small analog mixer. Different speakers were tested during the process, and due to the inherent loudness of electroacoustic feedback, a decision was made to use two small speakers, incapable of high sound pressure levels. The cheap, battery-powered Philips SBA-1500 portable speakers (shown in fig. 2) turned out to color the sound in interesting ways when overdriven (which, due to the feedback, they always were). The form factor of these speakers also allowed both for attachment to a microphone stand clamp and for hand-held use. After some experiments with different positions, one speaker was mounted on a stand, in the air, pointing towards the labium of the Paetzold, approximately 40 cm from the instrument’s default position during playing. The other speaker was placed close to the floor, taped to the bottom hole of the instrument with a small gap to allow the air to pass through (see fig. 2).
A small analog mixer was used to facilitate microphone pre-amplification as well as level control for the speakers. To avoid the most piercing frequencies in the feedback, the mixer’s built-in three-band equalizer was utilized to cut the highs and boost the lows. Further, trying to tune the feedback and make it more predictable, some internal feedback connections in the style of no-input mixing were tested. By carefully adjusting the equalizer and level knobs, a fundamental pitch could be set that forced the system to stay within a more specific “key”. While this allowed for some stability during that specific session, it turned out to be very difficult to reproduce with precision for the next, due to the nature of feedback and the non-linearities hidden in the speakers, the mixer, the microphone and speaker placement, and several other parameters affecting it. The no-input mixer approach was abandoned and instead, to facilitate reproducibility, a computer running Max/MSP was used to mix drones into the system, thus stabilizing it to a certain extent.
The Max/MSP patch sends the output of two independent oscillators as drones on discrete channels, feeding the air and floor speakers respectively. Simple FM with cross modulation between the two oscillators was used for the synthesis, and a MIDI switch pedal facilitated stepping through different pitches, as well as modulation indices, in a predetermined list. The two oscillators’ frequencies follow the cantus firmus used in the score (see fig. 3), both in original and retrograde form; however, the floor speaker drone is not always active. A volume pedal between the sound card and the mixer allowed for controlling the level of these drones. In practice, the volume pedal worked in conjunction with the microphone’s distance to the air speaker as a way to play with the chaotic properties of the system; fading in the drone basically causes a transition from more to less chaos in the system. The Max/MSP patch also contains a pre-recorded drone composed of samples of the composer’s breathing and spectral convolutions between different Paetzold recordings made with Petrini during the workshops. These sounds are routed directly to the main PA system and grow more intense with each press of the switch pedal. The artistic idea was to provide an inside–outside perspective on the instrument (i.e. the SRI) and, since the feedback system is also amplified through the same PA, to create a sense that the instrument grows during the piece and eventually fills the entire room.
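The drone synthesis could be sketched roughly as follows. This is a minimal Python approximation of cross-modulated FM between two oscillators, not the actual Max/MSP patch; the frequencies, modulation indices and pedal list are hypothetical values for illustration only:

```python
import numpy as np

def crossfm_drone(f1, f2, index, dur=1.0, sr=48000):
    """Two sine oscillators that frequency-modulate each other.

    Returns one signal per speaker channel (air and floor).
    With index = 0 the oscillators are plain sines.
    """
    n = int(dur * sr)
    out1, out2 = np.zeros(n), np.zeros(n)
    p1 = p2 = 0.0      # oscillator phases
    y1 = y2 = 0.0      # previous outputs, used as modulators
    for i in range(n):
        # each oscillator's frequency is offset by the other's last output
        p1 += 2 * np.pi * f1 * (1 + index * y2) / sr
        p2 += 2 * np.pi * f2 * (1 + index * y1) / sr
        y1, y2 = np.sin(p1), np.sin(p2)
        out1[i], out2[i] = y1, y2
    return out1, out2

# a pedal press would step to the next (pitch, index) pair in a list
steps = [(110.0, 0.2), (130.8, 0.6), (98.0, 1.0)]  # hypothetical values
air, floor = crossfm_drone(steps[0][0], steps[0][0] * 1.5, steps[0][1], dur=0.1)
```

Because the modulation is mutual, even small indices push the pair away from a simple harmonic spectrum, which is what makes such a drone useful for steering a chaotic feedback system.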
During the early workshops, Petrini and Petersson also explored different playing techniques on the Paetzold. The most spectacular finding was of course how the self-resonating air column enabled a whole new range of sounds from the instrument. Pressing the keys without blowing allowed for pitch changes, and leaning towards or away from the air speaker could bend the notes. Different blowing techniques could then modulate the frequency spectrum of these notes. Some notes were also choked by the feedback, but this could often, though not always, be balanced with a slight change of the volume pedal, a change of position in relation to the air speaker, or a change of level on the mixer. The latter would be difficult to adjust during the performance because the player needs to keep her hands on the instrument. The initial gain, EQ and level settings therefore needed careful consideration during soundcheck, and the only level touched during the finished version of the piece is the main out. Hence, the piece starts with the volume pedal at maximum and the mixer’s main out level at minimum. By slowly turning up the main level, effectively fading in the drone, until the feedback starts to become noticeable and causes a certain kind of distortion, memorized by ear, the piece can move on.
Multiphonics turned out to have a particularly interesting effect on the feedback, with some similarities to the effect of the volume pedal. The harmonic content and the loudness of the multiphonics seemed to have a controlling effect on the system. Four effective and easily reproducible multiphonics were selected, and the tonal material for the entire score was derived from their fundamentals. With these multiphonics it was also possible to gradually fade the full chord in or out from or to their respective fundamentals. Petrini also discovered that, when playing regular notes, her intonation relative to the pitch of the feedback could cause unexpected timbral beating. Since we agreed that this was an interesting affordance of the system, a variable-speed vibrato was added to the score as an ad libitum element.
The structure of the score goes through a process from silently pressing keys to alter the feedback to playing the same notes in a regular fashion. During the piece, the process is articulated by the previously mentioned multiphonics and varying “airiness” in the blowing.
At one point during the workshops, Petrini instinctively tried to silence a particularly uncomfortable and piercing frequency by covering the microphone with her hand. We then found that the system could be drastically altered by doing this, and the position of the hand as well as the angle and distance to the speaker were found to be an alternative way of interacting within the system. These findings were incorporated into the score as fermata bars, interrupting the previously described process at seven points where the last one is at the end of the piece.
The fundamental parts of the system described above, including the basic formal structure of the score, were also used during the transcription process, although several important changes and adaptations had to be made for the bass clarinet version.
The transcription process consisted of several workshops and rehearsal sessions, followed by a first public performance at the Royal College of Music on April 26, 2021 . As a starting point the bass clarinet was chosen, being close in size and register to the Paetzold. The same small speakers (Philips SBA-1500) were used. Two different microphones were tested, one at a time and together: a DPA-4099, for its sound quality, and a Rumberger K1. The latter is a piezo microphone that is mounted inside the instrument, on both the Bb clarinet and the bass clarinet.
Unsatisfied with the lack of response in the system, we decided to switch to the Bb clarinet instead, using the same setup and both microphones. Mounting both speakers on a stand and the DPA on the bell made it possible to interact more effectively within the feedback field (see fig. 4). The initial focus was to make the transcribed version as close to the original as possible in terms of musical functions and how the basic elements were organized. For example, we tried to find multiphonics with qualities similar to those of the original, to no avail. With the Bb clarinet we also quickly realized we had an unbalanced system in which the clarinet could too easily overpower the other parts: even a soft note could cause a complete cancellation of the feedback.
A more traversable path was found when returning to the bass clarinet, this time putting the DPA inside the neck of the instrument using the mount intended for the K1 microphone. The high sensitivity of the DPA resulted in a much more responsive system, similar to the original in behavior, yet unique in timbre. The air speaker was moved close to the microphone and the floor speaker was mounted directly on the bell to facilitate as much feedback as possible. Exploring this system, we found that the volume pedal’s effect was insignificant compared to, e.g., the proximity to the air speaker and other parameters, and it was eventually removed from the system. Investigating the register to find the most responsive pitches, especially for the silent key presses, resulted in a transposition of the score a minor third up. In addition to this, the same Max patch was used, but here the drones were transposed a major sixth down, which, due to the frequency response of the small speakers, had quite a radical effect.
Selected parts of the filmed interviews with Petrini  and Ek  are presented below, edited together for comprehensibility. Selections were made thematically, based on what we agreed on as being the most significant features of the piece. Those features gradually stood out as important during the workshops with Petersson and Ek, and laid the foundation for a comparative analysis carried out by the two authors. The basis for both interviews and discussions was how Petrini and Ek related to their respective instruments within the Sinew0od system, their experiences of performing as part of it, and their embodied knowledge and understanding of the system as a whole. During the interviews they both stress that the affordances of their instruments change within this system. Anna says that “the instrument works in a different way. The prevailing rules are not there anymore. Therefore, you need to start by [re]mapping the instrument” . Further, she exemplifies this with the most obvious fact: the instrument makes a sound all the time, even without blowing, and she can temporarily cancel out the feedback by playing certain notes, though as soon as she stops it returns. Similar observations are made by Ek, but, seemingly due to the physical properties of the clarinet, there is a much longer recovery time for the feedback, almost as if the feedback has to take a breath before it can start again.
The particular affordances of power and recovery time of the bass clarinet within the feedback system seem to constitute the main differences between the original version and the transcription. Petrini, for her part, notes a new kind of resistance to overcome within the instrument, greater than usual. She says: “I need to go inside and disturb those vibrations within the instrument, and I can feel them physically.” . The constant sound of the instrument also leads to a different experience of breathing. Usually, breathing corresponds to natural pauses in a piece, but in this case, even though there are rests in the score, there will not be silence at those points due to the feedback. “If I take a breath, I leave a gap for the feedback” , Petrini states. This fact also led to a discussion around notation and how rests, as well as most other symbols in this score, actually do not tell much about how it sounds. The score for Sinew0od  should thus be regarded as a hybrid of a set of instructions for actions to perform within the system and a description of the musical results to aim for. While this is also true for the transcribed version, the bass clarinet is a much more powerful agent in the feedback circuit. Here, “the feedback gets canceled out by the clarinet” . Hence, playing the score as written, with the system unaltered, yielded less interesting results, and the formerly complex feedback agent became more of a passive distortion box.
Besides transposing the score to the most effective range for the new instrument, many of the original’s articulations, dynamics and suggestions for interpretation had to be changed by Ek. As mentioned earlier, a custom bass clarinet neck with a hole intended for a specific microphone (the Rumberger K1) was used. The construction demands that the hole be completely sealed for the instrument to work in the expected way; otherwise, the leakage makes regular notes impossible to play. Since we used it with a DPA microphone instead of the original piezo, we used adhesive tape to cover the hole. As an unexpected feature, we discovered that this tape could be used to compensate for the instrument’s previously mentioned ability to outpower the feedback. By leaving a tiny gap open, the tape could be used to balance the system and adjust the effective range of playability within it, significantly offsetting the threshold at which the clarinet became too powerful an agent in the feedback network.
The two instruments behave differently with regard to how the feedback affects their resonating bodies. In all flutes the sound derives from the oscillation of an air column inside the instrument. The movement of the air particles is induced by the air blown across the edge of the labium. This movement propagates along the tube down to the open end at the foot joint, where it is reflected. There is no airflow inside the flute; the only thing moving inside the tube is the pressure wave, and the blown air only activates the vibration of the air . Applying a feedback system to such an instrument disturbs the oscillating air column, making playing in a traditional way virtually impossible. This was the reason for adding a volume pedal to the Paetzold setup: it enabled adjustment of the amount of feedback modulation injected into the resonant body of the instrument. In a clarinet, on the other hand, one of the ends is closed and the reed vibrations make the air column oscillate. This aspect, in combination with dynamic differences, seems to be the reason why the clarinet remains a clarinet despite the feedback, and easily kills it, while the Paetzold instead becomes the weaker part of the system. In discussing how the system works technically, Petrini describes how the volume pedal was not very intuitive to start with and something she “had to get acquainted with” . In the original version this pedal functioned as an attenuator controlling how much of the steering drone from the Max patch was mixed into the feedback. As part of the composition, the drone is never completely silent during the piece and its dynamics follow the multiphonics. During the experimental sessions within the transcription process, the volume pedal was placed at different positions in the signal chain. At one point it was accidentally connected to the master out of the mixer instead of to the output of the Max patch.
While this obviously allowed for a more radical control of the feedback system (i.e. from feedback to no feedback at all), it also made Petersson and Ek realize that the pedal had quite a small effect on the system as a whole when reinserted in its intended position. The clarinet behaved similarly regardless of the volume of the mixed-in drone, as long as it was at all apparent in the mix. The above-mentioned tape on the neck microphone, for example, was a much more significant parameter than this pedal. Hence, a decision was made to remove it from this version of the system.
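The acoustic asymmetry noted above, between the flute-like open tube and the clarinet-like closed tube, can be illustrated with the standard idealized resonator formulas. This sketch ignores end corrections, bore shape and register holes, so the numbers are only indicative:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def open_tube_fundamental(length_m):
    """Idealized open-open tube (flute-like): f = c / 2L, all harmonics present."""
    return SPEED_OF_SOUND / (2 * length_m)

def closed_tube_fundamental(length_m):
    """Idealized closed-open tube (clarinet-like): f = c / 4L, odd harmonics only."""
    return SPEED_OF_SOUND / (4 * length_m)

# for the same 1 m tube, the closed pipe sounds an octave lower
f_open = open_tube_fundamental(1.0)      # 171.5 Hz
f_closed = closed_tube_fundamental(1.0)  # 85.75 Hz
```

The closed tube’s odd-harmonic series and reed-driven excitation give the clarinet a different relationship to injected vibrations than the flute’s freely oscillating air column, consistent with the behavioral differences observed in the two versions of the system.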
To illustrate how her instrument works completely differently with and without the feedback, Anna plays one of the multiphonics as an example. Fading in the drone from the Max patch (using the volume pedal) makes the difference smaller. On the Paetzold contrabass recorder there are not many effective (and easily reproducible) multiphonics to work with. The four selected for the original piece were the most easily achievable on Petrini's instrument. Since those formed the basis for the cantus firmus (see fig. 3) on which the whole piece was based, Petersson and Ek initially tried to find similar multiphonics on the bass clarinet. In contrast to the Paetzold, the clarinet has quite a large repertoire of more easily achieved multiphonics. However, they also sound quite different, usually with a much clearer tonality: a strong fundamental and only one dominant overtone. This, in combination with the aforementioned ability to outpower the feedback, led to less effective results in the system. Hence, a functional approach to the transcription was used for these multiphonics. The two musical gestures of the original were swells from a fundamental pitch to a complex chord, or from the full chord back to the fundamental. Effectively, these were timbral crescendi or diminuendi, distorting the partials in the feedback sound. In the bass clarinet transcription those gestures were replaced by the physical gesture of leaning towards or away from the air speaker. Even though the sonic qualities of the two versions were quite different, the musical function had convincing similarities. It also added a gestural, embodied quality that seemed idiomatic to the bass clarinet.
When discussing the speaker positions, it is clear that the air speaker has a more active role in both performers' minds. In the case of the Paetzold this is also true technically: the lower speaker has a less significant effect on the behavior of the system as a whole than in the clarinet version. It was kept in the system for the spatial effect that occurs when the Max patch drone moves between the different speakers rather than for its actual effect on the feedback. On the bass clarinet the lower speaker was mounted as an active mute pointing straight into the bell. The closed tube and the microphone mounted inside the neck enabled a much more active agency in the system. Not only could the lower speaker drone steer the feedback just as the air speaker could, the drone was also bi-directionally colored by the tube resonance.
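The coloring of the drone by the tube resonance can be illustrated with a toy model: a closed tube approximated by a feedback comb filter, which boosts drone components near the tube's resonant frequencies and attenuates those in between. The sketch below is a minimal illustration under simplifying assumptions (idealized reflection, sample rate and tube length chosen only for demonstration); it is not the actual Sinew0od signal chain, and all parameter names are our own.

```python
import numpy as np

def tube_color(drone, sr=16000, round_trip_s=0.004, reflect=0.9):
    """Toy sketch: a tube approximated by a feedback comb filter.
    A drone injected at the bell is colored by resonances falling at
    multiples of the inverse round-trip time (here 1/0.004 s = 250 Hz).
    All values are illustrative assumptions, not measured parameters."""
    delay = max(1, int(sr * round_trip_s))  # round-trip delay in samples
    buf = np.zeros(delay)                   # circular delay line
    out = np.empty_like(drone)
    idx = 0
    for n, x in enumerate(drone):
        y = x + reflect * buf[idx]          # input plus reflected wave
        buf[idx] = y                        # store for the next round trip
        out[n] = y
        idx = (idx + 1) % delay
    return out
```

Feeding a sine at 250 Hz (on-resonance for these toy values) through this filter yields roughly an order of magnitude more output energy than a sine at 125 Hz, mimicking how the tube favours certain drone partials over others.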
Describing the fermata bars in the score, Petrini explains how she can play with both the distance to the speaker and with fine movements of the hand cupped around the microphone. She states: “here, I don’t really feel like I’m in control”, but agrees that the behavior is predictable on an overarching level and that it feels like a little improvisation with the system as a co-player. Since the microphone is already inside the instrument in the transcribed version, covering it with the hand did not have any significant effect on the feedback. Instead, Ek discovered that closing the tube entirely by pressing all keys caused an unexpected noise. We consider this another functional transcription of the original's action of covering the microphone.
In this project we have explored how a complex modular feedback system evolves and how its affordances change through transcription. It is clear that any musician bringing their instrument and embodied knowledge into such a system needs to adapt: not only must they adjust their instrumental technique, they must also adjust to a performance situation that questions where the instrument begins and ends. The shared instrumentality within the Sinew0od system opens new affordances for the musician to explore, but, just as observed by Berio above, the transcription itself indeed “seems to get drawn into the very core of the formative process”, since the new instrument also changes the behavior of the non-human agents in the system. During the transcription process the feedback network’s agency forced the performer to gain new embodied knowledge and to re-evaluate his usual interactions with the instrument, in order to transcribe the idea and behavior of the self-resonating instrument rather than its mere sonic qualities. This goes well beyond the traditional notion of musical interpretation and could possibly lead to an even deeper understanding of some of the instrument's previously hidden qualities.
The alteration of the bass clarinet, as described in this paper, changes its affordances, forcing the musician to become aware of new ways of interacting with the instrument. During performance, an expert instrumentalist is rarely aware of their instrument and how they interact with it. It is ready-to-hand, giving more freedom to focus on making music. This long process of learning an instrument is strongly connected to the cultural history of the instrument and its place, in this case, in the western classical tradition. And although sought after, this state sometimes inhibits the intellectual process of rethinking how we interact with our instrument and of discovering new affordances. To describe this phenomenon Jonathan De Souza refers to Heidegger: “But sometimes my hammer breaks. I stop. I look at the tool. Suddenly this thing demands my attention. Instead of being handy (zuhanden), the broken hammer is ‘present-to-hand’ (vorhanden).” An altered instrument, or the use of unconventional playing techniques, pushes the performer to invent a new musical language that embraces the entire sound world.
The Sinew0od system itself could of course be understood either in a more traditional hyper-instrumental way, where the instrument is extended with a feedback system, or (perhaps less traditionally) as a feedback system extended by, or modulated by, an instrument. However, we argue that in order to analyze the Sinew0od system, a modular and cybernetic approach is better suited to understanding the human and non-human signal flows. The analogy of a modular synthesizer system, with the feedback circuit as a patch, helps to uncover the most interesting and defining properties of this piece: how the different modules inform and create affordances for each other, and why altering the patch would change the foundation of the piece. The transcription process has involved striving for similarities in the behavior of the two studied systems rather than a note-by-note representation of the original version. As an interactive system and a composed piece, the system needs a certain balance between the predictable and the unpredictable, and it is in this balance we find the core of the work. Without it, the system can of course serve as an interesting ecosystem for improvisation, but if we aim for reproducibility in terms of timbral, harmonic, melodic and rhythmic structures, this balance is a requirement.
When dealing with SRIs, Eldridge et al. “find[s] a commonly reported theme of having to dialogue with the instrument” and relate this to the blurred lines between composers and performers, instruments, pieces and lutherie. Gorton and Östersjö similarly describe this as a process of negotiation that enables the formation of voice. In the Sinew0od system the instrumentality is shared among its modules, the negotiation happens between human and non-human agents, and it is not clear where the system ends and the composition begins.
The Sinew0od system has indeed opened up further sonic experimentation ever since its origin in 2008. The transcription described in this article has led to an exploration of the system as a modular abstraction, to be developed further in the future. Fig. 6 shows a draft of a generalized model of the system as it currently stands.
As with all new knowledge, the embodied understanding gained by the performers has the potential to influence the interpretation and adaptation of future works, especially music that contains active live electronics, where the human-instrument entity is one part of a system that responds and reacts to the performer's intentions.
So far this model has been tested as a digital musical instrument, using simple physical models of tube and string instruments in combination with gestural controllers, with voice and synthesizers as exciters. By feeding the resonance of these models back into themselves, colored by a virtual speaker in a virtual room by means of convolution, similar, or at least comparably interesting, behavior can be studied. The possibilities for creating new works using this framework seem promising. Experiments with a sensor-fitted clarinet bell [44, 45] as both gestural input for parametric control and exciter for the physical models are also in preparation.
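The structure of such a framework can be sketched in a few lines: a Karplus-Strong string model whose output passes through a crude virtual speaker/room (here a short decaying impulse response applied as an FIR filter) before a virtual microphone feeds it back into the string, with a tanh saturation keeping the loop stable. This is only an illustrative sketch under our own assumptions (all function names, parameters and values are hypothetical), not the actual implementation described above.

```python
import numpy as np

def feedback_string(sr=16000, dur=1.0, f0=110.0, fb_gain=0.15, seed=0):
    """Toy self-resonating string: Karplus-Strong delay line -> virtual
    speaker/room (decaying noise impulse response, direct-form FIR) ->
    virtual microphone -> back into the string, with tanh saturation.
    All values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    delay = int(sr / f0)                      # string delay-line length
    line = rng.uniform(-1, 1, delay)          # initial noise burst (pluck)
    ir = rng.uniform(-1, 1, 256) * np.exp(-np.linspace(0, 6, 256))
    ir /= np.abs(ir).sum()                    # normalize the room response
    room = np.zeros(len(ir))                  # FIR filter state
    out = np.zeros(int(sr * dur))
    idx = 0
    for n in range(len(out)):
        s = line[idx]
        room[1:] = room[:-1]                  # shift room state and
        room[0] = s                           # inject the string output
        mic = float(room @ ir)                # virtual microphone signal
        # averaging lowpass (string damping) + acoustic feedback,
        # soft-clipped so the loop cannot blow up
        line[idx] = np.tanh(0.5 * (s + line[(idx + 1) % delay]) + fb_gain * mic)
        out[n] = s
        idx = (idx + 1) % delay
    return out
```

Raising `fb_gain` shifts the behavior from a plucked decay towards sustained self-resonance, which is the kind of balance between the predictable and the unpredictable discussed above.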
The authors would like to thank Anna Petrini for her valuable time and for sharing her profound knowledge, both about music interpretation in general and about this piece in particular.