In our artistic research project “Spirits in Complexity – Making kin with experimental music systems”1, we explore the functional and affective relationships of humans and technological objects in the context of experimental music-making – involving acoustic instruments, but also other technical devices such as recording equipment, synthesizers and even software. These working relationships between human and non-human actors can take the form of a kinship relationship, but can also be of a negotiating or even confrontational nature, or take on ritual and spiritual forms. We use the titular term ‘spirits’ in a metaphorical sense to refer to an opaque complexity that characterizes human and non-human interaction in the artistic-creative context. Furthermore, the ‘spirits’ stand for the self-will of complex dynamic systems that resist purposeful control and demand attentiveness and a kind of respect from people in order to make interaction possible in the first place.
In contemporary music-making, objects of artistic practice are more often micro-mechanical/electronic/algorithmic technologies than handmade materials. Their complexity often exceeds the level at which technology is understood by most people in its full depth – in other words, we are dealing with black box systems. This is particularly evident in technologies based on machine learning (AI), which currently relies on so-called deep neural networks loosely modeled on the brain. As a consequence, these models are as opaque as the brain itself, diffusing whatever information they have learned from input data in a way that is exceedingly difficult to decipher (Castelvecchi 2016). A wide variety of explanation methods have been developed to explain the inner workings of deep neural networks, usually by approximating the full complex system with simpler, often linear, alternatives. However, the veracity of such explanations is still in doubt, especially for AI models trained on music (Praher et al. 2021; Hoedt et al. 2023).
For the context of our artistic work it is fruitful to review earlier work in cybernetics, which can be seen as the predecessor of AI. The cybernetics pioneer Heinz von Foerster (Von Foerster 2003) proposed a distinction between trivial machines, which allow unambiguous prediction of their output from their input, and non-trivial machines, where this is not possible. Note that the important aspect is how the machine appears to a person interacting with it, not so much how it is actually built and functions. This concept has already been applied to music-making (Grüny 2022), showing that even a ‘no-input mixer’, directly connecting its input to its output, is not fully controllable due to self-reinforcing interferences inside the mixer. Black box instruments can generally be seen as non-trivial machines, no matter whether they are analog devices, physical modelling systems or AI-driven synthesis algorithms. Not all of the instruments used in our performance are truly black, though – various shades of gray can be noticed.
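Von Foerster’s distinction can be illustrated with a minimal code sketch (our own illustrative example, not drawn from the cited literature): a trivial machine is a fixed input-output mapping, whereas a non-trivial machine carries a hidden internal state, so that identical inputs yield changing outputs.

```python
def trivial_machine(x):
    # Trivial machine: a fixed input-output mapping; the same input
    # always yields the same output, so behavior is fully predictable.
    return 2 * x


class NonTrivialMachine:
    # Non-trivial machine: a hidden internal state changes with every
    # input, so identical inputs can produce different outputs over time.
    def __init__(self):
        self._state = 1  # invisible to the person interacting

    def step(self, x):
        y = x * self._state
        self._state = (self._state + x) % 7  # undisclosed state transition
        return y


machine = NonTrivialMachine()
outputs = [machine.step(3) for _ in range(4)]  # same input every time
```

The state transition here is arbitrary; the point is only that an observer restricted to inputs and outputs cannot unambiguously predict such a machine’s behavior.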
Non-trivial machines are also characterized by the fact that it is not possible to deal with them in an exclusively functional way during a performance. They are non-transparent and their inner workings are unavailable. They generally do not behave as one would expect or anticipate and they force us as performers to engage with them, to be affected and to be open to entering into resonance. Resonance describes response relationships between two or more entities (both people and things such as technological artifacts) with a specific quality. Resonance occurs when the “entities of the relationship touch each other in a vibrating medium (or resonance space) in such a way that they can be understood as responding to each other, but at the same time also speaking with their own voice, i.e., as ‘sounding back’” (Rosa 2016, 285, translated by the authors).
The prerequisite for being able to build a successful or resonant relationship with non-trivial instruments is openness in the form of an active resonance orientation or attitude. This means engaging in a risky process in which we cannot say exactly whether it will succeed or what the outcome will be. This process is always open-ended. A responsive resonance relationship is therefore not a linear one, but a complex reciprocal relationship and consequently also implies a potential moment of alienation or failure. Rosa identifies unavailability (‘Unverfügbarkeit’) as a further prerequisite for resonance: “Resonance requires an unavailability that ‘speaks’; it is more than just contingency” (Rosa 2019, 56, translated by the authors): “Things that we have at our complete disposal lose their resonant quality. Resonance therefore implies semi-availability” (‘Halbverfügbarkeit’) (Rosa 2019, 52). A complete unavailability in dealing with a non-trivial instrument – i.e., the instrument eludes the performer on all levels or is exclusively erratic or arbitrary – thus also does not allow any resonance relationship. Resonance relationships are characterized by constitutive un- or semi-availabilities. On the one hand, resonance cannot be forced; on the other hand, it is impossible to predict what the result of a resonance process and the associated transformation of the participants will be.
Unavailability does not simply mean contingency, but a qualified unavailability; the non-trivial instrument therefore has a responsive character in the sense that it challenges and invokes us. This responsive or affirmative moment can become apparent in a Black Box Music performance. We understand every such performance as an encounter with human and non-human others, not as a functional, instrumental unidirectional echo relationship but rather as a performative relation of making kin. Contrary to Rosa’s view (Pfleiderer and Rosa 2020, 19), we perceive non-trivial machines or complex technological music instruments as partners with a life of their own.
At the Speculative Sound Synthesis Symposium, we performed a 25-minute musical improvisation involving black box electronic music systems, each created or selected by one team member but played by another team member who was unaware of its inner workings.
The technologies involved as non-human partners were vintage analog systems, physical modelling systems and AI-driven synthesis based on a sample corpus. The sound aesthetics of all four technologies more or less drew on sketchy/noisy AM/FM radio reception in a speculative extrapolation of Thomas A. Edison’s alleged ‘Spirit Phone’ (see Lautour 2015).
The improvised performance was not entirely ‘at first sight’: the instruments were introduced to the players beforehand but were not rehearsed, in order to retain the opaque character of the situation. Given the complexity and a certain unpredictable nature of the devices, unknown behaviors were likely to emerge spontaneously. The fact that one of the participants knew more about an instrument than its actual player gave rise to interesting dynamics of musical anticipation: the creator of an instrument might anticipate what its player might do next and try to react to it.
The distribution of instruments followed a certain symmetry: two instruments were AI-focused, while the other two followed a more physical idea. We used this symmetry in deciding on the distribution: AI instrument creators would not play an AI instrument. This limited our choices and left two possible creator/player pairings.
To keep the conceptual background intact while avoiding an utterly chaotic or cacophonous aesthetic, we decided to limit the more physical instruments to ideas of analog tape and radio-transmission-like artifacts. It was not a goal to make interaction purposefully difficult, e.g., by sabotaging players (cf. Dannemann et al. 2023) – in fact, guidelines for instrument design were not rigorously defined.
So far, Black Box Music has been publicly performed three times: at the Speculative Sound Synthesis Symposium2 in September 2024, at the Spirits in Complexity KTH Workshop in November 2024 and at the Spirits in Complexity Symposium at the Intelligent Instruments Lab3 in August 2025.
In this paper, we refer only to the premiere performance at the Speculative Sound Synthesis Symposium, which was the initial encounter with the black box instruments, then still more or less unknown.
For the purposes of this paper, we give a condensed account of the instrument designs and the performance experiences of the players. Longer personal statements by the respective builders and players are presented in the Appendix.
Angélica Castelló’s music system (Instrument 1) consists of a collection of old radios and cassette players, each in a different state of functionality. Their capricious behavior can range from accurately fulfilling their original purposes to producing completely unexpected sounds. This unpredictability invites us to embrace the quirks of the devices, transforming potential technical faults into new sonic opportunities and exploring the boundaries between controlled performance and random noise.
The player of this system, Thomas Grill, had previous experience with similar devices and knew, in general, what to expect. The system seemed rather trivial at first sight. However, the half-degraded state of a tube radio and the battery power of a recorder made those devices unreliable and partly malfunctioning. Also, the unknown nature of the cassettes’ contents necessitated a cautious live exploration of the material. A semi-unavailability can be asserted for this system, responsible for a resonant process: a constant performative probing and its sonic result (or absence), a back and forth between the devices and the player.
Marco Döttlinger’s instrument (2) uses machine learning and machine listening algorithms to navigate a corpus of audio samples, which are played back by concatenative synthesis. The system listens to external audio input and – depending on the performer’s intent – can be used functionally as a performance instrument or behave as an autonomous system without the need for interaction.
This instrument was played by Patrik Lechner who was already familiar with the form of representation of sounds as a two-dimensional projection of a feature space on the graphical interface. He noticed a gradual shift from unavailability to availability over the duration of a performance by becoming more familiar with the organization of the sounds. This also enabled a change of focus from dealing mainly with the instrument to a more reactive engagement with the other players. Patrik had not touched upon all the features of the system, probably missing out on the more non-trivial aspects.
The apparatus built by Thomas Grill (Instrument 3) is a spin-off from his “dirty spaces” series of works, in which he uses feedback processes that are disrupted with generative AI, among other things. The instrument is based on the RAVE algorithm (Caillon and Esling 2021), an auto-encoder for neural synthesis, and uses human language as a model corpus. It responds to both audio input and interactions with a MIDI controller. The model used was wheel.ts, a vocal model that is downloadable with the nn_tilde4 package for the Max media programming environment. The unlabeled knobs of the controller modulate the latent space of the AI model in an undisclosed way.
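As a rough illustration of how unlabeled knobs can modulate a latent space opaquely, the following sketch maps MIDI knob values onto a latent vector through a fixed but undisclosed mixing matrix. Everything here – `knobs_to_latent`, `toy_decode`, and all constants – is a hypothetical stand-in of our own: the actual instrument uses a trained RAVE decoder via the nn_tilde package, not the toy sinusoid renderer shown below.

```python
import numpy as np

LATENT_DIM = 8  # RAVE models expose a low-dimensional latent space

def knobs_to_latent(knob_values, scale=3.0):
    # Hypothetical mapping from 16 unlabeled MIDI knobs (0..127) onto a
    # latent vector. The knobs are mixed through a fixed random matrix,
    # so no single knob corresponds to one audible feature: the mapping
    # stays opaque to the player.
    rng = np.random.default_rng(42)  # fixed but undisclosed mixing
    mix = rng.standard_normal((LATENT_DIM, len(knob_values)))
    normalized = np.asarray(knob_values) / 127.0 * 2.0 - 1.0  # -> [-1, 1]
    return scale * (mix @ normalized)

def toy_decode(z, n_samples=64):
    # Stand-in for the RAVE decoder: renders a latent vector as a short
    # audio buffer (a sum of sinusoids, purely illustrative).
    t = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    freqs = 100.0 + 50.0 * np.abs(z)
    return sum(np.sin(2 * np.pi * f * t) for f in freqs) / len(z)

z = knobs_to_latent([64] * 16)  # all 16 knobs centered
audio = toy_decode(z)
```

Pushing such a latent vector far beyond the region covered by the training data is one plausible explanation for the harsh, non-vocal sounds mentioned below: the decoder is then asked to render points it never learned to represent.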
Angélica Castelló played this highly non-trivial instrument, noticing a considerable unavailability due to the absence of relatability between the user controls and the sonic output. However, a certain resonance evolved, as Angélica tried – successfully – to find configurations with particular sonic characters that were distinct from the more common babbling of the machine. She partly felt in control and was partly lost in the complexities.
Finally, Patrik Lechner’s instrument (4) consists of a custom digital simulation of FM radio transmission processes. It explores novel sound possibilities while leaning towards the ‘physical’ radio aesthetic and connecting to the mysticism sometimes found in analog sound transmission.
Marco Döttlinger, who played the instrument, felt an attraction (and an availability of the device) due to the clear structure of the graphical user interface. However, the controls of the MIDI interface were intertwined in a way that alienated the player, considerably reducing the availability and rendering the system non-trivial. The resonance processes of instrument exploration that unfolded throughout the performance led to some regaining of control and also improved the musical interaction with the ensemble.
A Black Box Music performance is a special case of musical improvisation. Its conceptual constitution, however, extends beyond non-idiomatic, free improvisation.
Every musical improvisation is an aesthetic confrontation with contingent moments, with unpredictability (Bertinetto 2021): none of the performers involved can foresee or fully control the outcome or effects of their actions. This is because the respective action consists of interacting with unexpected events: performers interact with materials, gestures and reactions that escape calculative predetermination, as well as with events that go beyond the horizon of personal control and prediction.
In the case of a Black Box Music performance, this unpredictability is radicalized because it is also placed on the level of the instruments: not only is it open what a fellow human performer will or will not do. Unavailability is directly inscribed in the non-trivial instruments and thus determines not only the group-interactive or performative-responsive dimension but also the handling of the respective non-trivial instrument.
A first consequence of this arrangement is that the ability to act responsively or reactively while performing is very limited. This means that each performer involved is initially occupied with exploring possible forms of interaction with the individual and unfamiliar instrument. It also means that improvisational forms of behavior acquired through experience (the individual creative repertoire of an improvising musician) are not only not applied, but simply prevented. In other words, a performer never knows what to expect, whether he/she will be able to do justice to the non-trivial instrument or whether he/she will be able to make music intentionally.
Conversely, this speculative setting can lead to the exclusion of clichéd or trained behavioral gestures in playing. Furthermore, it forces each performer to listen attentively not only to their colleagues, but also to the characteristics of the instrument. The interaction within the framework of a group improvisation is thus expanded by an additional dimension: not only human colleagues shape the performance, but also the non-trivial instruments and each performer’s ability to make kin with the technological other. This is expressed very clearly by the fact that when listening to the other performers, one can never be sure whether the instrument or the player has the upper hand: it is impossible to determine whether what is heard is the expression of a (human-)performative success or failure, merely the autonomous behavior of the non-trivial instrument, or something in-between.
We observed a noticeable familiarization over the duration of a performance, gradually shaping the ability to interact with the instrument and to keep an ear on the other players. This also enabled a transition from a purely negative (reactive) to a positive (proactive) mode of improvisation, in the terminology of Dehlin (2008).
While an extensive discussion of the choices made in instrument building is beyond the scope of this article, some observations can be made that are interesting in the context of our chosen theoretical backdrop. All instruments had multiple features in common: They had an interactive component. They were designed in a rather approachable way, incorporating ideas that are not entirely unfamiliar to players with a background in electroacoustic music. They all incorporated feedback, and most could even process their acoustic environment, rendering their interplay an interconnected larger system rather than a collection of isolated instruments.
An additional observation we made is that framing a ‘player’ as receiving a new ‘instrument’ almost necessarily evokes a black-box relationship, one that does not heavily depend on the instrument’s complexity or non-triviality. A player is inclined to first ask affordance-related questions like ‘How do I play it?’ rather than analytical questions about its inner workings. In accordance with, e.g., Hardjowirogo (2016), calling an object an ‘instrument’ implies a functionality and suggests a specific form of interaction.
Using the theoretical lenses of von Foerster, Rosa, and Grüny, we conducted an experimental concert and a preliminary analysis of the relationships that formed between instrument builders, machines, and players.
This led us to a set of observations, which in turn helped us in an act of theory-building and in following up on these theories. One of these observations is that our boxes moved through a gradient of grays rather than being purely black. This traversal of non-triviality, we assume, can be expanded upon to further explore Rosa’s concept of resonance.
The question of instrument design as composition was answered in an individual rather than restrictive manner by the four participants. As a result, the respective instruments reflect the interests or aesthetic desires of the designers in a personal and aesthetically idiomatic way. In other words, the generic playability or accessibility of an instrument was not favored. Since all of the artists involved have experience in developing and performing non-trivial instruments, the results are not instruments that are unavailable or arbitrary, completely refusing to be played. Rather, each instrument is an expression of an individual definition of semi-availability, which enables a resonance relationship, a making-kin.
The relational space of these semi-availabilities is renegotiated in each Black Box Music performance, appearing more or less urgent, as the different shades of gray of the instruments meet different expectations on the part of the performers. The formal structure of a performance is significantly shaped by this tension.
In accordance with Grüny’s views on trivial/non-trivial machines, we have so far exchanged instruments as a means of limiting our analytical and predictive capabilities during performance. But this focus on situations of heightened resonance, in combination with our theoretical framework, has already led to multiple approaches that we have begun developing as a logical follow-up:
Connecting back to our research metaphor of spirits, we believe this approach will further reveal interesting aesthetic situations shaped by real-time theory-building about the inner workings of musical instruments and set-ups.
This research is funded by the Austrian Science Fund [10.55776/AR821]. For the purpose of Open Access, the author has applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
The authors would also like to thank the anonymous reviewers for their constructive comments, improving structure and content.
In this section, the personal accounts are collected: on the one hand, the builders describe their intentions regarding the design of the individual instruments; on the other hand, the respective player of each instrument describes their experience of performing in the public performance at the Speculative Sound Synthesis Symposium.
Angélica Castelló: “My instrument consisted of the following devices:
These objects are what I typically use in my work. They are objects that, for me, have a life of their own, always depending on various factors: electricity, batteries, space (reception), and the interaction between machines depending on their proximity. I’m especially interested in the bodily (electromagnetic) interference between the radios themselves and the bodies (or energies) of both human and non-human performers. When radios are close to each other, they may inadvertently pick up the signal of a neighboring radio, causing ‘cross-talk’, which leads to buzzing, static, or other unusual sounds. Body heat and proximity can affect the radio’s performance in subtle ways. When a person is close to a radio, their body can influence the radio’s circuits (especially if it is an older tube radio or one with sensitive electronics), slightly altering the output or causing an increase in noise. Another interesting (more psychological) aspect is the potential forced passiveness of the human performer, as the devices—radios and tapes—are primarily designed to ‘give us’ sound, ‘deliver the news’, or play our favorite music. However, in this context, these devices acquire their own agency, with their own unpredictable behaviors, wishes, and decisions. The performer must navigate beyond an (unconscious) boundary, disturbing, destroying, or manipulating the sounds, while the radios and tapes act in their own ways, sometimes resisting or guiding the performance. This dynamic creates a tension between the choices of the human and non-human performers, highlighting the friction between creation and destruction.”
Thomas Grill: “I played the electronic instruments provided by Angélica Castelló. With the two portable cassette players/recorders came a selection of compact cassettes with sounds of unknown origin, more of an environmental nature than strictly musical. The devices were familiar to me because of my upbringing in the 1970/80s when this technology was commonplace. My use of such devices became much sparser after about 1990. In this sense, the music system I was confronted with represented a light gray box regarding the possibility space of musical expression. The interaction modes with the devices were known to a great extent, also more unconventional ones like ‘tape scratching’, i.e., pushing play and fast forward or backward buttons at the same time, or exciting the radio receivers through electromagnetic radiation of the other devices. More of a black box nature were the sound materials on the provided cassettes. In order to preserve this factor of uncertainty, the cassettes were not listened to before the actual performance. The devices were amplified to the PA by use of a microphone. The possibility of ‘fading in’ and ‘out’ sounds through the proximity of devices was used in the performance. An interesting breaking point was given by one of the two radios, obviously a tube-based model. It suspended its sound output every once in a while. This could be ‘fixed’ by a firm impact on the casing, upon which the operation usually continued. Also, the battery of one of the tape recorders seemed weak, resulting in a rather sluggish operation of the device, including playback speed instabilities. Apart from this, the instruments did not show that much agency. For the most part, the impact of human interaction was predictable in terms of morphology and dynamics which facilitated the interaction with the other players. On the other hand, the secret contents of the cassettes motivated a search for sonic surprises.”
Marco Döttlinger: “My instrument uses machine learning and machine listening algorithms to navigate and perform a corpus of sounds; it is a sample-based instrument. The dataset consists of sound recordings of colleagues’ instruments and was decomposed into many thousands of slices, then features such as timbre/chroma were extracted. An unsupervised machine learning model (UMAP) visualizes all possible sounds in two dimensions. This map serves as the graphical interface and can be played expressively by the performer with the mouse and a MIDI controller (Performance Mode). In addition, the instrument can listen to external sounds and act autonomously (Agent Mode). Via a microphone next to the performer, everything that sounds is decomposed in the same way, analyzed and projected onto the map as a trajectory. According to undisclosed rules, the autonomously playing agent imitates or counterpoints the sound events of all performers by selecting similar or less similar sounds from the corpus. For the player, it remains opaque when and how fast or dense the agent performs, but the player can set some parameters globally (density, tempo, etc.) or force the agent to use only certain areas of the map or work on the weights that influence the choice of slices. The decision to develop a sample-based instrument seemed obvious to me in order to work towards some sort of sonic homogeneity of the overall performance. This means that my instrument can – in both modes – approximate the sound of the others, imitate them and thus lead to fruitful irritations.”
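The pipeline described above – slicing, feature extraction, 2D projection, cursor-driven selection – can be sketched as follows. This is a simplified stand-in under stated assumptions: the real instrument extracts richer timbre/chroma features and uses UMAP for the projection, while here two crude features (RMS, spectral centroid) are standardized directly.

```python
import numpy as np

def slice_features(audio, sr=16000, slice_len=1024):
    # Cut the corpus audio into fixed-length slices and extract two
    # crude features per slice (RMS energy, spectral centroid) -- a
    # stand-in for the richer timbre/chroma features of the instrument.
    n = len(audio) // slice_len
    feats = []
    for i in range(n):
        s = audio[i * slice_len:(i + 1) * slice_len]
        spec = np.abs(np.fft.rfft(s))
        freqs = np.fft.rfftfreq(slice_len, 1.0 / sr)
        centroid = (spec * freqs).sum() / (spec.sum() + 1e-9)
        rms = np.sqrt((s ** 2).mean())
        feats.append([rms, centroid])
    return np.array(feats)

def project_2d(feats):
    # Project the feature vectors onto a 2D map for the GUI. The real
    # instrument uses UMAP; plain standardization suffices for a sketch
    # because these toy features are already two-dimensional.
    mu, sigma = feats.mean(axis=0), feats.std(axis=0) + 1e-9
    return (feats - mu) / sigma

def nearest_slice(map2d, cursor):
    # Performance Mode: the mouse position selects the closest slice,
    # which would then be played back by concatenative synthesis.
    return int(np.argmin(((map2d - np.asarray(cursor)) ** 2).sum(axis=1)))

rng = np.random.default_rng(0)
corpus = rng.standard_normal(16 * 1024)   # placeholder for the sample corpus
map2d = project_2d(slice_features(corpus))
idx = nearest_slice(map2d, (0.0, 0.0))    # slice under the cursor
```

Agent Mode would reuse the same machinery: live input is analyzed into the identical feature space, and the agent picks nearby (imitating) or distant (counterpointing) slices according to its undisclosed rules.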
Patrik Lechner: “I played the aforementioned system designed by Marco Döttlinger. It presented itself as a sort of map of sounds that could be accessed through gestures performed on a laptop touch pad. Additionally, Döttlinger provided a MIDI controller that acted as an interface for certain parameters, such as loudness. Since I am familiar with techniques that allow such a mapping of a sound corpus (e.g., audio feature extraction, dimensionality reduction), I rather quickly felt that I had at least an abstract understanding of the system’s capabilities. As an essentially sample-based system, it came with inherent limitations but also offered a rather intuitively controllable broad spectrum of sounds. Moreover, it provided an organization of sounds that I could become familiar with in real time during the performance. As such, I believe the system offers an opportunity to observe a player rapidly traverse the trajectory from unavailability to availability. I felt my attention shifting away from the apparatus toward listening and reacting to my colleagues as I became more familiar with the setup. Islands of predictability emerged on the map of sounds — tempting to cling to but ultimately needing to be abandoned in favor of variety and exploration. In my conversation with Döttlinger afterward, it became clear that I had not been adventurous enough to explore all of the system’s affordances during the performance. I suspect that a more risky constellation, such as a duo or solo performance, would have driven me toward deeper explorations.”
Thomas Grill: “The apparatus I built is a spin-off from my ‘dirty spaces’ series of works, in which I make use of feedback processes that are disrupted with generative AI, among other things. The instrument is based on the RAVE algorithm (Caillon and Esling 2021), an auto-encoder for neural synthesis, and uses human language as a model corpus. It responds to both audio input and interactions with a MIDI controller. The 16 unlabeled knobs modulate the latent space of the AI model in a way not disclosed to the player. I wanted to combine a quite relatable (vocal) sound with a very anonymous interface. The only information I gave was that there are two kinds of knobs, with different functionality. There was also a special combination of knob positions which caused the instrument to be silent, but these positions were difficult to meet. The spectrum of output from the instrument ranged from a (non-understandable) speech character to continuous babbling, to non-vocal, e.g., harsh screeching sounds, likely caused by exceeding the meaningful limits in latent space. It was obvious that the instrument would be difficult to play, above all in an ensemble setting, because the possibilities to purposefully control eminent parameters like volume or density are very limited.”
Angélica Castelló: “I played Thomas Grill’s instrument, a blue pots controller with no information written on it, connected to the computer and sound card (which were not visible). The sound that came out was a synthetic (female) human voice, which I called ‘Blue Wanda’. Turning the pots had no logic at all, neither in terms of sound quality, speed, dynamics, nor attacks. Particularly problematic for making music was the sudden, dense babbling and jabbering of the voice. There were very magical positions (difficult to find) of the pots in which she (Wanda) would become a bit quieter or produce slow, poetic sounds, giving some contrast to her inherent need to say much and nothing. Reaching these moments of control was a great pleasure, and I could feel some reaction in my own body, as if I were sensing a special resistance through my fingers, almost like I was producing the sound myself. Even though the sound production of Wanda seemed very chaotic, I felt that, maybe because she is a voice, I was sometimes in control. But in general, I had the impression that playing with her involved a tumultuous array of microscopic decisions, compromises, dialogues, surprises, and deceptions.”
Patrik Lechner: “My choice of instrument to construct was guided by several factors. However, since I had already developed a program for digitally simulating FM radio transmissions – and one of our members has a long-standing artistic practice with actual radio-based setups – it seemed clear that a confrontation between the real and the simulated would be particularly interesting. Technically, the provided instrument is highly non-trivial, as it has a disproportionately large internal state. Several radio stations with varying connection quality are simulated, meaning that a player is necessarily thrown into a role of searching, listening, and searching again. The fact that radio stations are playing and interfering still provides a sense of predictability – one that I assume I observed being actively exploited by its players.”
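A digital FM-transmission simulation of the kind described can be sketched minimally as follows (a hypothetical toy model of our own, not Lechner’s actual program): several carriers are frequency-modulated by their ‘program’ signals, and the tuned frequency and bandwidth determine how strongly each station emerges from broadband static.

```python
import numpy as np

class FMRadioSim:
    # Toy simulation of tuning across several FM 'stations': the closer
    # the tuned frequency is to a station's carrier, the stronger its
    # frequency-modulated signal relative to broadband noise. The actual
    # instrument has a far larger internal state; this only sketches the
    # searching-and-listening situation described above.
    def __init__(self, stations, sr=8000, seed=0):
        self.stations = stations  # list of (carrier_hz, program_hz) pairs
        self.sr = sr
        self.rng = np.random.default_rng(seed)

    def receive(self, tune_hz, bandwidth_hz, n=2048):
        t = np.arange(n) / self.sr
        out = self.rng.standard_normal(n) * 0.3  # ever-present static
        for carrier, program in self.stations:
            # reception strength falls off outside the tuned band
            strength = np.exp(-((tune_hz - carrier) / bandwidth_hz) ** 2)
            message = np.sin(2 * np.pi * program * t)  # station program
            # FM: the message is integrated into the carrier's phase
            phase = 2 * np.pi * carrier * t + 2.0 * np.cumsum(message) / self.sr
            out = out + strength * np.sin(phase)
        return out

radio = FMRadioSim([(900.0, 3.0), (1700.0, 5.0)])
on_station = radio.receive(tune_hz=900.0, bandwidth_hz=50.0)
static_only = radio.receive(tune_hz=1300.0, bandwidth_hz=50.0)
```

Tuned to 900 Hz, the receiver output is dominated by that station’s signal; tuned between stations, only static remains, which mirrors the searching role the player is thrown into.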
Marco Döttlinger: “I played Patrik Lechner’s instrument, a digitally emulated radio station – that much I could tell from the user interface – but with some additional features such as the ability to record received sounds and to loop them. The GUI was very clearly structured and easy to use via mouse interaction. In addition, a MIDI controller was available, which made it possible to operate some central parameters such as frequency and bandwidth with knobs. But the way in which the MIDI controller influenced the instrument was enigmatic and at first irritated me: the knobs were obviously intertwined in such a way that the same movement did not lead to the same parameter setting every time. In other words, the controller took on a life of its own, but only from time to time, which was also indicated by the user interface. I interpreted this behavior as an instruction to immediately leave the playing positions I had previously found and was working on, and to explore other states of the instrument. While playing, it became very clear that a functional or instrumental approach was not possible and that I had little influence on what was transmitted by the many interacting virtual radio stations and, above all, when a transmission could happen at all. In the course of the performance, however, I got better and better at working gesturally with the MIDI controller and in this way shaping the sonic result in terms of timing and interaction with my colleagues.”
Bertinetto, Alessandro. 2021. “Wozu Improvisation? Ästhetische Kategorien des Unvorhersehbaren.” In Die Kunst und die Künste, edited by Georg Bertram, Daniel M. Feige, and Stefan Deines, 442–63. Berlin: Suhrkamp.
Caillon, Antoine, and Philippe Esling. 2021. “RAVE: A Variational Autoencoder for Fast and High-Quality Neural Audio Synthesis.” arXiv Preprint arXiv:2111.05011. https://doi.org/10.48550/arXiv.2111.05011.
Castelvecchi, Davide. 2016. “Can We Open the Black Box of AI?” Nature News 538 (7623): 20.
Dannemann, Teodoro, Nick Bryan-Kinns, Andrew McPherson, et al. 2023. “Self-Sabotage Workshop: A Starting Point to Unravel Sabotaging of Instruments as a Design Practice.” In Proceedings of the International Conference on New Interfaces for Musical Expression. https://doi.org/10.5281/zenodo.11189106.
Dehlin, Erlend. 2008. “The Flesh and Blood of Improvisation: A Study of Everyday Organizing.” Doctoral thesis, Trondheim, Norway: Norwegian University of Science and Technology. http://hdl.handle.net/11250/148918.
Grüny, Christian. 2022. “Seltsam Attraktiv. KI Und Musikproduktion.” In Begegnungen Mit künstlicher Intelligenz, 174–204. Velbrück Wissenschaft. https://doi.org/10.5771/9783748934493-174.
Hardjowirogo, Sarah-Indriyati. 2016. “Instrumentality. On the Construction of Instrumental Identity.” In Musical Instruments in the 21st Century: Identities, Configurations, Practices, 9–24. Springer.
Hoedt, Katharina, Verena Praher, Arthur Flexer, and Gerhard Widmer. 2023. “Constructing Adversarial Examples to Investigate the Plausibility of Explanations in Deep Audio and Image Classifiers.” Neural Computing and Applications 35 (14): 10011–29.
Lautour, Richard de. 2015. “Sound, Reproduction, Mysticism: Thomas Edison and the Mythology of the Phonograph.” Música Em Contexto 9 (1): 23–53. https://periodicos.unb.br/index.php/Musica/article/view/19583.
Pfleiderer, Martin, and Hartmut Rosa. 2020. “Musik als Resonanzsphäre.” Musik & Ästhetik 24 (95): 5–36. https://www.musikundaesthetik.de/article/99.120205/mu-24-3-5.
Praher, Verena, Katharina Prinz, Arthur Flexer, and Gerhard Widmer. 2021. “On the Veracity of Local, Model-agnostic Explanations in Audio Classification: Targeted Investigations with Adversarial Examples.” In Proceedings of the 22nd International Society for Music Information Retrieval Conference, ISMIR, 531–38. https://doi.org/10.5281/zenodo.5624471.
Rosa, Hartmut. 2016. Resonanz: Eine Soziologie Der Weltbeziehung. Erste Auflage. Berlin: Suhrkamp.
———. 2019. Unverfügbarkeit. 2. Auflage. Unruhe Bewahren. Wien; Salzburg: Residenz Verlag.
Von Foerster, Heinz. 2003. Understanding Understanding. Essays on Cybernetics and Cognition. Springer.