Music of Many Nows

Musical Configuration Spaces and the Networked Possible

Artist statement by David Rosenboom

This entry is based on a transcription of a keynote presentation given by David Rosenboom during the seminar "The Power of Musical Networks", organised by the Orpheus Institute on February 21-22, 2018. The colloquial nature of passages in the text is a product of this.

If I look back at a lot of my work and look for the “ideas that connect”, as Gregory Bateson would call them, there are many, but one of them is that I seem to be constantly working with some kind of model of evolution, in one form or another. I have been interested in emergent forms forever, it seems, and emergent forms often come from that. This led me to what I call "propositional music": a compositional attitude, a way of looking at how we begin, at where our starting principles lie. The idea is that one can make hypothetical models of almost anything: of the universe, of life, of evolution, of consciousness, of musical interactivity. And then make work that is somehow felt to be consistent with that model. Unlike the scientist, who for his or her career has to prove to the world that their model is somehow consistent with ‘reality’, we have the great licence as artists to avoid that requirement. We can be very speculative; we can make things and just see what they are like.

I am looking for ways in which formal percepts both work for us and also sometimes need to be collapsed.

I am working a lot with self-organising emergent forms and with dynamic dimensionality: something like a dimensional analysis of complexity, where, when forms emerge, the numbers of dimensions that we use to describe them both emerge and disappear depending on how they might work.

I talk about musical configuration spaces, by which I mean ways of approaching the idea of a score, and I think of the score as an interface. (Making a) score is almost like interface design these days. It is the surface in between one source of creativity and another source of creative interactivity. Musical configuration spaces are ways of building forms, maybe as a score, where the score is a mapping of the possible states of a performance, and the pathway through them causes them to collapse into a particular realisation.
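
As a rough illustration, here is a minimal sketch in Python of how such a configuration space might be represented: the score as a graph of possible states, and a performance as one pathway that collapses the space into a particular realisation. The state names and transitions are invented, not drawn from any actual score.

```python
import random

# A toy musical configuration space: each state is a sound-situation, and
# its edges list which states may follow it. (All names are illustrative.)
SPACE = {
    "sparse":    ["dense", "sustained"],
    "dense":     ["sparse", "noisy"],
    "sustained": ["dense", "silent"],
    "noisy":     ["sustained", "silent"],
    "silent":    ["sparse"],
}

def realise(start, steps, seed=None):
    """Collapse the space of possible states into one performance pathway."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        path.append(rng.choice(SPACE[path[-1]]))
    return path

print(" -> ".join(realise("sparse", 8, seed=1)))
```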

Extended musical interface with the human nervous system is something that I worked with for quite a few decades. After being away from it for a while, it is now exploding again because it has become a very big field, and now we have access to things that we could only imagine doing many years ago.

I stress active imaginative listening in my writing because I think of composition as being an extremely broad concept. I think that one of the most interesting forms of composition is listening, and we must never forget that active listening is actually a creative process. By our selective attention mechanisms, by what we pay attention to and how we develop our ways of parsing and organising sound, we synthesise something that we end up storing with the memory engrams that go into our being, and that comes from a creative act, a creative process. This is a very important thing, especially when dealing with music and neuroscience these days.

(There are) places where languages of science and art can meet in deep theoretical territory. The word "artscience" is thrown around a lot now; there are books with it in their titles, and people are writing about it. What I mean by it is something a little different from what we are used to calling art and technology, where we are talking about applications of technological developments in the arts, or ideas that come from technological developments in the arts. Rather, I talk about places where the descriptive challenges that we face in both the sciences and the arts can actually meet in some very interesting ways. When we look at some conundrums in the physical sciences and in other sciences today, where you are really dealing with deep theoretical issues, at the root of it is often the challenge of description: how do we describe what our model is? That is a very interesting place where art and science can meet.

It is almost impossible to think of something that isn’t a network, and it is important to remember how one can zoom in and zoom out. We talk about nodes, but nodes in themselves can be extremely complex things. So, we have levels of complexity, and we make differentiations among those entities of complexity. By differentiation, I mean reaching a place where we can say that ‘a’ is different from ‘b’, and then ‘b’ is different from ‘c’. But now we have ‘b’ as more like ‘a’ than ‘c’ is like ‘a’, so now we have to have a metric of comparison, and it explodes like that. It is fascinating, and it is an interesting way of thinking about composition.
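
As a toy illustration of such a metric of comparison (with invented features), one might write:

```python
# Toy 'entities of complexity', each described by invented parametric
# features; a distance function lets us say that 'b' is more like 'a'
# than 'c' is like 'a'.
entities = {
    "a": (0.2, 0.9, 0.1),   # e.g. density, brightness, roughness
    "b": (0.3, 0.8, 0.2),
    "c": (0.9, 0.1, 0.7),
}

def distance(x, y):
    """Euclidean distance between feature vectors: the metric of comparison."""
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y)) ** 0.5

d_ab = distance(entities["a"], entities["b"])
d_ac = distance(entities["a"], entities["c"])
print(f"d(a,b)={d_ab:.2f}, d(a,c)={d_ac:.2f} -> b is more like a: {d_ab < d_ac}")
```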

In thinking about networks, I also think about communication, which I think always has to be co-creative. I had the wonderful experience a few years ago of being invited to a small working conference that was held in Paris; it was about ‘what we are not thinking about when we are looking for alien forms of intelligence’. I had written a little article called ‘Musical notation and the search for extraterrestrial intelligence’. It was only two or three pages, but it got published all around, and I got invited to this conference because of that paper. In it I took the point of view that the ideal state of mind for a musician confronting a new form of notation is very similar to what I conceive as the ideal state of mind of an astronomer looking for life in outer space: if we forget the range of possible manifestations of something that we might call life, let alone intelligent life, we might miss it entirely, because we come at it with a certain preconception. An analogy is the anthropologist who goes into the rainforest, discovers an ‘unknown culture’, and destroys it, because they approach the culture already with an idea of what ‘culture’ is. (Inspired by the Paris meeting, I soon wrote an extensive monograph, in 2003: ‘Collapsing distinctions: interacting within fields of intelligence on interstellar scales and parallel musical models’.) The very same thing is happening today in various communities that look at music and the brain. There is a great deal of study going on trying to figure out how the brain processes music, without thinking about the range of what music can be. That is a huge thing; I have an essay on that available online, which you can get if you want: ‘Active imaginative listening: a neuromusical critique’.

Imagine two forms of intelligence confronting each other; maybe they are nodes in a network. How does co-creative communication emerge? For me, lately, this also raises thoughts about time that are very interesting. I am starting to think about music as something that looks at time in a kind of ‘granular’ way; that is what I mean by "multiple nows". Imagine there are two forms of ‘intelligence’: maybe one is artificial, maybe one is software and the other is the brain. Or maybe they are two different cultures. What are the ‘temporal’ dynamics of them interacting with each other? Depending on the time/space/distance involved, I think of it as an emergent rhythm, waves of interaction. Something coming that way, and something coming this way, and they interact, and some sort of summation wave appears, and eventually, maybe, some common imagination emerges. This is something that is talked about when looking at intelligent forms, maybe on earth, that we haven’t quite understood yet. I always make the joke: “after all these years of cognitive science, nobody yet knows what intelligence is, but everybody knows when it is not present”. And of course, I have made political jokes about that.
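
A crude sketch of that summation-wave idea, with arbitrary gesture rates and delay:

```python
import math

# Two 'intelligences' exchanging periodic gestures across a transmission
# delay; their superposition is the summation wave, an emergent rhythm.
f1, f2 = 1.0, 1.1   # gesture rates of the two parties (Hz, arbitrary)
delay = 0.25        # one-way transmission delay (s, arbitrary)

for step in range(8):
    t = step * 0.5
    a = math.sin(2 * math.pi * f1 * t)            # gesture coming 'this way'
    b = math.sin(2 * math.pi * f2 * (t - delay))  # gesture coming 'that way'
    print(f"t={t:4.1f}s  summation wave = {a + b:+.2f}")
```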

I was invited, along with Sardono Kusumo, a painter, dancer, musician, and filmmaker from Indonesia, to produce the opening spectacle for the World Culture Forum that took place in 2013 in Denpasar, Bali, Indonesia.

It is kind of like the World Economic Forum, but people don’t know it exists. It is actually huge. I was stunned by how many representatives, high government representatives from nations all around the world, converged on the World Culture Forum. I was embarrassed that the US was pretty much absent. But almost everybody else was there, and I was there.

We were going to do this piece in what used to be a large limestone quarry outside of Denpasar, which had been closed down and turned into a cultural park. There was this big area surrounded by high stone walls, about the size of three football fields. We were the opening spectacle; the guests were to be seated at a dinner, with performers from all the countries represented. It was going to involve 600 dancers and musicians.

We would have very little time to put this together, so we developed an idea we called ‘Swarming intelligence carnival’. What we hit on was: let’s use the idea of modelling migration to choreograph these 600 people. Of course, they all spoke different languages, and many of them could not communicate with each other except by means of artistic gestures. We figured out an economical set of very small instructions that we could give to them (“you are going to move in this way, and you keep X distance from your neighbour”), of the kind you have probably seen in swarming models, and we used them to make this piece. It was really fantastic. We projected images of swarming behaviour on the stone walls as well, and we set up a surround sound system covering the size of three football fields. Indonesian television got a helicopter view which would really show the swarms moving, but they haven’t been able to give it to me for some reason. So here is footage from someone with an iPhone on the ground.
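
A minimal sketch of the kind of rule involved, in the spirit of standard swarming models (all numbers invented):

```python
import random

# Each 'performer' obeys one tiny instruction: keep distance TARGET from
# your nearest neighbour. Complex group movement emerges from that alone.
random.seed(0)
N, TARGET = 12, 5.0   # number of performers, desired spacing (arbitrary units)
pos = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(N)]

def step(positions):
    new = []
    for i, (x, y) in enumerate(positions):
        # find this performer's nearest neighbour
        nx, ny = min((p for j, p in enumerate(positions) if j != i),
                     key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)
        dx, dy = x - nx, y - ny
        d = ((dx * dx + dy * dy) ** 0.5) or 1e-9
        k = 0.2 * (TARGET - d) / d   # step away if too close, closer if too far
        new.append((x + k * dx, y + k * dy))
    return new

for _ in range(20):
    pos = step(pos)
print([f"({x:.1f}, {y:.1f})" for x, y in pos[:4]])
```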

I got involved with telecommunication concerts in the 70s, having musicians in different parts of the world communicating with each other and playing at the same time, and dealing with ‘time’ issues, and also with the idea of configuration spaces and scores. I began to think that the idea of linear time (the river of time, etc.) is NOT the way music works. Not the way interactivity works.

Music, in a way, manipulates the emergence of perceived time, but it can be an area in which one can experiment with the idea of multiple times. Imagine that every present has a fine structure, and every present has its own manufactured history and its own projected expectation of the future. A visual analogy would be the spread of these as if they were grains. So, I think about depicting fields, and I imagine various thought experiments.

Imagine that we start only with the concept of a ‘now’. But the ‘now’ has its manufactured ‘past’ and its projected ‘future’. Imagine that these slowly begin to separate. What happens? What emerges? I have a chapter in a book that is about that (‘Propositional music of many nows’, in D. Bogdanovic and X. Bouvier eds., 2018, Tradition and synthesis, Lévis, Québec, Canada: Doberman-Yppan). These are some steps of that thought experiment. When we look at the language we use, how do we actually understand what we mean? What does ‘simultaneous’ mean? What does ‘separated’ mean? What is an event? It is really interesting to find out what we actually mean by all those things.

There are many experiences I have had where this becomes very useful to think about.

Here is an example of an early tele-concert. I had the opportunity in the late 70s, with the support of NASA, to actually do some experiments using biotelemetry data that was gathered from two sets of dancers, one on each side of the continent, and sent through a satellite to the other side, where it would be sonified. The data would cross over like that, and the dancers would create an ensemble. I don’t have any documentation of that, but I have this. This is from CalArts in the early 90s, when Morton Subotnick and I had something there called the Centre for Experiments in Art, Information and Technology. We had a grant from the AT&T Foundation (which no longer exists) that supported about five or six years of experiments in tele-concerts. I got into doing some composing in which the time delays involved became part of the rhythmic structure of the piece. Here is an example using four Yamaha Disklaviers (upright computer-controllable pianos), two in Santa Monica and two in New York. We had another location in Santa Fe, New Mexico, all of these places linked. We built our own interfaces to send audio and video and MIDI data over ISDN telephone lines. Mark Coniglio designed those interfaces; he is the guy who developed Interactor and, later, Isadora, the interactive media control software.
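
One hedged sketch of composing with latency rather than against it: choose tempi at which an assumed one-way delay lands on a whole subdivision of the beat (the delay figure below is illustrative, not a measured ISDN value):

```python
# Choose tempi at which the one-way network delay equals a whole
# subdivision of the beat, so material from the far site arrives 'in time'.
delay_s = 0.36   # assumed one-way delay between sites (illustrative)

for subdivision in (1, 2, 4):        # delay = one beat, eighth, sixteenth
    beat_s = delay_s * subdivision   # resulting beat duration
    print(f"delay = 1/{subdivision} of a beat -> tempo {60.0 / beat_s:.1f} BPM")
```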

I have to credit Erich Jantsch, who wrote a book called “The self-organizing universe” (1980, Oxford, UK: Pergamon Press), for this idea of how to depict presents, a present. I have found descriptions very similar to this in the writings of Nagarjuna, the Indian philosopher.

Let me show you what I mean by ‘configuration spaces’. This is going to be in the form of scores. Here is one that dates from 1967. This score is 36 inches in diameter. At that time, I was interested in developing a graphic language that approached the idea of sounds made by instruments as ‘raw sound’ (not notes). These symbols, each of these figures, have some kind of circle or box at each vertex, and that describes a way a sound can start and a way a sound may end. Along the lines are the transformations of that sound between its beginning and its end. The score is laid out as a map of various parametric relationships among the contents of those notations, which can be rotated. The most obvious one visually is density, so there is a system of opposites. The maps were mounted with a transparent overlay, so the musicians could rotate them, which was a suggested way of coordinating an ensemble. A piece, or a movement of a piece, may be defined simply by ways of traversing it: “We are going to play a movement where we make one traversal around the circle and each box will last for twenty seconds”, for example. The musicians can rotate the maps to determine how they want to frame the material. Or: “we are going to go around one of the cloverleaf sections”, which would take us through oppositions, so we would find things on one side that are opposite in their parametric quality from those on the other side. Therefore, a form emerges due to the way it is laid out.
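
A small sketch of ‘traversal as form’, following the twenty-seconds-per-box example above (the number of boxes is invented):

```python
# A movement is defined by a traversal rule plus a duration rule, here the
# 'one full circle, twenty seconds per box' example from the text.
boxes = [f"box{i:02d}" for i in range(12)]   # illustrative number of boxes

def full_circle(start, seconds_per_box):
    """One complete traversal of the circular score from a chosen rotation."""
    n = len(boxes)
    return [(boxes[(start + i) % n], seconds_per_box) for i in range(n)]

movement = full_circle(start=3, seconds_per_box=20.0)
print(movement[:3])
print(f"total duration: {sum(d for _, d in movement):.0f} seconds")
```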


(Here is) the late Bill Duckworth playing the trombone with his ensemble. Every musician had one score. It was very impractical, because I actually wrote a dictionary for every instrument that defined what these symbols mean, so to play this the musicians had to do a lot of work to become good at reading the symbols, and then to be able to fly around inside this star map. So, it was not very practical. But I have used the symbols in some other pieces, and I thought that it would be really nice if I could make it into a moving 3D score object, and we could get inside it. Maybe augmented reality might be able to make that possible.

Here is a much more recent example, from 2003. This is one section of a score for solo or multiple trumpets, in which each page is about ‘this big’, and there are four of them that make up the piece. Each works with a certain kind of material. This particular one that I use as an example works with contrapuntal material. There are ‘actual’ notes there, on five-line staves. No meters, as you can see, just proportional phrases. The way the score works is that each set of material that is bounded by a thick staff line is what I call a ‘musical unit’. The system of counterpoint in here is based on modes that don’t repeat at the octave. So, you keep going up from a starting note, but you don’t end up on the same note an octave higher. It is going to make a very big spiral over the full range of the instrument.
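
To illustrate, here is a sketch of a non-octave-repeating mode: its step pattern sums to 13 semitones, so the line spirals upward rather than closing at the octave. The step pattern itself is invented, not taken from the trumpet score.

```python
# A mode whose step pattern sums to 13 semitones instead of 12: starting
# notes of successive cycles never align at the octave, so the pitch line
# spirals over the instrument's whole range.
STEPS = [2, 2, 3, 2, 2, 2]   # interval pattern in semitones; sum = 13
LOW, HIGH = 52, 82           # rough MIDI range of a trumpet

def spiral(start):
    pitch, i, out = start, 0, []
    while pitch <= HIGH:
        out.append(pitch)
        pitch += STEPS[i % len(STEPS)]
        i += 1
    return out

print(spiral(LOW))   # ascending pitches; no two cycle-starts an octave apart
```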

It is designed so that these things can be concatenated in any particular order, so those would be ‘contingent’ possibilities, and they can be combined. I use that in the same way that Stephen Jay Gould talks about ‘contingent possibilities’ in evolution (Gould, S.J., 2002, The structure of evolutionary theory, Cambridge, Massachusetts: Harvard University Press). Or they can be combined with each other in multiple ways, so you can get polyphony, and those are the ‘adjacent possibilities’ that Stuart Kauffman talks about in his models of evolution (Kauffman, S., 2000, Investigations, Oxford, UK: Oxford University Press).
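
A toy rendering of the two kinds of possibility, with placeholder unit names:

```python
import itertools
import random

# 'Musical units' used two ways: concatenated in any order (contingent
# possibilities) or layered simultaneously (adjacent possibilities).
units = ["U1", "U2", "U3", "U4"]   # placeholder unit names

# contingent: every ordering of the units is a possible realisation
orderings = list(itertools.permutations(units))
print(f"{len(orderings)} contingent orderings, e.g. {orderings[0]}")

# adjacent: any subset may sound together as one polyphonic texture
random.seed(2)
print(f"one adjacent combination (a duet texture): {random.sample(units, 2)}")
```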

So, one or more players can traverse this, and I use the metaphor of collapsing, so the space collapses into a particular realisation as the performers decide to do it. It sounds like this.

[Audio example]

Here is another page of the same piece which works with rhythmic ratios of repeated notes and harmonic series melodies that have to be produced by holding a particular valve combination, so you have to play “that figure” with all three valves down.

There is a sort of narrative about the development. The first movement is only air sound; then it becomes resonant, notes; then it eventually becomes melodic; then it goes through other things; and finally it comes out like ‘this’: the virtuosity that has developed returns to breath. Basically, this is another breath, like the way it starts.

[Audio example]

Here is another one, for piano, in which there are ways to go into the space on the left side, ways to move vertically in the middle, and ways to come out on the right side. It is conceived as a duet in which the pianist’s left and right hands are treated as if they were separate people, so they play different materials.

[Audio example]

I am going to jump into something that is more network-related again. Here is another picture of my multiple time-spaces, in which now the complex moments, the complex ‘nows’, are spread out in some kind of field. A lot of interactive work begins to look like this to me. Since this is about networking, I want to show you an idea for an interactive opera project that I did a few years ago, in which the idea was to invite a group of young composer-performers (because I had a grant for young composer-performers) from all around the world, to see if it was possible to collaborate on making a piece that said something about the world today.

I was able to set up something where we had ten of them, whom I brought to California for a two-week workshop. Then we had our project started. Then they all went home, and they worked for another year, all via telecommunications.

Working together, sharing images, sound files, and video, practising together, and so on. We set up a wiki space. The opera is called “Ah!”, because its structure was inspired by the Buddhist Diamond Sutra (Pine, R. trans., 2001, The Diamond Sutra, New York: Counterpoint), which exists in various forms: 80,000 lines, 8,000 lines, and 1 line; and the Diamond Sutra in 1 line is “Ah”. It’s hard to say “ah” and feel bad.

We were approaching the idea that the problem with the world today is one of pervasive illusion, and that is what this project addresses. The project was called A Counterpoint of Tolerance, and these are the institutions that were funded and that collaborated: Idyllwild Arts in California, California Institute of the Arts, and Dartington College of Arts at Dartington Hall in the UK. Dartington College of Arts later merged with Falmouth University, but it used to be an independent institution.

[Image: AH Linked Stories Mandala Wheel]
[Image: Ah Wiki Collaboration Page]

I worked with a writer, Martine Bellen, a poet from New York, and for this we created a kind of mandala of thirteen stories, all inspired by the Diamond Sutra, but all set in modern times, so they might take place in the parking lot of a supermarket, another scene somewhere else… All the stories are joined by connecting lines, so if you read ‘this’ story and come across ‘this’ line, you can go to ‘this’ story if you choose to do so and the narrative will continue right along, until you get to this line, and you can continue on this story, or you can move ‘this’ way. So, you move around this wheel in any way. Martine and I didn’t end up calling this an opera, but rather, an “opera generator”: you can set a pathway through it and present that, and that is one form of an opera.

So, we built this wiki, and we had all these ways of working. Here is an example of a working page where various people are contributing ideas. Then we developed it as a project that took place in several different forms simultaneously. This is just a visualisation of the density of activity geographically, and of the way we did it. We thought of it as having three circles of activity, one of which was on the internet. We put out the word that we had made a website that people could interact with, meaning potentially audience members, whom we called ‘creative engagers’. People were invited on the website to submit things: a soundbite, a response to a four-line poem. So, this (new) material is coming in.
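
A sketch of the ‘opera generator’ idea as code, with placeholder story names and invented links:

```python
import random

# Thirteen stories arranged as a wheel; 'connecting lines' let a reader
# jump between stories mid-stream. Names and links are placeholders,
# not Martine Bellen's actual texts.
random.seed(7)
stories = [f"story{i:02d}" for i in range(1, 14)]
links = {s: random.sample([t for t in stories if t != s], 3) for s in stories}

def pathway(start, length):
    """One traversal of the mandala = one possible form of the opera."""
    path = [start]
    for _ in range(length - 1):
        path.append(random.choice(links[path[-1]]))
    return path

print(" -> ".join(pathway("story01", 6)))
```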

As we get closer, the next concentric circle was a gallery installation surrounding the theatre in which there would eventually be a performance, where those materials were gathered and made available. We built a large multitouch table where people could access the stories. Parts of the stories would pop up, words would pop up, things that people had reacted to would pop up.

We would eventually get to an actual theatrical production, which would be more time/space localised. And after the performance was over, the exhibition space was still available; and after that was over, the internet space was still running.

(The Ah! website no longer exists. Long after the performance, it was hacked and destroyed, and I haven't had the time and resources to recreate it.)

There is the multitouch table; there is the workshop. We eventually made a production in which we had to make the setup in ovals because of the shape of the performance space. This is a rehearsal… We used a dancer, an actor, and an actress to help connect the threads of the stories. The musicians all had individual setup locations. Each one had a workstation with its own speaker array, and each had a laptop in addition to their other instruments, so the electronic sounds they created seemed to come from where they were located.

We co-composed the entire thing. Eventually there was live video projection on the ground; the audience sat elevated around the outside of the oval setup, so they looked obliquely down onto the projection on the floor. All the source material that goes onto the floor (projection) is live. There is a cameraman, part of the performance team, who moves around catching imagery of the performance and sending it to video artists, who put the results on the stage floor.

The "Ah!" website provided many ways for people to interact with the "Ah!" source materials and traverse the stories by selecting linking lines that took them on their own narrative pathways. We designed it so that there would be many, many ways to experience this opera. One of them is to read a text-score called "AH! opera-no/now-opera, story-word/sound-maps". And the text-score gives you cues to imagine certain sounds. So, I consider that a legitimate performance of the opera, if you actually do that, do this imagining. The text-score is also available in Spanish, which for us, in California is very important, as an example.

Some of the texts were translated into twenty different languages, appearing in different parts of the piece. There is one place where there was a network of infrared range finders in the space, and the dancer could move closer to or further away from them and, by doing that, modulate the text into a variety of languages. But even on the website, audience engagers could find places to play.
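
A sketch of how a range-finder reading might select a translation (thresholds and languages invented):

```python
# A dancer's distance from an infrared range finder selects which
# translation of the text is heard. Thresholds and languages are invented.
LANGUAGES = ["English", "Spanish", "Indonesian", "Mandarin", "Arabic"]
MAX_RANGE_CM = 500

def language_for(distance_cm):
    band = int(distance_cm / MAX_RANGE_CM * len(LANGUAGES))
    return LANGUAGES[min(band, len(LANGUAGES) - 1)]

for d in (40, 160, 260, 390, 480):
    print(f"{d:3d} cm -> {language_for(d)}")
```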

Now, I want to jump to an example or two of recent work involving the brain. There is far too much to deal with here in terms of the historical work, but here we go.

There is the brain, a pretty interesting network. There is a beautiful thing called “the glass brain” that does visualisation of signals going through neural pathways.

I began doing this kind of work in the late 60s. Much of it developed in the 70s, when I was developing models for ways of categorising different kinds of brain signals and how they relate both to general states of consciousness and to shifts of attention in relation to musical form. These are just depictions of that, and of developing a kind of multidimensional modelling of it. Lots of recordings came out, and it eventually developed into an idea for a work which is always self-organising, in which some system is producing sound, and you have someone whose brain is wired and connected, and the generator of sounds also has a primitive model of perception. I say ‘primitive’ because what it looks at is changes, or rates of change, of particular types of parametric movements in raw sound. I use electronic sounds so I can stay away from the language problems of other kinds of music, looking at things like ‘cadential movement’.

So, it is looking at pitch contours, amplitude contours, complexity of the waveform (roughly related to timbre), various kinds of density, and things of that sort. The model predicts where changes happen that ought to be, or are likely to be, associated with a shift of attention. Then it goes and looks at the brain signal for what I call “event-related potentials”. There is a complicated process in deriving event-related potentials, but they are transient waves that tend to be active when there seems to be a certain recognition of something, or there is a shift of attention.

If the prediction is confirmed, then feedback to the control structure will tell the system to make those kinds of changes, those kinds of parsing points, more likely to recur.
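
Reduced to its control logic, and including the unconfirmed case described just below, the loop might be sketched like this; the probabilities and weight updates are invented stand-ins for the real signal analysis:

```python
import random

# Control logic only: propose a parsing point, predict an attention shift,
# check a (here simulated) ERP, reinforce on confirmation, otherwise
# perturb the generator so the music wanders in another direction.
random.seed(3)
weights = {"pitch_contour": 1.0, "amplitude": 1.0, "density": 1.0}

def erp_detected():
    return random.random() < 0.5   # stand-in for real ERP detection

for step in range(10):
    feature = random.choices(list(weights), list(weights.values()))[0]
    if erp_detected():                 # prediction confirmed:
        weights[feature] *= 1.3        # make this kind of parsing point likelier
        outcome = "reinforced"
    else:                              # not confirmed: wander off
        weights[feature] *= 0.7
        weights[random.choice(list(weights))] *= 1.1
        outcome = "perturbed"
    print(f"{step}: change on {feature:13s} -> {outcome}")
```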

If the prediction is not confirmed, then it uses ‘some’ method, which could involve genetic algorithms or stochastic processes, to make the whole thing wander off, so it evolves in some other direction until it begins to capture something again. What you tend to get are convergences towards order, and then divergences again. It is pretty interesting. This is mid-70s work. It even dealt with trying to listen to groupings of things, calculating expectations about repeating patterns, and then trying to see how those might get grouped into larger patterns. If any of you are familiar with James Tenney’s work, it is kind of a neurological analogy to his temporal gestalt approach to perceptual organization (Tenney, J., 2015, From scratch: writings in music theory, L. Polansky, L. Pratt, R. Wannamaker, and M. Winter eds., Urbana, Illinois: University of Illinois Press). Here is an example from my monograph, Extended musical interface with the human nervous system.

[Image: Hierarchical Structure Builder]

This is an Interdata minicomputer controlling a Buchla analog system. The Interdata minicomputer was big and hard to carry around. It used magnetic core memory; on a 10x10 circuit board you could get 8 KB of magnetic core memory, and it cost 3,000 dollars, just for that board. It’s nuts! Here is a section of the piece called “On being invisible”, where sequences have developed because of this hierarchical prediction. (The full album is available online.)

[Audio: Excerpt from "On Being Invisible Part I"]

(But what I want to get to is actually) a new piece (I actually want to show you two), in which I think about developing a kind of time/space model that is based on the detection of resonances, and on those resonances interacting with each other, even in hierarchical groups, in a kind of network thinking.

I was contacted by some colleagues at the Swartz Center for Computational Neuroscience in San Diego, CA, who had read some of my work from the late 70s and 80s about things that I would like to do but couldn’t, because the technology wasn’t up to it, and they said: “we think we can do that now, would you like to collaborate?” And, indeed, a fantastic collaboration has developed over the last four years.

One of the things we did was a piece called “Ringing minds”, in collaboration with Tim Mullen, who is a computational neuroscientist and rock musician, and Alex Khalil, who is a cognitive scientist and gamelan expert. We collaborated in this piece, in which we used a couple of very interesting processes.


Here is a group of four people wearing headsets by Cognionics, loaned to us for this piece, that wirelessly transmit multi-channel EEG. We are using a process called “hyperscanning”, in which we treat the four people as if they were one big brain. Another important process is what is called “principal oscillation component analysis”, in which we look for resonances. This uses very sophisticated software that was originally developed for epilepsy research. It tries to detect when a particular neural circuit starts to ring and the ringing is spreading, which is useful for trying to predict a seizure before it happens. We used it in the piece to look at the ‘state of mind’, so to speak, of these four people. We compute the contribution of each individual to any resonance we detect, the weighting of contribution for each of the four people, and then we map that onto a computer music program that I made for this project using the Reaktor synthesis software. We are also using event-related potentials that are spatially averaged. Normally, event-related potentials have to be temporally averaged, because they are very tiny signals buried in the brain signal. Here we are doing it spatially, across the four people, to try to look for group attention shifts.
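
A sketch of the principal-oscillation idea on synthetic data: fit a first-order linear model to multichannel ‘EEG’ and read each complex eigenvalue as a resonance, its angle giving frequency and its magnitude damping. This is a generic illustration, not the epilepsy-research software we used:

```python
import numpy as np

# Fit x[t] ~ A x[t-1] + noise; each complex eigenvalue of A is a resonance,
# its angle giving the frequency and its magnitude the damping. The data is
# synthetic, standing in for the performers' combined 'hyperscanned' channels.
rng = np.random.default_rng(0)
fs = 250.0                                   # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)

# synthetic 'group brain': a decaying 10 Hz ringing mixed into 4 channels
ring = np.vstack([np.exp(-t / 2) * np.sin(2 * np.pi * 10 * t),
                  np.exp(-t / 2) * np.cos(2 * np.pi * 10 * t)])
X = rng.uniform(-1, 1, (4, 2)) @ ring + 0.02 * rng.standard_normal((4, t.size))

# least-squares fit of the transition matrix A
X0, X1 = X[:, :-1], X[:, 1:]
A = X1 @ X0.T @ np.linalg.pinv(X0 @ X0.T)

for lam in np.linalg.eigvals(A):
    if abs(lam) > 0.5 and lam.imag > 0:      # keep one of each ringing pair
        freq = np.angle(lam) * fs / (2 * np.pi)
        print(f"resonance ~ {freq:.1f} Hz, damping |lambda| = {abs(lam):.3f}")
```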

The premiere of the piece was in San Diego. A little more developed version of it was presented when I had a wonderful opportunity of a retrospective of my work at the Whitney Museum in New York, where we added a visual component, which visualises the analysis system.

So, there is this n-brain group, this principal oscillation or eigenmode analysis, and the event-related potential (ERP), and I built a large, complex resonator bank that mapped these components. There is a very specific mapping, which is in a book (Mullen, T. et al., 2015, ‘MindMusic: playful and social installations at the interface between music and the brain’, in: A. Nijholt ed., More Playful Interfaces: interfaces that invite social and physical interaction, Singapore: Springer), and I can show you exactly how we are mapping and indexing the things that we detect in the brain signal to this resonator bank, which I ended up programming in Reaktor.
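
The published mapping is in that chapter; purely as an illustration of the kind of indexing involved, a resonance-to-resonator mapping might look like this (all mapping rules invented):

```python
# Each detected resonance becomes a resonator setting: brain-rhythm
# frequency (1-40 Hz) transposed into the audible range, damping mapped
# to decay time, per-person contribution weights mapped to amplitudes.
def map_resonance(freq_hz, damping, weights):
    audible_hz = freq_hz * 32                 # e.g. 10 Hz alpha -> 320 Hz tone
    decay_s = 0.1 / max(1e-3, 1.0 - damping)  # slower decay as damping -> 1
    total = sum(weights) or 1.0
    return {"hz": audible_hz,
            "decay_s": round(decay_s, 2),
            "amps": [round(w / total, 2) for w in weights]}

print(map_resonance(10.2, 0.97, [0.4, 0.1, 0.3, 0.2]))
```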

For the visual display at the Whitney Museum, the contribution of a particular resonance across the four people was mapped across the X axis, the Y axis was the frequency of the resonance, from about 1 to 40 Hz, and there were colour splashes, akin to throwing a stone in a pond: their colour and the degree to which they spread out had to do with the intensity of the resonance, its damping, its decay time, how strong it was, how much stability it had, whether it was wobbling, and so on.

It sounded like this.

The ringing that you hear is the mapping of the resonances. What happens is that eventually, as the piece progresses, the four brains, or what I call the horn section of the band, settle, and then Alex, who plays a litoharp, which is a xylophone made of strips of granite that he can bow, pluck, or strike, and I, playing electrified violin, would play with the ‘horn section’, so we would make a sound that might cause an event-related potential. We can see the attention shift. And then we would interact with them, improvising together and creating the rest of the music.

When you hear something going “wzyuuuuu-uuu-uuu”, that is the mapping of the shape of an event-related potential onto a keyboard, so it plays the shape of the wave, basically.
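
A sketch of that wave-shape-to-keyboard mapping, with a synthetic ERP:

```python
import math

# Sample a (synthetic) ERP waveform and map each amplitude to a keyboard
# pitch, so the wave's shape becomes a rapid melodic gesture.
erp = [math.exp(-((i - 12) / 5) ** 2) * math.sin(i / 2) for i in range(32)]

LOW, HIGH = 48, 84   # MIDI note range to spread the wave over
lo, hi = min(erp), max(erp)
notes = [round(LOW + (v - lo) / (hi - lo) * (HIGH - LOW)) for v in erp]
print(notes)         # one keyboard note per ERP sample
```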

I have just managed to make video from those Whitney performances available, so you can see what it is like ‘live’. The piece is structured in four sections and is about 30 minutes long. Part of the time limitation was because those Cognionics devices start to hurt after a while…

Hopscotch was another collaborative project I was involved with in Los Angeles, a collaboration with director Yuval Sharon of The Industry production company. Five primary composers and five primary writers were engaged to create a piece in which the audiences were driven around L.A. in limousines. The performances took place in the cars and at locations where the limousines stopped. We had 24 cars, mapped around three different routes. So, you buy a ticket for the red, yellow, or green route, and you are told to go to a certain street corner, where somebody picks you up. A stagehand helps you get into a car, and you experience a performance. The car drives you somewhere; then you get out, and there is another performance where you got out, and that first car leaves with another group of people who have just been there. Then another car comes, you get in, it gives you another performance, and you go on. It premiered in 2015.


Video and sound were broadcast from every car to a place called the Central Hub that was built for us, where there were 24 screens, so you could see what was going on in each of the 24 cars. That was free; you could go and stay there all day. This ran all day on the weekends through the fall of 2015. There were 36 scenes or chapters. Some were animated, but a lot were live performances, which meant that over the course of six weeks performers would have played each scene about 300 times.

One of my pieces for this project uses a very nice portable brain-monitoring device called Muse, which I used in one scene. I wrote several scenes with different kinds of music.

I wrote one song about a quinceañera for a fifteen-year-old girl who has just arrived in the U.S., ‘Lucha’s Quinceañera Song’. In a later scene, called ‘The Experiment’, she gets involved with someone who becomes a kind of mad scientist. You get in the car, and he jumps in and sings to you that he is trying to figure out whether heaven and hell live only in the mind. Then he puts brain monitors on all the audience members and sings them questions. The audience is told not to speak their answers. There is a computer in the car analysing the responses of the audience members, and I do a spectral analysis which is used to mix the pre-recorded singing of a soprano who sings potential answers in three different ways: in a very rapid, nervous, rhythmic way; in a lyrical, chanting way; and in a sort of coloratura way. So it is as if you are hearing three different ways of hearing a potential answer to the questions that this guy is singing to you. And the questions get more and more aggressive, so you hear this mix of the voices. In a still later scene I composed, called ‘Hades’, Lucha arrives at a version of the river Styx, played by the Los Angeles River, from which, in the end, she escapes.
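
A sketch of that mixing logic (band boundaries and the band-to-voice assignment are invented):

```python
import numpy as np

# Spectral analysis of (here simulated) audience EEG: band powers become
# mix levels for the three pre-recorded vocal styles.
fs = 256.0
eeg = np.random.default_rng(5).standard_normal(int(fs * 2))  # stand-in EEG
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

bands = {"rapid, nervous, rhythmic": (13.0, 30.0),   # beta
         "lyrical, chanting":        (8.0, 13.0),    # alpha
         "coloratura":               (4.0, 8.0)}     # theta

power = {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
         for name, (lo, hi) in bands.items()}
total = sum(power.values())
for name, p in power.items():
    print(f"{name:24s} mix level {p / total:.2f}")
```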

[Audio example]
