The present paper investigates aspects of technologically mediated embodied interaction in dance-driven music performance, based on a collaboration between a computer music researcher (Iannis Zannos) and a dance researcher (Stella Dimitrakopoulou) in the context of an educational research project. We explore how a system that permits dancers to generate sounds by means of simple wearable sensors transmitting their body movements can be used in a new setting of music-dance performance shared remotely over the internet. The broader context and motivation of the project originated in the urge to explore the potential of low-cost, open-source technologies for motion capture and real-time sound generation as means of embodied expression and communication in a telematic dance performance setting. The goal is to use a framework that requires little more than a laptop, a regular internet connection (even low bandwidth), DIY wearable motion sensors, and a space equipped with monitor loudspeakers to perform collaboratively and concurrently in any number of venues, irrespective of their geographical distance.
The project started in March 2018, when the sabbatical leave of Iannis Zannos in Japan created the opportunity to consider ways of collaborating between Japan and Greece. We expected that this would lead to or require new mechanisms of expression, communication, collaboration and performance practice, but could nevertheless not avoid being surprised by the forms these actually took when we implemented them. The path to working prototypes led over several iterations, in which we successively tried out: different types of hardware (CHIP Pro, the ESP-based WiFi-capable Feather HUZZAH Arduino, Raspberry Pi Zero, and finally the Arduino-based Sense/Stage system using the XBee IoT mesh-network protocol; see figure 1); network software (originally the VPN service Hamachi, popular for online gaming, and currently the open-source software suite OscGroups by Ross Bencina); many iterations of a SuperCollider library for communication and for live coding of sound synthesis and interaction design; several different approaches to sound synthesis and many variants of sound synthesis algorithms; and finally many experiments in dance and movement with different configurations of parameters, different strategies for parameter and event control in sound synthesis, and different strategies for the interaction between dancers and dancer groups. Here we relate the experiences gained during the final stage of this project, namely a telematic collaboration between two undergraduate courses which we conducted at the Department of Audiovisual Arts of the Ionian University in Corfu (Greece) and the Department of Performing and Digital Arts of the University of the Peloponnese in Nafplion (Greece). The occasion and thematic framing for this work was given by the Xenakis 22 Centenary International Symposium held in Greece in May 2022, at which we presented a workshop based on the above collaboration (Zannos, Dimitrakopoulou, Marini 2022).
The project uses low-cost wireless IMU (Inertial Measurement Unit) sensors (see figure 1) worn by the performers, which transmit motion data to SuperCollider in order to drive sound generation in real time.
Data and code driving a performance are shared between computers in real time, thus permitting the virtual replication or sharing of the performance between venues independently of actual physical distance. This is done by transmitting both the sensor data and the code defining the sound behavior corresponding to those movements over the internet, using the Open Sound Control protocol (OSC). This mechanism is coded as a SuperCollider library installed on the main performance computer at each venue, and uses the open-source software OscGroups in order to broadcast both sensor data and program code to all computers in all participating venues (see figure 2).
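The transmission layer itself is implemented in SuperCollider on top of OscGroups; as a language-neutral illustration of what travels over the wire, the following Python sketch encodes a hypothetical sensor frame in the binary OSC 1.0 message format. The address /sensor/1 and the two float arguments are illustrative assumptions, not the project's actual address scheme:

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Pad a byte string with zeros to a multiple of 4 bytes, as OSC requires."""
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC 1.0 message with float32 arguments:
    null-terminated padded address, type-tag string, big-endian args."""
    addr = osc_pad(address.encode("ascii") + b"\x00")
    tags = osc_pad(b"," + b"f" * len(floats) + b"\x00")
    args = b"".join(struct.pack(">f", f) for f in floats)
    return addr + tags + args

# A hypothetical sensor frame: two tilt values from one wearable IMU.
msg = osc_message("/sensor/1", 0.25, -0.5)
```

Such a packet would then be handed to OscGroups, which relays it to every computer subscribed to the same group, so that all venues receive identical sensor streams.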
This setup enables performers to influence the generation of sound directly through their movements, and thus to participate in the performance not only as dancers accompanying a musical piece, but as co-creators of the music itself, together with the programmers / live coders who define the interactive sonic framework of the performance. The main objective of our work is to investigate in depth the methodological, performative and aesthetic aspects of this new situation. The foundations for this work were provided by the technical framework, which is therefore discussed first.
We used video calling platforms to converse during our rehearsals, but also to provide visual feedback between the two groups of students in Nafplion and Corfu. (Note that the performance philosophy of the present project encourages independence from direct visual contact via video streaming tools. We explicitly targeted alternative ways of collaborating through exercises carried out without any visual contact between the two groups of students.) The full-screen working surface during rehearsals combined several applications and visual elements: video calling platforms were used for dialogue and visual interaction between the student-performers, while a Unix shell terminal was used to start and monitor the reception of data from the wireless sensors and the sharing of OSC data between venues. Emacs or the SuperCollider IDE served as environments for live coding, where program code written in the SuperCollider language was executed locally and simultaneously broadcast to all remote stations, to form the framework for sound interaction. This functionality is implemented as a library in SuperCollider, whose entire code is found in the following repository on GitHub: https://github.com/iani/sc-hacks-redux. Three characteristic code samples from this library (see folder CODE/Classes) are: OscGroups.sc, which provides the functionality for sharing code and data via OscGroups; 02UgenShortcuts.sc, which provides shortcuts for live coding both synthesis algorithms and control algorithms, including a mechanism for connecting to control busses holding the current values of data set by sensors; and 03Project.sc, which contains the code for sharing the configuration and execution of each performance project through folders contained in a single folder, ~/sc-projects.
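The mechanism connecting sensor data to control busses can be summarized as follows: each incoming OSC message writes its latest values to named busses, and the running sound processes read from those busses. As a rough, language-neutral sketch of this routing idea (all names here are hypothetical, not the library's actual API), one might write:

```python
# Hypothetical sketch of the control-bus idea used in sc-hacks-redux:
# each sensor axis writes its most recent value to a named bus, and
# sound synthesis processes read continuously from those busses.
class ControlBus:
    def __init__(self, value: float = 0.0):
        self.value = value

buses: dict[str, ControlBus] = {}

def set_bus(name: str, value: float) -> None:
    """Store the most recent sensor value under a bus name."""
    buses.setdefault(name, ControlBus()).value = value

def handle_osc(address: str, *args: float) -> None:
    """Route a message like /sensor/1 x z to busses sensor1_x, sensor1_z."""
    sensor_id = address.rsplit("/", 1)[-1]
    for axis, value in zip(("x", "z"), args):
        set_bus(f"sensor{sensor_id}_{axis}", value)

handle_osc("/sensor/1", 0.2, -0.7)
```

Decoupling the writers (sensors) from the readers (synthesis processes) in this way is what allows live coders to swap out sound algorithms mid-performance without interrupting the sensor stream.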
The entire code for configuring and executing performances and rehearsals created using this Project class is collected in the following repository: https://github.com/iani/sc-projects. This folder offers a simple way to share projects between venues, and contains the code of all rehearsals and performances made with this tool since its creation in January 2022. Characteristic code examples from the performance workshop project at Nafplion are provided in the files contained in the folder Scripts of the GitHub repository TelematicDanceAndLiveCodingECHO5. These are:
SessionSetup.scd : Connect sensors to busses and start the sound synthesis process
01a_CFU_Niki_GlassSolo.scd : A glass-like chaotic sound algorithm controlled by student-performer Niki in Corfu.
01a_NPL_Elli_ShepardSolo.scd : A model of Shepard tone controlled by student-performer Elli in Nafplion.
01b_CFU_Niki_GunSOLO.scd : A machine-gun-like percussive sound controlled by student-performer Niki in Corfu.
01b_NPL_Virginia_GlassSolo.scd : A glass-like chaotic sound algorithm controlled by student-performer Virginia in Nafplion.
01c_CFU_Niki_StandardL_Solo.scd : A sound based on StandardL chaos algorithm controlled by student-performer Niki in Corfu.
01c_NPL_Stefania_GlassSolo.scd : A glass-like chaotic sound algorithm controlled by student-performer Stefania in Nafplion.
02a_Handbell_3Chords_a1_CFU.scd : Single sine tones grouped in chords, controlled by a duo of students-performers in Corfu.
02a_Handbell_3Chords_a2_NPL.scd : Single sine tones grouped in chords, controlled by a quartet of students-performers in Nafplion.
livecodingsession220529.scd : Transcript of the code used during the final workshop in Nafplion (and Corfu), including the live code sent by the two live coders (Iannis Zannos and Takumi Ikeda).
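One of the solo algorithms listed above is a Shepard-tone model (01a_NPL_Elli_ShepardSolo.scd). The workshop version was written in SuperCollider; as an illustration of the underlying principle, the following Python sketch (base frequency and envelope parameters are assumptions for illustration) computes octave-spaced partials whose amplitudes follow a bell curve over log-frequency, which is what removes the sense of absolute register:

```python
import math

def shepard_partials(base_freq, n_octaves=8, center=4.0, width=2.0):
    """Return (freq, amp) pairs for octave-spaced partials of a Shepard tone.
    Amplitudes follow a Gaussian envelope over log2-frequency (measured in
    octaves above 27.5 Hz), so as base_freq rises by an octave the spectrum
    returns to its starting point and no overall register is perceived."""
    partials = []
    for k in range(n_octaves):
        f = base_freq * 2 ** k
        octave_pos = math.log2(f / 27.5)
        amp = math.exp(-((octave_pos - center) ** 2) / (2 * width ** 2))
        partials.append((f, amp))
    return partials

partials = shepard_partials(55.0)  # partials at 55, 110, 220, ... Hz
```

Driving base_freq from a sensor value then produces the characteristic endlessly rising or falling glissando, with the dancer's tilt steering the apparent direction.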
Graphical user interface elements were programmed and displayed locally in the SuperCollider language environment, in order to facilitate the monitoring of control data from the sensors and of the sound synthesis processes. The following video shows a graphic window for monitoring the values of the control busses set by the sensors and used to control the sound processes of the performance.
Figures 3 and 4 above show two snapshots of the screen setup during rehearsals from the viewpoint of Nafplion, showing the two stages in Corfu and Nafplion respectively. A video snapshot from the point of view of the live coder Takumi Ikeda while live coding for the performance at the workshop in Nafplion is found below.
Following the call of the conference at which we wanted to present our work, we chose as our main starting point the chapter from Iannis Xenakis' work that best corresponded to the character of our own. We set out to discover practically how Xenakis' work and thought can impact the practices of dance and live coding in the contexts of cyber-physical computing and performance, which developed predominantly in the decades after Xenakis' active career. Our work was based on the study of selected writings and works of Iannis Xenakis (Xenakis 1992), as well as studies of his work and life (Varga 1996, Harley 2004, Sluchin and Malt 2011). Our objective was to see how Xenakis' work resounds today in the teaching of performance, both in computer music and in dance, using the notion of cyber-physical performance as the object of inquiry. By cyber-physical performance we mean the use of sensors and network technologies to connect the processes which generate the sonic elements of the performance with the bodies of the performers, through measurements of physical properties such as motion. This concept refers to cyber-physical computing as discussed by live coder and interactive-system researcher Andrew Sørensen (see Sørensen and Gardner 2010, Swift et al. 2013).
In his writings as well as in his interviews, Iannis Xenakis delineates an expansive visionary aesthetic that connects musical composition not just to mathematics and philosophical thought, but also to many other social, technological, cultural, and scientific endeavours. To this purpose, we revisited his aesthetics and approach to music in order to provide a hands-on view on interpreting Xenakis' thought, aesthetics and work methods from the perspective of contemporary embodied or cyber-physical / live-coding performance practices in education.
As the thematic framework guiding our experiments we chose chapter 4, "Musical Strategy", of Formalized Music (Xenakis 1992: 110-130), which deals with game theory as a structuring principle for the creation of musical works. The rationale for this choice was that it enabled us to explore ideas revolving around the notion of games, which suggested itself as a fitting choice in the context of two groups engaging in dialogue within a technologically constrained framework (sensors, code, internet, real-time sound synthesis). This text describes a variable performance scenario for the pieces "Duel" and "Stratégie", using a selection of distinct textures whose sonic characteristics can be deduced from the textual description in conjunction with the score of the piece they refer to. Both the description and the score were subjected to analysis, in order to translate them into choreographic scores on the one hand, and into code for the synthesis of event structures (score-like equivalents) or of sonic textures corresponding to the sonic events described by Xenakis on the other.
In the months before the start of the workshop, we analysed the mathematics laid out in the "Musical Strategy" chapter and tested our understanding through code implementations in SuperCollider.
We then designed exercise scenarios based on this study. These exercises were tested initially through joint remote sessions between the two classes via video calling platforms. Additionally, students were introduced to working with motion sensors and sound, and had direct hands-on experience with the system in the course of several class rehearsals. Finally, we conducted remote collaborative sessions using the system described, in order to select representative examples for this work and to create the presentation scenario for the workshop (see figure 5). During the final phase of this work we were joined by the composer Takumi Ikeda who participated remotely from Tokyo via live coding on SuperCollider (see also file livecodingsession220529.scd).
The following video excerpts show characteristic moments from the scenes performed at the workshop, and audience participation at the end of the workshop:
The new condition enabled by the use of sensor technology, live coding and network communication requires us to re-negotiate the boundaries between the roles of dancer / musical performer / composer and programmer. Furthermore, the data and code driving a performance are shared between computers in real time, permitting the creation of a new type of performance work that exists in a virtual space and is manifested through digital replicas of the performance across venues, independently of actual physical distance. Using this framework we explore experimentally, through rehearsals and performances, how dancers can communicate or engage in dialog by means of the sounds which the system creates in response to their body movements. This bears some far-reaching implications regarding the nature of telematic dance performance as a potentially new genre. The present paper prepares the approach to this topic through a bottom-up analysis of the immediate performance issues encountered during our work. We structure our discussion around three main challenges that we dealt with:
The fact that the dancers dance to the sound that they themselves generate created feedback phenomena, most clearly experienced as continuous adjustment of movement in response to the generated sounds. It also seems to impoverish the choreographic and physical vocabulary, as the dancers' focus shifts to the effectiveness (or otherwise) of the sensors' responses to their movements. As noted, the sensors are not particularly sensitive, so the dancers tend to limit movement exploration to the parts of the body to which the sensors are attached (their hands), undermining the physical aspect of the experiment. How can we create affordances that enable or facilitate the interaction of dancers with the sounds they generate? A basic prerequisite is that the dancers be able to understand the causal relationship between their movements and the resulting sound. Recipes for this are quite common in interactive scenarios, the two most prominent types being: (a) the dancer triggers the start or end of a recognisable sound by performing an action that she can reproduce easily and reliably; and (b) the dancer controls a parameter reflected in an easily identifiable perceptual sound quality, such as pitch or amplitude, by changing the position, orientation or speed of a sensor. However, such simple scenarios can easily become trite and overused. Our effort to expand and enrich these basic ideas by adding greater varieties of triggered sounds and more control parameters hit the complexity barrier almost immediately: it became very difficult for dancers to become sufficiently familiar and fluent with interfaces that involved more than two or three sounds or parameters. Over time we tested several alternative strategies for dealing with these challenges. The most prominent insight came in January 2022, immediately preceding the start of our collaboration, in a rehearsal with Jun Takahashi, who had been accompanying the project since May 2018.
The suggestion was to reduce the number of control parameters to the minimum (one parameter) and to simplify the control modality as well (simple switching of a sound on and off). This resulted in a mechanism that the duo of dancers (Takahashi/Hisai) could explore immediately during that rehearsal, right after its implementation (see video):
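The essence of this reduced modality can be sketched as a one-parameter gate with hysteresis (the threshold values below are illustrative assumptions; the actual implementation is part of the SuperCollider library):

```python
def make_gate(on_threshold: float = 0.6, off_threshold: float = 0.4):
    """Return a function turning a continuous sensor value into on/off events.
    Separate on/off thresholds (hysteresis) prevent the sound from flickering
    rapidly when the value hovers around a single threshold."""
    state = {"on": False}

    def gate(value: float):
        if not state["on"] and value > on_threshold:
            state["on"] = True
            return "note_on"       # e.g. start the dancer's assigned sound
        if state["on"] and value < off_threshold:
            state["on"] = False
            return "note_off"      # stop it again
        return None                # no change of state
    return gate
```

Feeding one tilt value per dancer through such a gate gives exactly one reliably reproducible action, which is what made the mechanism immediately explorable.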
We introduced this approach with both groups of student-performers in our workshop. This reduction to essentials proved a decisive litmus test of the cognitive and practical competence of the students. Analysing this in more detail would go beyond the scope of the present paper.
Control, Causality, Complexity, Simplicity
A first and essential need felt by the performers was to understand to what level and extent they could control the sound through their body movements. The fact that the sensors were responsive not to a wide range of movements from different parts of the body, but only to the part of the body the sensor was attached to, was experienced as a limiting factor. This was a technical limitation: for cost reasons, only four wearable sensors were available to each group of student-performers at any moment. For this reason we chose to experiment with one or at most two sensors worn on the hands of each student-performer. Technologies that track the joints of the entire body cost orders of magnitude more (we are currently examining alternatives that would be affordable for our institutions in future work). However, this limitation is just one side of the problem. It is complemented by a second, possibly more fundamental factor, namely the difficulties arising from the multidimensionality of the control parameters. Each IMU sensor provided a three-element vector representing acceleration along the spatial axes x, y and z. Of these we used only x and z, since they were the most reliable: they reflected roll and pitch rotation relative to the vertical direction of Earth's gravity field. We could thus use two control parameters per sensor, or at most four parameters per person, to control sound. This could be further extended by letting several persons operate on a single sound process. While the urge to add more dimensions, control parameters and sensors seems an instinctive choice at first sight, the complexity added by increasing the control dimensions almost immediately led to confusion, which was more frustrating than the limitations of using the smallest number of sensors. Thus, enriching the responsiveness of the system is a difficult task requiring extensive engagement and training with the system from all parties.
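The two usable parameters can be derived as tilt angles, since a static accelerometer measures the direction of gravity. The following Python sketch (a simplification for illustration, with an assumed [0, 1] control range rather than the project's actual scaling) shows the derivation, and also why one rotation axis is necessarily lost:

```python
import math

def tilt_controls(ax: float, ay: float, az: float):
    """Derive two tilt angles from raw accelerometer axes, assuming the only
    acceleration present is gravity. Rotation around the gravity axis itself
    leaves the measured vector unchanged and so cannot be recovered from an
    accelerometer alone, which limits the usable control axes to two."""
    roll = math.atan2(ay, az)                    # rotation about the x axis
    pitch = math.atan2(-ax, math.hypot(ay, az))  # rotation about the y axis
    # Map roll from [-pi, pi] and pitch from [-pi/2, pi/2] into [0, 1].
    return (roll / math.pi + 1) / 2, pitch / math.pi + 0.5
```

A sensor lying flat (gravity along z) yields the midpoint 0.5 on both outputs; tilting the hand then sweeps the two controls smoothly away from that rest position.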
Alternatively, it is possible to use the limited and partial control or responsiveness of the system creatively as an expressive tool, by purposely contrasting the movements of other parts of the body with those of the part which controls the sound. This could be seen as a higher-level training exercise and could be attempted in future work. It should be noted that this assumes an analogue or causal relationship of perceptual similarity between the characteristics of sonic events and those of dance movements. This presupposition creates a simple and readily available point of reference. On the other hand, causal relationship or similarity is by no means a prevailing rule in dance with music. We believe that exploring other types of relationship, beyond perceptual similarity or (quasi-)causality, could be a very fruitful and important avenue. It poses, however, considerable challenges, which stem from the need to devise semantic interrelationships between sound and movement beyond similarity or causality. We did not shy away from this issue: we explored non-causality and complexity as a possible basis for enhancing sensitivity and finding alternative ways of expression and movement formation, as explained in the following section.
Chaotic and complex algorithms as a way to enhance sensitivity using sound-movement feedback mechanisms
Algorithms with very complex behavior were effective in encouraging performers to explore fine nuances of the sound and to tune their movements carefully to the response of the sensor-driven sound synthesis. These algorithms produced sounds with striking characteristics, which were more effective in solo sessions than in ensembles; incorporating them into ensemble or dialogue settings remains an open challenge. Two characteristic examples are given here from sessions conducted prior to the workshop with the professional dancers Jun Takahashi, Asayo Hisai and Tasos Pappas-Petrides:
In our experiments with students, we found out that these sounds stimulate imagination and curiosity and motivate students to experiment with the system as a way to create novel sounds. On the other hand, the complexity of this approach makes it unsuitable for ensemble work with students. We therefore limited use of such algorithms to solos, which we framed within ensemble pieces, in the manner of solo-tutti sections of the baroque concerto grosso.
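The StandardL solos mentioned earlier use SuperCollider's StandardL UGen, which iterates the Chirikov standard map. A minimal Python sketch of the same map (the parameter values here are illustrative) shows how a single control value k moves the output between near-periodic and fully chaotic regimes, which is precisely the sensitivity the dancers were tuning into:

```python
import math

def standard_map(k: float, x: float = 0.5, y: float = 0.5, n: int = 100):
    """Iterate the (Chirikov) standard map, the chaotic system behind
    SuperCollider's StandardL UGen, and return n successive x values
    normalized to [0, 1), usable e.g. as frequency controls. The parameter
    k sets the degree of chaos: small k gives quasi-regular orbits, large
    k fully chaotic ones."""
    out = []
    for _ in range(n):
        y = (y + k * math.sin(x)) % (2 * math.pi)
        x = (x + y) % (2 * math.pi)
        out.append(x / (2 * math.pi))
    return out
```

Because the map is deterministic yet extremely sensitive to its parameters, small changes in a sensor-driven k produce audibly different but repeatable textures, rewarding careful, fine-grained movement.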
Two groups of students and supervisors from the Performing and Digital Arts department in Nafplion and the Audiovisual Arts department in Corfu collaborated on a workshop that had two parts. The first part involved separate rehearsals and teleconferences between weeks 1-7, while the second part, between weeks 8-12, included joint rehearsals and live coding experiments with students and instructors from both locations. During the rehearsal period, various pre-existing examples from choreography and music were examined, and dance exercises based on these were tried out. Additionally, basic choreographic games suitable for building remote dialogs between individual performers or groups of performers were created. The following discussion presents these choreographic games and tasks in chronological order, from the perspective of the group in the Performing and Digital Arts Department in Nafplion, led by Stella Dimitrakopoulou.
A Dancing Duel: Scene 3
During the initial weeks of rehearsals, the two groups worked separately, each following a distinct path. The students in Nafplion, who were enrolled in the courses "Music and Performance" and "Dance and Technology: Tools for Composition and Improvisation," were introduced to Xenakis' "Duel" score. After studying the rules and structure of this compositional game, they were asked to create and present their own performing games in small groups. Several exceptional examples were experimented with in the classroom.
One of the games involved a Dancing Duel between two groups of students. Each group was tasked with choreographing three small, repeatable movement motifs inspired by machines, which could move in space. These motifs were created in three different ways: 1. the performer moves as a machine, 2. the performer moves as a small part of a machine, or 3. the performer moves as if using a machine. Nine choreographic motifs were selected from these, and all students learned them in order to participate in the Dancing Duel.
Later on, the students were introduced to pre-existing choreographic scores by contemporary choreographers such as Anne Teresa De Keersmaeker, Elizabeth Streb, Alice Sheppard, Lucinda Childs, and Deborah Hay. These scores were presented in their different notations (Edmunds 2020), allowing the students to gain practical engagement with choreographic scores. The students were encouraged to understand the various forms that a choreographic score can take based on the work's needs and each artist's aesthetics. Later, they used these scores as supporting tools to document the nine choreographic motifs they had created (as shown in figures 6, 7, and 8) and to incorporate them into their choreographies.
After adding sensors, the nine motifs were slightly modified to enhance the combination of movement and sound. With the sensors on, the students were instructed to pay attention to the sounds they produced while performing the nine motifs. When needed, a hand gesture was added to each motif to control the sound and create nine recognizable and repeatable sound motifs based on the duration of the movement motifs.
The Dancing Duel was a game played in duets, with two dancers in each round selecting two of the nine motifs to perform simultaneously in the same space. The objective was to find ways to combine the two motifs (movement and sound) in the best possible manner. Two examples of the dancing duets, along with the nine movement motifs, are displayed in the video documentation of a rehearsal below:
The Dancing Duel game was showcased in the third scene (refer to Scene 3 in Figure 5) at the Xenakis conference. While the game was played by the duets in Nafplion, the students in Corfu improvised based on the soundscape created by the Dancing Duel.
Quad (1981) by Samuel Beckett: Scene 2
As part of our research on different choreographic scores, we also examined the score of Quad (1981) by Samuel Beckett. This score is simple and straightforward, making it relatively easy to decipher and perform. Additionally, as a choreographic canon, it provided a gradual way to introduce the performers and sounds on stage. Initially, the students in Nafplion learned the choreography from the original score (Beckett 1984). Once sensors were added, we decided to incorporate a hand movement for each performer in order to activate and deactivate the sensors. The sounds used were single tones, to emphasize the repetitive rhythmic pattern and canon in the choreography. The following short video documentation is from one of the first rehearsals of Quad, where the choreography was combined with sound using sensors.
At the Xenakis conference, the students in Nafplion performed Quad during the first part of the second scene (refer to Scene 2, Part 1 in Figure 5). The rhythmic pattern of Quad served as a foundation, and the students in Corfu improvised with various sounds layered on top of it.
Telematic dialogues: Scene 1
While the students in Nafplion commenced their research by exploring choreographic scores and games such as Duel and Quad, the students in Corfu initially experimented with the sensors through a series of basic dance exercises. This was also because the group in Corfu had an earlier opportunity to work with the sensors, while the group in Nafplion faced technical difficulties and did not start using the sensors until the second part of the research, which took place from weeks 8 to 12. Once the group in Nafplion gained access to the sensors, we conducted some exercises jointly with the group in Corfu.
When we began incorporating technology into the students' dance practice, it became clear that we needed to start with simple scores and exercises. This was because the student-performers needed time to learn how the technological tools reacted to their movement, and only then could they improvise with both sound and movement. One example of the exercises we tried out was mirroring each other's movements: a performer in Nafplion would dance while a performer in Corfu simultaneously copied their movements through the camera, and they would then switch roles. Another exercise involved improvisation using basic elements of dance. This included exploring different levels of space (low, medium, and high), varying tempo with slow and fast movements, and using only staccato or legato qualities. We also incorporated a dialogue between performers A and B using these basic elements of dance. For instance, if Performer A moved in staccato, Performer B would respond with legato movements. Finally, we turned off the cameras and continued the dialogue using only sound.
From the exercises described above, we selected the following parts to be presented in Scene 1 of the conference (see Figure 5):
Overall, we started with few parameters that were immediately connected to perceptible sound characteristics, and then gradually enriched these until the means for control and causal relationships between controlling movement and resulting sound became opaque. We then reset to an even more minimal sound control paradigm, which we used as the basis for building group choreographies based on Duel and Quad. In these choreographies, several performers built one sound pattern through collaboration.
Due to the remote locations of the two groups, we frequently had to rehearse and provide guidance online. This was not a significant challenge, as we were already familiar with distance learning, having gained experience through online teaching during the Covid-19 pandemic. In fact, we found the use of cameras and video calling platforms to be helpful for our collaboration. Additionally, we experimented with turning off the cameras and focusing solely on the sound, which allowed us to explore dialogues between performers in both locations through communication solely via the sensors.
We confronted this issue at a late stage of our work, after the basic mechanics of our performances had been tested and the repertory of presentable material had been selected, and we faced the need to put it in a final order for presentation at the Xenakis conference. We had hardly any room for experimentation, and no precedents to orient ourselves by. Instinctively, we chose as the orienting criterion for building our scenario the task of introducing the audience step by step to the technical mechanisms underlying the performance, in such a manner that they could perceive the effect of each mechanism and how the form of the performance arose from it. Thus the technology and its effect on the artwork unwittingly became the main theme of our performance narrative at the final presentation. It was interesting to notice during the presentation the rise of additional layers of narrative through the dynamics of the interaction between the two performance groups and their members. Each performative act, such as an entry, a change of position, or the addressing of or response to other partners in the performance, became a target of attention and interpretation, especially since it was always framed by the need to guess the response of the system to the action and its role in the ensemble of the performance. It helped both the performers and the audience that we had chosen very clear and simple interaction scenarios, minimizing the degree of complexity and ambiguity. Even so, we experienced one failure of coordination between the two groups, which is analysed below in the evaluation section.
A second, equally important functional criterion for the performance was the need to preserve a sense of causality or orientation. In other words, both the performers and the audience felt safer when they could identify the source or cause of events or changes in the system at each point. For example: what movement triggers what sound event? How does a movement modify the characteristics of a sound stream? This condition of straightforward causality rapidly becomes complicated when large numbers of performers are involved on stage, even more so when the sound structures are changed live by coders who are not visible to the public, and more still when a remote group of performers, possibly only partially visible to the public, is influencing the sound. In this situation, careful stage setting and planning of the structure of the presentation is absolutely essential.
However, while causality is a first clear point of orientation, it is not an absolute condition for the performance to function well. The large majority of the classical music repertory depends only partially on it (in orchestral or organ music, for example, the performers may be only partially visible, or not visible at all). Here, other factors take the leading role. These may be described firstly as an implicit understanding of the causality, due to knowledge of the rules of composition or performance practice underlying the piece; secondly as a sense of cohesion, based on knowledge of the compositional rules of the style; and thirdly as sheer immediate perceptual affect or emotional impact, due to the sonic characteristics of a passage. In other words, "making sense" could refer to several distinct things:
The idea of inviting the audience to try out our tools and form their own performance arose during one of our final discussions before the presentation. This idea had very positive results (see the YouTube video above, "Connecting Xenakis work to cyberphysical performance - Xenakis 22 Centenary International Symposium"). While this was a straightforward yet effective move, it invites closer inspection, since participation is one of the foremost means of creating common codes of reference, which lead to the development of a style, genre or common artistic language. In this context it is worthwhile to point out the relationship to complex and highly evolved, primarily or entirely oral musical traditions: for example, the music and dance of Sub-Saharan African cultures (West African drumming, Taarab and Pygmy music, the ensemble musics of Uganda, Malawi and Madagascar), the music traditions of the Indian subcontinent, Persia and Turkey, and many other cultures. This is pertinent in the present context of collaborative art practices that are still at an embryonic stage, such as that of the present project (see section Conclusion, Outlook).
In our efforts to shape our presentation so that the audience would make sense of it, we implicitly relied on some of the mechanisms underlying the construction and perception of narratives explained above to plan what we called a "meaningful scenario". Retrospectively, this work enabled us to build a general overview of the notion of narrative in its broader sense. This experience of the function and aspects of narrative (causality and the different forms of continuity and orientation) in an experimental performance form facing communicative challenges gave us a concrete sense of these aspects and their interplay, which can be useful for guiding future work.
First performances involving dance with sound controlled by the movements of the dancers appeared as early as 1968, with Merce Cunningham encouraging improvised responses to sounds and visual cues in interaction with the objects of David Tudor's "Rainforest", and with the use of sensors to control sound pioneered by Carol Scothorn in collaboration with Max Mathews in "Gesture Controlled Music" (1972). Since then, prominent examples have grown in number, including pieces by artists and groups such as Wayne McGregor and Frieder Weiss, Survival Research Labs, Daniel Rozin, Aakash Odedra Company, Troika Ranch, and Adrien M & Claire B (Adrien Mondot and Claire Bardainne). Nevertheless, this type of dance has not yet reached the status of an independent and established performance genre, since the instances are still too rare and isolated to form a shared repertory of techniques and styles. In other words, there is no shared framework of choreographic or musical patterns for artists to build on, and for audiences to rely upon in their appreciation of performances. Consequently, we feel that it is too early to place the main emphasis on evaluating the aesthetic outcome of this experiment as performance art or as a dance genre. Our motivation, implicit more than conscious, was to explore the potential of telematic collaboration in technologically augmented performance art forms. For this reason, we take a step backwards and comment on the entire project from a global point of view according to four criteria, ranging from technological feasibility to cultural impact.
At the outset of the project, we already had a system tested in performances, but it was unclear whether it would work in an educational setting where a dance specialist (Stella Dimitrakopoulou) and a performance art specialist (Hari Marini) lead undergraduate classes. We had to deal with the transfer and setup of sensors and, most importantly, to ensure that we could transmit sensor data as well as code via the Internet using OSC. This was not a trivial task, as the classrooms were in spaces protected by the University's firewall; we thus had to log into the University's VPN in order to communicate. Overall, the technology of the project, while spartan and minimal, was flexible enough to function even under the stresses of leading and instructing undergraduate students in class, and to permit experimentation and development of performance concepts in class. In conclusion, our sense was that basic technology, if carefully chosen and designed, can serve well under demanding and complex circumstances in telematic collaborative settings.
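Transmitting sensor readings as OSC over UDP needs only a thin encoding layer on top of the standard library. The sketch below is illustrative rather than the project's actual code (which relied on OscGroups to relay packets between sites behind firewalls); the address `/sensor/gyro` and the helper names are assumptions:

```python
import socket
import struct

def osc_string(s: str) -> bytes:
    """Encode a string as OSC requires: NUL-terminated, padded to 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((-len(b)) % 4)

def osc_message(address: str, *values: float) -> bytes:
    """Pack an OSC message with float32 arguments into wire format."""
    type_tags = "," + "f" * len(values)          # e.g. ",f" for one float
    payload = b"".join(struct.pack(">f", v) for v in values)
    return osc_string(address) + osc_string(type_tags) + payload

def send_reading(host: str, port: int, reading: float) -> None:
    """Send one sensor reading as a single UDP/OSC datagram."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(osc_message("/sensor/gyro", reading), (host, port))
```

A SuperCollider process listening on the given port can then pick up such messages and map them to synthesis parameters.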
Handling a system that uses wireless sensor input to control sound and at the same time coordinates with a remote system over the internet, sharing both program code and sensor data in real time, is not a trivial task. While in Corfu this task was handled by Iannis Zannos (the author of the system), and during the final presentation by Martin Carlé, Postdoctoral Researcher at the HAL project, in Nafplion the system was handled by the instructor, Stella Dimitrakopoulou, herself. Dimitrakopoulou installed on her Linux laptop the required code for sensor data acquisition (in Pydon, a variant of Python) and for sound synthesis (in SuperCollider), and then, in the course of several local and remote sessions, learned how to run the system. The code for the experiments was sent live from Corfu during each session, enabling Dimitrakopoulou to run the system without any significant attention overhead while at the same time instructing and communicating with the remote session partners via a video-calling platform. This experience was encouraging, and indicates that such an approach is suitable for future work.
In Corfu, the first experiments were done with a small number of students, trying out some of the more interesting-sounding algorithms in front of the class. The striking character of the sounds, and the direct way in which the dancers' movements produced them, made a strong impression: it captivated the attention of the students and motivated some of them to dedicate themselves to the project. Communicating the mechanics of the system to the students required some patience and careful work, but in the end we managed to coordinate ensemble work and to keep the students engaged.
The system's potential to create interesting and striking sounds and interaction mechanisms was appreciated by both creators and performers. It was necessary to test, modify, and improve the sensor interaction and sound control mechanisms on the fly, and this was possible even under the demanding and limiting circumstances of working with a class of undergraduates with no previous knowledge of dance. Under these circumstances, however, only the most basic improvements could be carried out, and we had to limit ourselves to a repertory that would ensure that the students could perform with the system and carry out the interactions and scenario exercises we envisaged. The potential of the system was felt, yet much work remains to refine the interaction mechanisms and to enrich and fine-tune the sound repertory. To help in this task, we saved code from the live-coding sessions for review. Towards the end of the work period, we developed a mechanism for saving both the code and the sensor data of entire sessions, which will enable us to evaluate future work in depth and will aid in the refinement of interaction modes and sounds.
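The principle behind saving whole sessions can be sketched as a timestamped, append-only log that records both live-coding snippets and sensor messages in one stream, so that a session can later be replayed or analysed offline. This is a minimal illustration under our own assumptions, not the project's actual implementation:

```python
import json
import time

class SessionLog:
    """Append-only log of live-coding snippets and sensor messages.

    Each record is timestamped relative to the session start, so the
    interleaving of code changes and movement data is preserved.
    """

    def __init__(self):
        self.start = time.monotonic()
        self.records = []

    def log(self, kind, payload):
        """Record one event; kind is e.g. 'code' or 'sensor'."""
        self.records.append({
            "t": time.monotonic() - self.start,  # seconds since session start
            "kind": kind,
            "payload": payload,
        })

    def dump(self):
        """Serialise the whole session as JSON lines for later replay."""
        return "\n".join(json.dumps(r) for r in self.records)
```

Replaying such a log at its recorded timestamps would reproduce both the sound-synthesis state and the dancers' control data of a past session.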
For ensemble work, it was crucial to limit ourselves to the most basic interaction and sound modalities. We defined a framework where each performer controls a single note, switching it on and off with a turn of their hand. This enabled us to choreograph ensemble work for creating simple rhythmic patterns with four players in Nafplion (in Quad), answered by free improvisations from two players in Corfu. At the final performance, the coordination between the performers' actions and the response of the system could not be established in time, and we had to proceed with a flawed performance. While it did not correspond to our original score, it could nevertheless be carried out to completion. This was an important lesson in dealing with failure: it taught us the worth of persevering during a show and of placing trust in the performers' capacity to cope, as well as the need for better fail-safe mechanisms for catching situations where the software state falls out of sync with the behavior expected by the human performers.
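The single-note on/off interaction described above can be sketched as a simple toggle driven by the wrist-roll reading. The class name, the normalised input range, and the threshold values are our own illustrative assumptions; the hysteresis (two separate thresholds) is a standard way to keep sensor jitter from re-triggering the note:

```python
# Sketch of the single-note interaction: a turn of the hand pushes the
# wrist-roll reading past a threshold and toggles the performer's note.
# Threshold values and the normalised 0..1 input range are assumptions.

class NoteToggle:
    def __init__(self, on_threshold=0.7, off_threshold=0.3):
        # Hysteresis: separate on/off thresholds prevent jitter around
        # a single value from flipping the note on and off rapidly.
        self.on_threshold = on_threshold
        self.off_threshold = off_threshold
        self.playing = False

    def update(self, roll):
        """Feed one normalised wrist-roll reading; return 'on', 'off' or None."""
        if not self.playing and roll >= self.on_threshold:
            self.playing = True
            return "on"
        if self.playing and roll <= self.off_threshold:
            self.playing = False
            return "off"
        return None  # no state change for this reading
```

With four such toggles, one per player, simple rhythmic patterns emerge directly from the choreography of hand turns.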
A shared language forms a reference for telling stories and relating shared experiences, and also a framework for evaluating the experience of the artwork. In other words, it is what makes members of the audience say "I understood / did not understand this piece" or "I enjoyed / did not enjoy this piece", or express other reactions that reflect their engagement with the work. At the most general level, this could be described as responding to the need for meaning. In general, we interpret this as the ability of the listeners to make sense of the performance by creating meaningful connections to their own experience, as discussed in section "Relationship to the Audience: Creating meaningful scenarios for public presentation" above. Our main points of orientation in the design of our presentation were the careful mise-en-scène of the elements, to reveal the causal relationships between movement and sound, and the use of existing choreographic and musical form patterns or models. During the very limited time of our experiments, we also explored sound textures and interaction styles freely, and selected those elements that drew our attention and seemed most attractive. In the end, we constructed our own artificial narrative based on the choreographic and musical vocabulary of the group, and used this to create the scenario of the presentation. We kept this as simple and clear as possible in order to make it accessible to the audience. Naturally, our wish for future work is to engage broader narratives on other themes, to test the potential of this kind of work as a performance genre.
Here we have only outlined some of the most readily recognised aspects of this work. It is clear that this kind of work does not fall within an established practice domain, and as such it cannot be judged by the criteria belonging to mature art forms. However, the most pertinent contribution of this experiment is, in our opinion, its networked collaborative aspect. Compared to the telematic artworks of established artists such as Johannes Birringer, the distinguishing characteristic of the present work is that it uses a telematic framework as a platform for enabling the collaborative development of works. While the main idea for the project was developed through research and dialogue with conventional means, the actual development of the project relied on collaboration using the telematic framework as an enabling platform, and its use was organically coupled with the artistic process. Several factors indicate that the work discussed here constitutes an early pilot scenario for developing more widespread collaborations among artists who use digital technology and telematics to create artworks collaboratively, and it may well constitute the seed of a new genre and tradition with its own repertory of stylistic elements. Clearly, the challenges of telematic collaboration are considerable. At present, telematic platforms are at an embryonic stage, and there are as yet no signs of a mature standard that can be shared by communities. Moreover, advanced technical fluency with collaborative platforms is limited to a minority, and the community of technologically literate persons with artistic ambitions is very small. On the other hand, many factors seem to favor this direction and can contribute to the further growth of such communities: code sharing and digital content sharing are already thriving and constantly growing practices.
Hardware and software tools abound and continue to multiply: tools for motion-driven interaction with live sound and image synthesis, for live collaboration with audio and video, and for collaboration at all levels, ranging from the sharing of code and media assets, to the collaborative live shaping of text, video and sound, to live videoconferencing. All these factors indicate increasing support for projects similar to the present one. Most important, however, is whether there is sustained and strong incentive within communities to overcome the technical barriers and to motivate the level of engagement required to shape this kind of work into an artistic practice. Our experience has shown signs of such strong engagement and fascination with the field, and has given us a first taste of what this entails. Our wish remains that we can continue to share such experiences in the future, and our hope that this will give rise to distinct artistic practices that result in their own communities, traditions, and cultures.
This project was carried out with financial support from HAL, a research project at the Department of Audiovisual Arts of the Ionian University, without which the project would not have been possible, and to which the authors are indebted.
The authors thank Hari Marini, the class instructor at the Department of Audiovisual Arts of the Ionian University, for generously dedicating the semester of her class in "Narrative and Performance Arts" to this experiment, and for spending many hours of overtime to carry this experiment to its successful conclusion. Thanks are also due to composer Takumi Ikeda for agreeing to live code for the project from Tokyo, during rehearsals and during the final performance. Last, but not least, thanks are due to Martin Carlé, Postdoctoral Researcher at the HAL research project of the Department of Audiovisual Arts of the Ionian University, for offering precious technical support and solving hardware and software problems, which enabled Stella Dimitrakopoulou to use the system in Nafplion, and for operating the performance platform in Corfu at the final workshop and public performance.