Making a pitch map for a vibrotactile feedback instrument

Article by Øyvind Brandtsegg
The pitch of a feedback instrument is governed by a great many factors, and small variations in initial conditions can significantly affect the resonance potential. Consequently, the relationship between performance gesture and resulting pitch can be complex. Deterministic but complex, sometimes bordering on the chaotic. This article shows the work process towards a pitch map of a vibrotactile feedback instrument. The resulting map is not intended to chart out specific pitches. Rather, we are looking for pitch trajectories in relation to the topography of the playing surface.

In my creative practice, I have been using feedback systems in various flavours over the last 25 years or so. In the context of this article, I would like to focus on the intrinsic complex behaviour of the audio feedback circuit, how to understand it better and learn to control it for expressive purposes. As the pitch of a feedback instrument is governed by a great many factors, and small variations in initial conditions can significantly affect the resonance potential, the relationship between performance gesture and resulting pitch can be complex. Deterministic but complex, sometimes bordering on the chaotic.

“Develop instrumental technique that is appropriate to the task of directing the flow of the soup” (Nicolas Collins, “Pea Soup”, 1976)

… so how, other than trial and error, do we go about developing the appropriate instrumental technique?

The attempts at a pitch map shown in this article are not intended to chart out specific pitches. Rather, we are looking for pitch trajectories in relation to the topography of the playing surface. The instrument subjected to mapping is based on finger-mounted piezo pickups in combination with a transducer on a resonating surface. The instrument is described in more detail in the paragraph Finger mounted piezos later in this article. Playing a feedback instrument will always be based on a dialogue with the instrument, sounding out, responding to what the instrument gives. Given an initial response from the instrument, the map can perhaps help create an understanding of where we can go from here.

Background

Audio feedback has been used in many aspects of artistic practice, as seen for example in the sonic performance works of Alvin Lucier, Éliane Radigue and many others. Sanfilippo and Valle (2013) give a thorough description and analysis of feedback in musical instruments. The making, playing and conceptualising of feedback instruments is discussed by Eldridge, Kiefer, Overholt and Ulfarsson (2021). The work on creating a pitch map stems from my personal experience of making and playing feedback instruments in several different contexts. As my own reflections on the subject come from personal experience and artistic goals in various projects I have done, I will include a short description of these projects here.

[Figure: tube microphone]

Resonating aluminum tubes (2005-2007)

In my doctoral studies, I worked on techniques for improvising with, and together with, computers. Part of the work centered on making software agents that could respond in a musically meaningful way to external input from a musician. This again led to an interest in systems that have an agency of their own: artificial intelligence, artificial life, adaptive and interactive systems. Many of the techniques I worked with were purely digital, software models. In that context, audio feedback circuits provided a kind of bridge between the physical world and the “in the box” digital adaptive systems. Thus, I became interested in investigating feedback techniques as a hybrid physical/analog/digital system with affordances and agencies of its own. Inspired by Agostino Di Scipio’s “Modes of Interference”, I set up a system consisting of a resonant physical object combined with adaptive digital processing. As the resonating physical object, I used aluminum tubes with diameters in the range 6 to 20 cm and lengths of 1 to 3 meters. A microphone was mounted inside each tube, and a speaker either inside the tube or outside, in proximity to the tube.

The adaptive signal processing was done digitally and consisted of

  1. Automatic gain control,
  2. Adaptive spectral masking (FFT analysis, gain reduction on strongest bands before resynthesis), and
  3. A bank of adaptive band-reject filters using pitch tracking to control the cutoff frequencies.

The spectral equalization provides dynamic and quite controlled feedback reduction while the sweeping (due to pitch tracking) band-reject filters create a lively and almost predictable extra layer of resonance suppression.
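To make the spectral masking stage (item 2 above) more concrete, the following is a minimal block-based sketch in Python. It is not the original implementation; the function name, the number of attenuated bins and the reduction amount are illustrative assumptions, and a real-time version would add windowed overlap-add between blocks.

```python
import numpy as np

def spectral_mask_block(block, n_strongest=8, reduction_db=-12.0):
    """Adaptive spectral masking on one audio block: analyze with an FFT,
    attenuate the strongest bins (the ones most likely to run away in a
    feedback loop), and resynthesize."""
    windowed = block * np.hanning(len(block))
    spectrum = np.fft.rfft(windowed)
    magnitudes = np.abs(spectrum)
    strongest = np.argsort(magnitudes)[-n_strongest:]   # indices of the loudest bins
    gains = np.ones_like(magnitudes)
    gains[strongest] = 10.0 ** (reduction_db / 20.0)    # e.g. -12 dB reduction
    return np.fft.irfft(spectrum * gains, n=len(block))
```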

With these processing techniques, the resulting feedback oscillations would continuously shift between different potential resonances of the aluminum tubes. This creates an ambient, drone-like, continuously evolving sonic texture.

[Figure: Irrganger collage]

This technique was used for live performances in 2005-2007, and was also developed into a sound art installation, “Irrganger” (“Labyrinths”). This work was a collaboration with visual artist Viel Bjerkeset Andersen for the new building of Steinkjer high school (2007). She designed aluminum tube sculptures representing walkways and corridors of the building. We fitted the sculptures with small internal speakers and microphones, enabling an audio feedback path through the resonant chambers of the sculptures. The design thus allowed sound to flow through the sculptures in a similar manner as students and teachers moved through the building.

[Figure: Irrganger sculpture]

[Audio example]
[Figure: feedback microphone]

Shotgun mike, “Feedback piece” (2012)

With the aluminum tubes described above, the physical system does not change (much), and the adaptive audio processing algorithms control most of the variations that happen in the sound. As a more performative approach, I explored the use of handheld supercardioid (shotgun) microphones to be able to seek out room modes and hot spots created by speaker characteristics. Obstructing the sound path by placing a hand between speaker and microphone would also affect the overall frequency response, and changing the distance between speaker and microphone could be used as an overall tuning strategy. The concept of seeking out room modes in this manner is somewhat similar to Nicolas Collins’ ideas for “Nodalings” (Collins 1974). The digital audio processing chain for the shotgun microphones was similar to the setup used for the aluminum tubes, with automatic gain control and adaptive filtering. Additional gain control was enabled by EMG (muscle activity) sensors, where muscle effort of the hand holding the shotgun microphone would increase the gain. The EMG control allowed the performer to tease out resonances in cases where they were very weak, and also to “hold on to” resonances once they had been established. The setup is described in more detail by Lazzarini et al. (2016, pp. 443–453).
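As a hedged illustration of the EMG-to-gain idea, the sketch below rectifies and smooths a muscle-activity signal and maps it onto a feedback gain. The parameter names and scaling are assumptions for illustration, not the actual mapping used in the piece.

```python
import numpy as np

def emg_to_gain(emg, sample_rate, base_gain=0.4, extra_gain=0.5, smooth_sec=0.2):
    """Map muscle effort to feedback gain: rectify the EMG signal, smooth it
    with a one-pole filter, normalize, and add the result on top of a base
    gain so that more effort gives more gain."""
    coeff = np.exp(-1.0 / (smooth_sec * sample_rate))
    envelope = np.zeros(len(emg))
    state = 0.0
    for i, sample in enumerate(np.abs(emg)):
        state = coeff * state + (1.0 - coeff) * sample
        envelope[i] = state
    envelope /= (envelope.max() + 1e-12)
    return base_gain + extra_gain * envelope
```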

[Figure: Superstring Theories DSP signal flow with linked control parameters]

Superstring theories (2017-2019)

This was an experiment with a networked, remotely connected feedback instrument. The piece has been performed in collaboration with Bernt Isak Wærstad on several occasions, most recently at the International Csound Conference 2019.

Superstring Theories is based on a physical model of a string (one could say, an extension of the well known Karplus-Strong model), divided between the two musicians. It is a one-string, two-player instrument. Each musician has part of the string, can tune it and inject energy into it by means of contact microphones, while the injected energy also resonates in the other musician's part of the string. When one part is tuned, this also affects the tuning of the other part. Each part has its own feedback circuit, as well as the global one spanning the whole string. This means that some local control over pitch is maintained even when the other part of the string is re-tuned. For some performances of the piece, the musicians are in geographically separated locations, connecting the parts of the string together over remote network connections; in other performances, the two musicians are located at separate ends of the same room. Even if the instrument does not contain any acoustic or physical feedback circuit, the tone generation is based on the resonances and feedback in the digital string model. Also, the performative situation feels similar to interacting with a vibrotactile feedback instrument, as the nuanced control of feedback gains largely determines the character and potential of the instrument. The string segment at each end uses linked control parameters as shown in the figure.
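The coupled behaviour described above can be sketched as two Karplus-Strong style delay-line resonators with local feedback and a small amount of cross-coupling. This is only a toy illustration of the principle, not the Superstring Theories implementation; all parameter values are assumptions, and the gains are kept low enough for the coupled loop to remain stable.

```python
import numpy as np

def two_player_string(excite_a, excite_b, sr=44100, freq_a=110.0, freq_b=165.0,
                      local_fb=0.85, cross_fb=0.1, damping=0.5):
    """Toy two-segment string: each player owns one damped delay line
    (Karplus-Strong style) with its own feedback gain; energy injected into
    one segment also rings in the other via cross-coupling. The two
    excitation signals must have equal length, and local_fb + cross_fb must
    stay below 1 to keep the coupled loop stable."""
    n = len(excite_a)
    buf_a = np.zeros(int(round(sr / freq_a)))   # delay length sets the tuning
    buf_b = np.zeros(int(round(sr / freq_b)))
    out = np.zeros(n)
    ia = ib = 0
    lp_a = lp_b = 0.0
    for i in range(n):
        ya, yb = buf_a[ia], buf_b[ib]
        # one-pole lowpass as simple string damping
        lp_a = (1 - damping) * ya + damping * lp_a
        lp_b = (1 - damping) * yb + damping * lp_b
        # local feedback plus energy crossing over from the other segment
        buf_a[ia] = excite_a[i] + local_fb * lp_a + cross_fb * lp_b
        buf_b[ib] = excite_b[i] + local_fb * lp_b + cross_fb * lp_a
        out[i] = ya + yb
        ia = (ia + 1) % len(buf_a)
        ib = (ib + 1) % len(buf_b)
    return out
```

In this toy, retuning one segment (changing its delay length) shifts the pitches that the other segment's loop can sustain through the cross-coupled path, loosely mirroring the mutual influence described above.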

[Figure: bone conduction feedback setup]

Bone conduction feedback (2018)

For the opening of the gallery TEKS Studio (Trondheim Electronic Art Center) in 2018, I collaborated with Arnfinn Killingtveit on a multi-part sound and media installation. One of the components of this installation made use of audio feedback through bone conduction. When a transducer is brought into contact with a bone in the human body, a vigorous vibration can be experienced. Feedback is achieved by touching another bone to a piezo pickup, and the resulting resonant frequencies incorporate the natural resonances of the bone path between transducer and pickup. In the installation, the audience could experience this by sitting in a specially rigged sofa, putting their elbow on the elbow rest and touching their finger on the piezo pickup.

[Figure: finger-mounted piezo (labeled video still)]

Finger mounted piezos

Bone conduction feedback techniques were further developed in the project “Vibrotactile materials in artistic performance”, where I collaborated with Alexandra Murray-Leslie (at the Academy of Fine Arts) and Kaspar Lasn (at the Department of Mechanical and Industrial Engineering), both from the Norwegian University of Science and Technology. In this project, we wanted to investigate vibrational properties of different materials using vibrotactile feedback techniques. We looked at vibration modes in relation to object topography, and also in relation to stiffness of the material. In order to facilitate intuitive and flexible exploration of vibration modes, we used a feedback circuit with a transducer mounted on the object, and a piezo pickup mounted on a finger. This allowed for flexible repositioning of the pickup with relatively low noise from mechanical handling. As for the bone conduction installation, very simple audio processing was used, with just compression and equalization - no adaptive filters.

Performative exploration of the different vibrational modes of the instrument is thus possible, as a vibrational mode is activated by touching the corresponding position on the plate. The finger acts as a filter in the feedback circuit, and variations in performative gesture (finger pressure, angle of incidence, touching with the nail or the flesh) can thus selectively bring out potential resonances of the object.

The vibration modes of the plates can be modeled numerically, but a number of additional factors affect the resonances of the feedback loop. These factors are well known from the literature (e.g. Boner and Boner 1966, Nielsen and Svensson 1999), relating to the frequency and phase response of the combined components in the feedback loop. Even though the factors are well known, the interactions between them are complex. The resulting system (instrument) thus balances between instability and controllability. In particular, the influence of phase plays a significant role in the increased complexity.

The object instrumentarium used with the finger-piezo technique was expanded to include architectural elements (window panes, metal staircase) and also kelp-alginate bioplastic plate-like objects. With this instrumentarium, a 17-minute piece was developed for performance at the OnlyConnect festival in Trondheim 2021.

[Figure: screenshots from the OnlyConnect 2021 performance]

Mapping the pitch space

As part of learning to play any instrument, we may want to seek an understanding of how each attainable sound can be produced in a repeatable manner. Not all performers will approach it like this, and not all instruments allow such a controlled technique with repeatable sonic results. Many feedback instrument performers desire the wild and unpredictable, the non-repeatable, the instrumental dialogue bordering on the chaotic. While I am interested in and intrigued by the chaotic and rich interactions of a “wild” instrument, I also search for ways to be able to intentionally produce a specific sonic result. To “play what I hear”, as we learned in jazz school. So, how do we get to know this instrument? Is it best to play intuitively and explore the instrument in action, or can this exploration perhaps be supported by an analytical approach and a map of the potential sounds that the instrument can make? With traditional acoustic instruments, the pitches available on the instrument usually follow relatively simple physical relationships, for example the relationship between the length of a vibrating string (or the length of a tube of air) and the pitch it will produce. With feedback instruments these relationships are somewhat more complex, due to the interaction of the system variables as mentioned above.

[Fig. 8: numerically modeled vibration modes of the plate (collage)]
[Figure: frequency plot, closeup]

Looking at the components of the feedback system, we have tried to analyze some of the elements that we assume contribute significantly to the characteristics of the system. My colleague Kaspar Lasn ran numerical models producing the predicted vibrational modes of the plates. A high number of modes were modeled, of which a select few are shown in figure 8. The audible resonant frequencies of the plate were also measured by tapping the plate and recording the sound. This recording shows the presence of resonances at 80 Hz, 117 Hz, 209 Hz and 375 Hz.
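This kind of measurement can be reproduced with a few lines of numpy/scipy: take the FFT of a tap recording and pick the strongest spectral peaks. The file name and thresholds below are hypothetical; this is a sketch of the method, not the analysis actually used.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import find_peaks

def tap_resonances(path, n_peaks=6):
    """Estimate the strongest resonant frequencies from a recording of the plate being tapped."""
    sr, audio = wavfile.read(path)
    audio = audio.astype(float)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)            # fold to mono
    audio = audio / (np.max(np.abs(audio)) + 1e-12)
    spectrum = np.abs(np.fft.rfft(audio * np.hanning(len(audio))))
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sr)
    peaks, props = find_peaks(spectrum, height=spectrum.max() * 0.05)
    strongest = peaks[np.argsort(props["peak_heights"])[::-1][:n_peaks]]
    return sorted(freqs[strongest])

# For the tap recording described above, one would expect values
# in the vicinity of 80, 117, 209 and 375 Hz.
```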

The potential resonances of the complete feedback system are also affected by the frequency response of the pickup, the transducer and the processing circuitry. NTNU colleague Peter Svensson helped me measure the frequency and phase response of the feedback circuit without the plate, touching the pickup directly to the transducer.

These models did help in gaining an understanding of the contributing factors for feedback potential and thus the available pitches of the instrument. The general properties are understood. Still, the specific properties that the instrument shows when it is being played are more complex than I am personally able to deduce from the models alone. We can assume that the influence of the phase response is in large part responsible for this complexity, as we can control playing position (selection of vibrational mode) and frequency response (how we use the finger towards the plate, flesh vs nail and so on), but it is harder to intentionally control the phase response. We have, however, experienced what may be a phase shift due to varying finger pressure. More on this below. As shown by Boner and Boner (1966), feedback oscillations can occur even where there are no significant peaks in the frequency response, given the appropriate phase conditions.

[Fig. 10: 9x9 grid of sampling positions on the plate]

Sampling and mapping

As a complementary approach to the numeric and analytical models, I then made an attempt to create a map of the actual frequencies available when playing the instrument. A set of audio samples were recorded, produced by moving the pickup along a grid of positions (see fig 10). Each audio sample thus represents the produced pitch for the pickup at each physical position. An attempt was made to minimize the influence of finger pressure by keeping the pressure as equal as possible at each sampling point. Admittedly, this is not precise enough for a scientific investigation, but it was deemed to at least give some insight. The results should be interpreted with this potential noise in mind.

[Fig. 11: pitch map, 9x9 grid, table-mounted transducer]
[Image: table-mounted plate]
[Fig. 12: pitch map, 9x9 grid, handheld transducer]
[Fig. 13: 3D pitch map, 9x9 grid, handheld transducer]

The audio samples were then subjected to a regular spectral analysis (FFT), using peak picking to indicate the resonating frequencies. Each audio sample would usually exhibit several resonating frequencies, and this raises the question of how the pitch map can be represented and visualized in an intuitive manner. It seems natural to let the physical dimensions of the plate be represented in a 2D plot, as in the grid of fig 10. To allow a visual representation of the multiple frequency peaks for each of the sampling positions, I used circles centered on each position, with large circles representing low frequencies and smaller circles higher frequencies. As the frequency of each resonance is an important aspect of the map, a color map was also used to further enhance the visualization. Several different color maps were tried. To allow all frequencies a similar visibility, a perceptually uniform colormap should be used. A rainbow colormap from https://colorcet.com is used in the current implementation. Line width for each circle represents the amplitude of the peak, so it is to some degree possible to discern which frequencies resonate more strongly.
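A minimal matplotlib sketch of this circle representation is given below, assuming the analysis results are already available as a dictionary of peaks per grid position. The colormap lookup via colorcet and the exact radius/line-width scaling are illustrative choices, not the published plotting code (which is available in the repository linked at the end of the article).

```python
import numpy as np
import matplotlib.pyplot as plt
import colorcet as cc  # perceptually uniform colormaps

def plot_pitch_map(peaks_per_position, fmin=50.0, fmax=2000.0):
    """Draw one circle per spectral peak at each grid position.

    peaks_per_position: dict {(x, y): [(frequency_hz, amplitude_0_to_1), ...]}.
    Low frequencies get large circles, high frequencies small ones; color
    follows a perceptually uniform rainbow colormap; line width follows
    peak amplitude."""
    cmap = cc.cm["rainbow"]
    fig, ax = plt.subplots(figsize=(6, 6))
    for (x, y), peaks in peaks_per_position.items():
        for freq, amp in peaks:
            t = np.clip(np.log(freq / fmin) / np.log(fmax / fmin), 0.0, 1.0)
            radius = 0.45 * (1.0 - 0.8 * t)        # low frequency -> large circle
            ax.add_patch(plt.Circle((x, y), radius, fill=False,
                                    color=cmap(t), linewidth=0.5 + 3.0 * amp))
    ax.set_xlim(0, 10)
    ax.set_ylim(0, 10)
    ax.set_aspect("equal")
    ax.set_xlabel("plate position x")
    ax.set_ylabel("plate position y")
    plt.show()
```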

Different methods of mounting the plate were tested, with resulting pitch maps from two methods shown in Fig 11 and 12. It seems likely that the different mounting methods (suspension/table mount, handheld) affect the resonating frequencies, but also likely that the overall topography of resonances will be similar regardless of mounting method. We can see similar patterns of resonances over the surface of the plate in fig 11 and 12. There are some clear differences, but also many similarities, see for example the pattern of lower frequencies (red) extending in a large circle from (x,y) position (1,5) to (5,9) to (9,5) to (5,1). Also note the close correlation between the two maps at sample positions (5,6) and (9,9). Even though the map using a suspended, table-mounted transducer can be said to be a more repeatable or “objective” setup, the version with a handheld transducer is closer to the performance situation with the instrument. In further work, the handheld configuration has been explored more.

In order to make it easier to read the relation between frequencies of the different sampling positions, a 3D version of the circle plot was also attempted (Fig 13). Even though the 3D plot has some additional affordances and the ability to line up frequencies along a dimension by rotating the figure, the 2D plot has been the focus of further development. The 2D plot was easier to extend with overlays for audio triggering and additional labels. More on this below.

[Fig. 14: pitch map from explorative sampling, 9x9 grid, handheld transducer]

Sampling single tones or more complex potential?

Making a map of single tones sampled from the instrument lets us gain some insight into the topography of the resonance potential of the physical object. Still, when working with audio feedback we know that there might be multiple resonance potentials available. A very neutral approach like the one described above (attempting to get the same conditions for each sampling position) might show the global topography of the object, but perhaps we also miss a significant amount of the variation attainable by performative action. Another approach might be to try to find as many different resonances as possible at each position on the plate, and use these recordings as the basis of a map. This more explorative approach is shown in fig 14. We see significant overlaps (as expected) with fig 11 and 12, but also some unique resonances that did not occur with the more neutral sampling technique.

[Fig. 15: sound map with audio trigger overlay]

Playing with the map, playing after the map

Connecting sound and visualization

The visualization on its own can become a bit detached from the musical instrument it is meant to support an understanding of. To alleviate this, it can be useful to be able to connect the visualization with the sound of the object at that position. Using the mouse position on the map, the audio sample used for the spectral analysis and peak picking can be played back by clicking on the corresponding position. Furthermore, it might be useful to hear the extracted peaks resynthesized, both separately and as a compound sound. Single peaks can also be associated with the closest pitch class, which can be of help for western/classically trained musicians. For these reasons, each sampling position was elaborated with an audio trigger overlay as shown in fig 15. The area of the figure where compound resynthesis can be triggered is sensitive to position within the designated area, controlling the relative strength of partials in the compound synthesis.
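A sketch of the resynthesis part of such an audio trigger is shown below: the peaks extracted for a position are played back as a compound of sine tones, with a weighting that could be tied to the mouse position within the trigger area. The names and the weighting scheme are illustrative, not the code behind the actual overlay.

```python
import numpy as np

def resynthesize_peaks(peaks, weights=None, duration=2.0, sr=44100):
    """Resynthesize analysed peaks as a compound of sine tones.

    peaks:   list of (frequency_hz, amplitude) pairs from the peak picking.
    weights: optional per-partial scaling (e.g. derived from mouse position
             inside the trigger area); defaults to equal weighting."""
    if weights is None:
        weights = np.ones(len(peaks))
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for (freq, amp), w in zip(peaks, weights):
        out += w * amp * np.sin(2 * np.pi * freq * t)
    return out / (np.max(np.abs(out)) + 1e-12)
```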

Can I play on the instrument what I find on the map?

After making the visualization and playing around with the audio triggering mechanisms on the map, it was interesting to attempt playing the same sounds on the physical instrument. With any luck, I should be able to reproduce the same sounds again, as they were the source material for making the map in the first place. Given vibrotactile instruments’ high sensitivity to small variations in system parameters and variables, I did not expect this to be straightforward, as the sampling process had been done several weeks prior. On some occasions it is quite possible to reproduce the sonic content from the visualization, at the same positions on the plate. At other times, only some partials can be reproduced, while others appear that were not shown on the map. This behavior corresponds with what is reported by performers in the excellent paper by Eldridge et al. (2021): the instrument might behave differently from day to day, and also change its behavior during performance if external factors change.

Even though one cannot expect a precise one-to-one reproduction of sounds from the map on the instrument, the general behavior of the instrument is similar. This might mean that the map does not correctly represent (all of) the potential sounds of the instrument, due to errors in sampling or in the analysis and representation of the data. It might also mean that the instrument sometimes exhibits transposition or offset in pitch due to a change of external factors (e.g. temperature). The difference between map and reproduced pitch might also suggest that the performer needs to practice more to achieve a higher degree of nuanced skill.

Even though the correspondence between map and physical reality is not yet optimal, it has for me led to a better understanding, as a step of familiarization with the instrument. The process of making the map has led me to spend time with the instrument in a structured manner, and the map can tell me something about what to expect from the instrument. When I can reproduce a sound from the map it is reassuring, and strengthens the memory that this sound can be produced at that location. In the cases where I cannot reproduce what is on the map, it creates a tension that needs to be resolved, grounds for reflection on performance technique and reorientation. The notated expectation (the map) creates a firm anchor for what has previously been produced at that position on the instrument, so the divergence between expectation and the sound produced in the moment is also distinct and clear. This can obviously be useful in guiding further refinements of performance technique on the instrument.

[Fig. 16: pitch map with note-name overlay, handheld transducer]

Analysis of potential pitches

As the measuring process involves considerable scope for variation in the measuring conditions, we can only analyze the results in terms of general tendencies. The specific pitches observed may differ due to variations in external factors (temperature, nuances in microphone position, finger pressure etc), so we should interpret the analysis looking for pitch relations rather than specific pitches. In some of the mapping methods, frequency is represented as pitch class (western chromatic scale). The quantization of frequency into semitones will not capture the fine nuances of tuning, but it is practical when looking for pitch relations (e.g. a minor third, or a fifth etc.). The frequency map was thus extended with an overlay showing the note names of the four strongest components as shown in fig 16.
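The quantization into note names can be done with a standard frequency-to-MIDI conversion; a small sketch follows, with the usual equal-temperament assumption and A4 = 440 Hz (the reference frequency is an assumption, not stated in the analysis).

```python
import numpy as np

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_name(freq_hz, ref_a4=440.0):
    """Quantize a frequency to the nearest equal-tempered pitch class name."""
    midi = int(round(69 + 12 * np.log2(freq_hz / ref_a4)))
    return NOTE_NAMES[midi % 12]

def strongest_note_names(peaks, n=4):
    """Note names of the n strongest analysed peaks for one position.
    peaks: list of (frequency_hz, amplitude) pairs."""
    strongest = sorted(peaks, key=lambda p: p[1], reverse=True)[:n]
    return [note_name(freq) for freq, _ in strongest]

# strongest_note_names([(117, 0.9), (209, 0.6), (80, 0.4), (375, 0.3)])
# -> ['A#', 'G#', 'D#', 'F#']
```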

[Fig. 17: dominant pitch class overlay, handheld transducer]

In order to visualize broad tendencies regarding the strongest fundamental pitch for each position, an overlay was generated where the dominant pitch class is represented as a colored square covering the corresponding position on the plate. The color is determined by the pitch class found in two or more of the four strongest components. If the four strongest components contain two pitch classes with two occurrences of each, the square is colored with a diagonal hatch pattern of those two colors. This overlay is shown in fig 17. Positions that have four unique pitch classes are not colored (this did not occur in the analysis for fig 17), and these might indicate positions where a higher degree of tonal variation is possible.
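The rule described above reduces to a small counting function; a sketch (using hypothetical helper names) follows.

```python
from collections import Counter

def dominant_pitch_classes(note_names):
    """Dominant pitch class(es) among the four strongest peaks of a position.

    Returns one name if a pitch class occurs two or more times, two names if
    two classes occur twice each (drawn as a two-color hatched square), and
    an empty list if all four are unique (position left uncolored)."""
    counts = Counter(note_names)
    return [name for name, count in counts.items() if count >= 2]

# dominant_pitch_classes(["E", "E", "B", "G"])  -> ["E"]
# dominant_pitch_classes(["E", "E", "B", "B"])  -> ["E", "B"]
# dominant_pitch_classes(["E", "G", "B", "D"])  -> []
```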

[Fig. 18: interval vector overlay, handheld transducer]

It might also be interesting to see the extent of pitch class variation available for each position on the plate. This could indicate positions where the performer might be able to bring out interesting harmonic effects as “chords”. Intervallic relationships between the four strongest components were analyzed to produce an interval vector. The interval vector could be visualized in a number of different ways. In order to enhance the visibility of harmonic complexity, an attempt was made to use annotation that would look simple (at a glance, looking at the whole image) for intervals with small integer ratios (like a fifth), progressing to more complex-looking visualization for less simple ratios. A horizontal line (-) was used to represent a fifth (= fourth in the interval vector), while a vertical line (|) represents a third (minor third covering the lower half of the line, major third covering the upper part). A major or minor triad (chord) would then look like a cross (+). Further extension (minor/major second and tritone) can be seen in the map legend. Figure 18 shows this intervallic representation for each position on the plate.
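For reference, the interval vector itself is the standard pitch-class-set construct: counts of interval classes 1 through 6 over all pairs of pitch classes. A minimal sketch:

```python
from itertools import combinations

def interval_vector(pitch_classes):
    """Interval vector (interval classes 1..6) for pitch classes given as
    integers 0..11 (C = 0). A perfect fifth (7 semitones) folds to interval
    class 5, i.e. the 'fourth' slot mentioned above."""
    vector = [0] * 6
    for a, b in combinations(set(pitch_classes), 2):
        ic = min((a - b) % 12, (b - a) % 12)
        vector[ic - 1] += 1
    return vector

# A minor triad with a doubled root, e.g. A, C, E, A -> pitch classes {9, 0, 4}:
# interval_vector([9, 0, 4, 9]) -> [0, 0, 1, 1, 1, 0]
# (one minor third, one major third, one fifth)
```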

Global tendencies

There seems to be a general tendency towards higher pitch in the center than towards the edges of the plate (by approx. 3 semitones in the case of the handheld plate, C# to E). This pitch continuum is present at most of the positions on the plate, sometimes replaced by a fundamental at a distance of a fifth from the expected pitch. The distance of a fifth could perhaps be explained by the presence of harmonic partials at each resonant frequency. The harmonic partials could stem from resonance potential, or from nonlinearities in the audio processing chain (saturation).

The set of potential feedback frequencies for a specific playing position is often to be found in pitch relations that are similar to harmonic overtones. When we say “similar”, it should be clear that all feedback frequencies can exhibit a pitch bend, sometimes of more than a semitone, so it might be hard to determine the exact single pitch. Despite this fact, we can notice that the pitch often will jump in a pitch relation of approximately an octave, a fifth, or (an octave and) a third. We propose to use the term feedback partials to designate the set of potential feedback frequencies.

Some positions on the plate are much richer in potential feedback frequencies, while other positions will only resonate at a single frequency (or a few). Such rich spots are perhaps located at positions where several vibration modes intersect favorably. We have not been able to reliably predict which positions will be rich, but we have identified some areas that might show such rich behavior. The presence of a minor or major third in the interval vector might suggest such rich behavior.

Refinements and further work

The process of mapping these types of instruments can be refined in numerous ways, and there are a multitude of ways that the instruments themselves can be further developed. Similarly, the playing technique can be refined greatly, achieving better precision of nuanced expression on the instrument. Regarding the mapping, both the sampling technique and the audio analysis can be improved. It seems that the attempts at sampling the instrument in the most neutral (flat, objective) manner will only capture a small part of the potential sounds available to a skilled performer. This might be an obvious conclusion for anyone who has ever tried to create a nice set of audio samples of an acoustic instrument; however, a scientific analysis might initially approach this neutral condition for reasons of repeatability. For this reason, the recording of audio from the instrument must explore the widest selection of possible sounds from each position. This in itself might require some additional preparation in cataloguing all sounds possible on the instrument, which is what we are trying to do by making the map in the first place, so the mapping process might have to happen over several iterations.

The audio analysis of such richer selections of sounds also requires some adjustments. In the current implementation, recorded sound from the instrument was automatically segmented (one segment per position on the instrument) and an FFT was done on each whole audio segment. The duration of each audio segment was in the range of 1 to 3 seconds, allowing for very large FFT sizes and, correspondingly, very high frequency resolution. This works very well if the composition of partials does not change much during the audio segment. However, if we have a partial in the sound that is only resonating for a short time, it will be represented with a low amplitude in the whole-segment FFT, even if it had a prominent amplitude for the brief period it was present. It is straightforward to do the FFT on shorter segments of the sound, and also doable to collect the spectra from those separate FFTs into a combined representation for the whole segment.
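A sketch of that adjustment, using a short-time Fourier transform and collecting peaks per frame, could look like this (frame length and thresholds are illustrative choices):

```python
import numpy as np
from scipy.signal import stft, find_peaks

def framewise_peaks(audio, sr, frame_sec=0.25, n_peaks=8):
    """Collect spectral peaks from short frames of one audio segment, so that
    a partial which only resonates briefly still shows up with the amplitude
    it actually had, instead of being averaged down by a whole-segment FFT.
    Returns a list of (frequency_hz, magnitude, frame_time) tuples."""
    freqs, times, Z = stft(audio, fs=sr, nperseg=int(frame_sec * sr))
    collected = []
    for j, frame_time in enumerate(times):
        mag = np.abs(Z[:, j])
        idx, props = find_peaks(mag, height=mag.max() * 0.1)
        strongest = idx[np.argsort(props["peak_heights"])[::-1][:n_peaks]]
        for i in strongest:
            collected.append((freqs[i], mag[i], frame_time))
    return collected
```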

[Figure: phase and frequency response of the feedback loop]

One potential issue is how to treat spectral peaks that deviate by just a small amount between FFT frames. If the deviation is very small (a few Hz), we can assume that it stems from the same vibrational mode or resonance potential and then avoid counting it repeatedly for each frame it occurs in. Then again, what is a reasonable threshold of frequency difference between different vibrational modes? Some of these might be quite close in frequency. Also, we can modulate the feedback frequency for a resonant mode significantly by subtly changing the phase of the feedback loop (more on this below). Presumably, the range of phase-induced frequency change can exceed the expected minimum distance between vibration modes on a physical object, so there is a chance that this quantization issue cannot be completely resolved.
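One simple way of handling this, under the caveats just mentioned, is to merge peaks from different frames that lie within a chosen frequency tolerance and keep the strongest occurrence; the tolerance value (here a few Hz) remains an open, instrument-dependent choice.

```python
def merge_framewise_peaks(collected, tolerance_hz=3.0):
    """Group peaks from different frames that lie within tolerance_hz of each
    other, assuming they stem from the same resonance, and keep the highest
    magnitude seen for each group. collected: (frequency, magnitude, time) tuples."""
    merged = []                                   # list of [frequency, magnitude]
    for freq, mag, _time in sorted(collected):    # sorted by frequency
        if merged and abs(freq - merged[-1][0]) <= tolerance_hz:
            if mag > merged[-1][1]:
                merged[-1] = [freq, mag]
        else:
            merged.append([freq, mag])
    return [(f, m) for f, m in merged]
```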

In performance, we have noticed that the pitches attainable on the feedback plate instrument can oftentimes be bent slightly by modifying the pressure of the finger against the plate. One can look at the finger as a filter, as the sound passes through it between transducer and pickup. Changing the finger pressure then changes the frequency response, but also the phase response of this component in the feedback loop. Merely changing the frequency response could (as far as I can imagine) allow a switch to another resonating frequency, another of the prominent peaks in the system’s overall frequency response. Changing the phase by small increments, however, can allow a slight modification of an already resonating tone (since the phase change will modify the effective round-trip delay, and the tone will still “sing” if the frequency response and the system gain still allow resonance at that new frequency). Changing the phase by a larger amount might again make the resonating frequency switch to another resonance potential altogether. See fig 17 for an illustration of how a phase shift can affect the resonating frequency in sweeps and steps. The interaction between frequency and phase response seems to be the origin of much of the complexity we experience with feedback instruments. The ability of a musician to intentionally control the phase of the produced sound is quite uncharted territory as far as I know. Still, this is one of the aspects of performance a feedback musician must contend with.
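The mechanism can be summarized with the textbook resonance condition for a feedback loop; this is general feedback theory, not a model of this specific instrument. In the sketch below, G(f) is the open-loop gain, T the effective round-trip delay, and φ(f) the additional phase contributed by the components in the loop, finger included.

```latex
% Oscillation is possible at frequencies f where the loop gain and phase satisfy
\[
  |G(f)| \ge 1
  \qquad \text{and} \qquad
  2\pi f T + \varphi(f) = 2\pi n, \quad n \in \mathbb{Z}.
\]
% A small extra phase \Delta\varphi (e.g. from changed finger pressure) shifts
% an existing resonance by approximately
\[
  \Delta f \approx -\frac{\Delta\varphi}{2\pi T},
\]
% while a larger phase change can push the loop onto a different integer n,
% i.e. onto another "feedback partial" entirely.
```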

[Figure: spectrogram of an extended exploration of a single position on the plate]

Regarding the representation of sounds on the map, there might be room for improvement. The circles in the current map work well to represent clearly defined spectral peaks. However, they might not be optimal to represent pitch glides due to phase manipulation of the feedback loop as discussed above. Figure 18 shows a spectrogram of a longer exploration of one single position on the plate. We notice a gradual pitch shift during the first 10 seconds, and then a stepwise pitch shift (approximately semitones) from around 11 to 26 seconds. These were created with fine adjustments to the finger pressure, presumably creating phase shift. At several points in the spectrogram, we can observe the flip from one “feedback partial” to another (e.g. at 11, 26, 30, 34 and 39 seconds). Capturing these nuances in a visualization that also allows an intuitive view of their topography (across positions on the plate) is still a task that needs solving.

Conclusion

The article has shown the process of making a pitch map for a vibrotactile feedback instrument. The pitches created by a feedback loop may deviate according to minute variations in physical conditions, and the mapping process thus does not aim to create a map of absolute pitches. Rather, the process outlined in this article intends to show potential pitch trajectories available on the instrument, and also indicate regions of the instrument where one can expect certain types of behaviour. Some regions show richer behaviours with multiple potential pitches, while other regions of the instrument show more predictability and less variation in potential pitches. The resulting visual maps can be used as a means of assistance in exploration of, and familiarization with, the instrument. Furthermore, the pitch mapping process itself can be seen as a method to spend time with the instrument in a structured manner. One aspect of this process leads the performer to a systematic exploration of the instrument's topography, another is the exploration of reproducibility when comparing the currently produced pitches with the ones previously mapped for the same region of the playing surface. The indicated pitches can thus act as stable points of reference in the exploration of an unstable field.


Software 

Parts of the mapping process might be modified and adapted for exploration of other vibrotactile feedback instruments, and the software for analysis and visualization might thus be of use to other practitioners. For this reason, the software has been made available at https://github.com/Oeyvind/feedback-plate-analyzer 

The software used for the analysis and creation of visual pitch maps was written by the author of this article, using Python, numpy and matplotlib.

References

Boner, C.P. and Boner, C. (1966). Behavior of sound system response immediately below feedback. Journal of the Audio Engineering Society 14, 3, 200–203.

Collins, N. (1974) Nodalings. Prose score. Available at http://www.nicolascollins.com/texts/nodalingsscore.pdf (LV 9/8/2021)

Collins, N. (1976) Pea Soup. Prose score. Available at http://www.nicolascollins.com/texts/peasoupscore76.pdf (LV 9/8/2021)

Eldridge, A., Kiefer, C., Overholt, D., and Ulfarsson, H. (2021). Self-resonating Vibrotactile Feedback Instruments ||: Making, Playing, Conceptualising :||. Feedback Musicianship Network. Retrieved from https://feedback-musicianship.pubpub.org/pub/kl8m5o5y

Lazzarini V., Yi S., ffitch J., Heintz J., Brandtsegg Ø., and McCurdy I. (2016) Øyvind Brandtsegg: Feedback Piece. In: Csound - A Sound and Music Computing System. Springer. https://doi.org/10.1007/978-3-319-45370-5_18

Nielsen, J. L. and Svensson, U. P. (1999) Performance of some linear time-varying systems in control of acoustic feedback. The Journal of the Acoustical Society of America 106, 1, 240-254. https://doi.org/10.1121/1.427053

Sanfilippo, D. and Valle, A. (2013). Feedback systems: An analytical framework. Computer Music Journal 37, 2, 12–27.

Imprint

Issue: #3
Date: 31 January 2022
Review status: Double-blind peer review
Cite as: Brandtsegg, Øyvind. 2022. "Making a pitch map for a vibrotactile feedback instrument." ECHO, a journal of music, thought and technology 3. doi: 10.47041/NXXZ9357
