About

Anyone who has ever watched a group of people create music together knows that it often involves a great deal of interaction and communication between the performers. If we want to study exactly how this interaction works, we need to develop the appropriate tools. Thanks to funding from Cambridge Digital Humanities, we have begun developing and testing a new software platform that aims to facilitate innovative research into interactive music-making and, indeed, other forms of human communication.

When used in an experiment, the software operates in a manner similar to a conference call on a telecommunication platform such as Zoom or Skype, albeit one where all of the participants are together in the same room. Two or more musicians are seated opposite each other, with direct visual contact between them prevented by a physical barrier. Instead, they communicate through a closed-circuit audio-visual system consisting of a webcam, monitor, headphones, and musical instrument, each connected to an external computer running our software.
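
To give a flavour of what such a closed-circuit loop involves, here is a minimal sketch in Python using OpenCV. It is purely illustrative and not taken from our software: it captures frames from a single local webcam and displays them in a window, which is where a partner's feed (and any manipulation) would appear in a real two-machine setup. The device index and window name are assumptions.

```python
import cv2

# Illustrative stand-in for the closed-circuit loop: capture frames
# from one webcam and display them. In the real system each feed
# travels to the partner's monitor instead.
capture = cv2.VideoCapture(0)  # device index 0 is an assumption

while True:
    ok, frame = capture.read()
    if not ok:
        break
    # A manipulation stage (delay, occlusion, ...) would sit here.
    cv2.imshow("partner_feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
        break

capture.release()
cv2.destroyAllWindows()
```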

Participants in our experiments hear each other’s performance through headphones, with the visual feed from their partner’s webcam displayed on a monitor in front of them. At any point during a performance, the researcher may use our software to introduce a manipulation into these feeds. Manipulations currently implemented in the software include the addition of delay (or latency), echo, pitch alterations (e.g., artificial ‘wrong notes’), and automatic face and body occlusion. In this way, the researcher can manipulate the cues and signals that allow musicians to interact with each other, and study the processes that underpin this interaction.
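
To make the delay manipulation concrete, the sketch below shows one common way such a manipulation can be implemented: incoming frames are held in a queue and only released once the desired delay has elapsed. The frame rate and delay values are illustrative assumptions, not parameters from our software.

```python
from collections import deque

FPS = 30           # assumed frame rate of the webcam feed
DELAY_MS = 200     # illustrative manipulation: add 200 ms of latency
DELAY_FRAMES = int(FPS * DELAY_MS / 1000)

buffer = deque()

def delayed(frame):
    """Hold each frame back by DELAY_FRAMES before releasing it."""
    buffer.append(frame)
    if len(buffer) > DELAY_FRAMES:
        return buffer.popleft()  # the frame captured DELAY_MS ago
    return None                  # buffer still filling: nothing to show yet
```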

The work we have conducted so far using our software has considered how musicians can successfully perform together over the internet. Because data takes time to travel across a network, online musical performance involves a certain amount of delay (or latency) between when one musician plays a note and when another can expect to hear it. How much latency performers experience depends on numerous factors, including the quality of their own connection, and it will often fluctuate over time, a phenomenon known as ‘jitter’.
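
For readers unfamiliar with jitter, the toy simulation below illustrates the idea: the average latency stays fixed, but the delay applied to each individual note fluctuates around it. The distribution and parameter values here are our own illustrative choices, not those used in the experiments.

```python
import random

MEAN_LATENCY_MS = 90   # illustrative average network delay
JITTER_SD_MS = 15      # illustrative spread: a higher value means more jitter

def note_delay():
    """Latency experienced by a single note, in milliseconds."""
    # With jitter, each note's delay varies around the mean
    # rather than staying constant.
    return max(0.0, random.gauss(MEAN_LATENCY_MS, JITTER_SD_MS))

delays = [note_delay() for _ in range(8)]
print([round(d, 1) for d in delays])  # e.g. [87.2, 104.5, 78.9, ...]
```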

Using our software, we analysed how different amounts of latency and jitter affected the performances of ten professional jazz musicians – five drummers and five pianists – as they performed together in duos. The results from these studies are still being analysed; however, they suggest that jitter can affect numerous aspects of a musical performance. This can include the tempo and overall stability of a performance, as well as how easy musicians find it to communicate with each other.
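
As a hint of what ‘tempo and overall stability’ can mean quantitatively, one standard approach (sketched here with made-up numbers, since our analyses are not yet published) is to take the onset time of each beat, compute the inter-onset intervals, and summarise their mean (tempo) and relative variability (stability).

```python
import statistics

# Hypothetical beat onset times in seconds, e.g. from a drummer's performance.
onsets = [0.00, 0.52, 1.01, 1.55, 2.04, 2.58, 3.06]

# Inter-onset intervals between consecutive beats.
iois = [b - a for a, b in zip(onsets, onsets[1:])]

mean_ioi = statistics.mean(iois)
tempo_bpm = 60.0 / mean_ioi               # mean tempo in beats per minute
cv = statistics.stdev(iois) / mean_ioi    # coefficient of variation:
                                          # higher means less stable timing
print(f"tempo: {tempo_bpm:.1f} BPM, stability (CV): {cv:.3f}")
```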

We look forward to presenting these results in full at a variety of conferences over Summer 2022, including those hosted by the Society for Education, Music and Psychology Research and the International Conference of Students of Systematic Musicology, as well as expanding the functionality and utility of our software further in the future.

A professional jazz drummer and pianist perform together, using our software to communicate.

Convenor

  • Huw Cheston, PhD student, Faculty of Music

Huw Cheston’s PhD research focuses on using empirical and psychological methods to gain insight into the communicative and interactive processes involved in the performance of improvised music, especially jazz. He also performs widely across the UK as a guitarist. He received his Master’s and undergraduate degrees in Music from Oxford University, graduating from both programmes with the highest overall mark in his cohort. He currently holds a Lewis Research Scholarship in the Humanities and a Vice-Chancellor’s Award.

Cambridge Digital Humanities

Tel: +44 1223 766886
Email: enquiries@crassh.cam.ac.uk