One of the key elements of the piece I’m working on at the moment – braneworlds – is that the ensemble of seven musicians is divided into four groups (three duos and a solo). That in itself is perhaps not unusual, or particularly interesting. What I think is interesting (or could be interesting) is that each group will be running in an independent tempo. To make this possible each group will play from a clicktrack playing in one ear. The clicktracks must be synchronised, even though they run at independent tempos. This means that the playback of the clicktracks has to be via a single audio file mixed quadraphonically – each clicktrack is assigned to a different channel so that each performer only hears their relevant clicktrack.
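To illustrate why a single multichannel file keeps the streams locked together: every channel’s click onsets are computed from the same zero point. Here’s a minimal sketch (my own illustration, not my actual Logic setup – the function and the example tempos are hypothetical):

```python
# Hypothetical sketch: each channel's click onsets are computed from
# the same t = 0, so streams at unrelated tempos remain synchronised
# at the start of every section.
def click_times(bpm, duration_s):
    beat = 60 / bpm               # seconds between clicks
    times, t = [], 0.0
    while t < duration_s:
        times.append(round(t, 3))
        t += beat
    return times

# one entry per playback channel; tempos here are just for illustration
channels = {ch: click_times(bpm, 75.5)
            for ch, bpm in [(1, 13.51), (2, 18.28), (3, 29.4)]}
```

All three channels start at 0.0 and run against the same clock, which is exactly what mixing the clicktracks into one quad file guarantees.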
I’m still in the process of investigating the exact technology I’ll need for that – my main concern at the moment is having headphones that will reach to everybody!
But besides the technological dimension, I’m finding the compositional implications of this basic setup quite interesting.
The first implication – and this is quite significant – is that, since everyone has their own clicktrack, there is no need for a simple temporal relationship between the groups (such as 2:1, 3:2 or 4:3). Fairly early on in my planning stages, I decided that the ratios between the different groups in this work could only be of prime numbers. What this means is that, given a section of 75.5 seconds, one group will divide it by 17, another by 23 and yet another by 37 (let’s say only three groups are playing at this point). Converting these divisions into beats per minute, the first group will have a structural bpm of 13.51, the second group 18.28, and the third 29.4. Of course, on the surface level, these slow bpms can be subdivided – and must be, in order to yield faster, more interesting rhythms.
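The arithmetic here is simple enough to sketch – events spread evenly over the section, expressed per minute. (This is just my own illustration of the calculation; the helper function is hypothetical.)

```python
# Structural bpm: n_events spread evenly across a section of
# section_seconds, expressed as beats per minute.
def structural_bpm(n_events, section_seconds):
    return round(n_events / section_seconds * 60, 2)

section = 75.5
for prime in (17, 23, 37):
    print(prime, "events ->", structural_bpm(prime, section), "bpm")
# -> 13.51, 18.28 and 29.4 respectively
```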
While this method ensures independent tempos with a certain proportional relationship, if they are subdivided differently, the end result might fundamentally obscure the initial ratio. It is for this reason that I decided that the structural bpm represents the ‘bpm of the mean event density’. This means different things in different groups. In Group I (solo flute), an ‘event’ is a change of fingering in the isofingering sequence; in Group II (flute and percussion) an event is defined by a decrescendo contour; in Group III (clarinet and piano) it is a change in chord within a ‘consonant atonal’ chord sequence; in Group IV (cello and guitar) it is a crescendo contour.
This is to say that if, for instance, Group II divided the section into 17 events, it would decrescendo at an average bpm of 13.51. Anything else that happens within that time can be varied (pitches, different rhythmic subdivisions), but that decrescendo is fixed as the event-type of that group.
For each section of the work I choose a kind of ‘density window’ or ‘tempo frame’, which basically just gives a span of prime numbers. Here’s the list:
As you can see, within the logic of the work described in a previous post, Group II is always trapped at the slow end of the density spectrum, and Group III is always at the fast end. Groups I and IV are variable and in each section I will decide whether I wish one or both of them to be closer to either of the fixed groups.
I say mean or average bpm or density because, while some sections have a regular event rhythm, many others (the majority, in fact) shape the time according to a gradual acceleration or deceleration of the event density.
This is done by choosing, for each group, a multiplier of the basic bpm (its subdivision), which determines the speed of the basic surface pulse (as opposed to the event pulse). If, for instance, there are 23 events in the 75.5-second section, and this is multiplied by 7, there will be a total of 161 basic ‘surface’ pulses in the section, at a bpm of 127.95. This becomes the tempo of the section for that group.
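The surface-pulse step is the same arithmetic one level down – again a sketch of my own, with a hypothetical helper:

```python
# Surface pulses: the event count times its multiplier, re-expressed
# as a bpm over the same section length.
def surface_tempo(n_events, multiplier, section_seconds):
    pulses = n_events * multiplier           # total surface pulses
    bpm = round(pulses / section_seconds * 60, 2)
    return pulses, bpm

pulses, bpm = surface_tempo(23, 7, 75.5)
print(pulses, "pulses at", bpm, "bpm")   # 161 pulses at 127.95 bpm
```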
How I create the accelerando or decelerando structure in a section is then more or less how I proceeded in warped passages. Firstly, the number of events translates (in simple cases) into the number of bars. I then create a particular linear or exponential function (for instance y=x^1.5) with this many x values. To map this set of proportions to the number of surface pulses in the section, you simply divide each y value by the sum of all the y values and then multiply the result by the number of surface pulses. Rounding these to the nearest whole number, you get a bar structure with expanding or contracting bar lengths. If these bars are then treated as ‘events’ you evidently get events of greater or lesser duration within the one section, but they oscillate around a basic mean event density.
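The mapping just described – a power curve normalised to the pulse total, then rounded into bar lengths – can be sketched as follows. (The minimum-length clamp and the absorb-the-remainder step in the last bar are my own additions to keep the rounding well-formed; they are not part of the method as described.)

```python
# n_bars bar lengths following y = x**exponent, normalised so the
# rounded lengths sum to the section's total surface pulses.
def bar_lengths(n_bars, total_pulses, exponent=1.5):
    ys = [x ** exponent for x in range(1, n_bars + 1)]
    total = sum(ys)
    # scale each proportion to the pulse total and round;
    # clamp so no bar rounds down to zero pulses
    lengths = [max(1, round(y / total * total_pulses)) for y in ys]
    # rounding drifts by a pulse or two; absorb the difference at the end
    lengths[-1] += total_pulses - sum(lengths)
    return lengths

bars = bar_lengths(23, 161)   # the 23-event, 161-pulse example above
print(bars, sum(bars))
```

With an exponent above 1 the bars expand through the section (a decelerating event rhythm); an exponent below 1 contracts them instead.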
This means that often the tempo differences are not going to be felt at all like a structural polyrhythm, but instead as almost totally independent streams with different statistical structures (especially if tuplet structures overlay the bar structures). At other times, however, I want to show clearly that there are multiple tempos happening based on obscure proportional relationships – that’s when I keep the event bpm entirely regular and reduce the surface density.
One of the extreme implications of this is that I have jettisoned the use of a full score for this piece. It would be impossible to represent these complex ratios with any accuracy while writing the score in Finale or Sibelius, and it would be extremely time-consuming and painful to do so by hand. Nor is a full score necessary. Instead there will be four different parts. This is a bit of a pain, but I’m getting pretty good at flicking between various Finale files…
In a future post I will talk about how the graphic representation of the clicktracks in the Logic Pro file I am creating helps me to identify temporal coincidences or near coincidences and structure things like dynamics and melodic climaxes according to these.