I don’t really know how to categorise this project. In this piece of writing I certainly play fast and loose with my ideas; there is an absence of logic and rigour which precludes any aspiration towards formal credibility for the thinking, or for the conclusions arrived at. My words may wear the clothing of a mathematical investigation of a data modelling problem, but in reality, the whole is more like a stream-of-consciousness poem on the subject of music and geometry. Some of my notions may be misinformed, or even just plain wrong. Perhaps it is nothing more than a documented day-dream. I shall simply call it an “essay” – because to essay means to attempt, or to try. I will try.
Introduction
For some years now, I have been able to call the contemporary music composer Eric Craven a friend. We share common interests, but disparate career backgrounds – Eric spent many years performing and teaching music before taking up composing, whereas I spent my career working with computerised data and information systems, before adopting (as a hobby) an interest in generative music and digital synthesisers. We also share a lay person’s interest in popular science, eagerly consuming any popularised account of the progress of leading scientific theories. Consequently, I was not surprised when Eric chose “Entangled States” as the title of his 2018 CD publication.
In this essay, I would like to share an insight which came to me upon hearing the above-mentioned work.
One of my favourite books in the genre of popular science is Julian Barbour’s “The End of Time”. In that book, Barbour argues that time is in fact not a fundamental property of the universe, rather, it is an emergent (yet to us, totally convincing) illusion. The nature of time, whilst a fascinating topic, is not pertinent to this essay; but the major thread in Barbour’s argument employs some rather abstract mathematical modelling concepts, in which I am very interested. I have come to appreciate that those concepts can provide novel perspectives in a variety of modelling situations, and this gave me an idea involving the title of Eric’s CD. I pursued that idea; and in doing so, I discovered some wholly new (to me at least!) vistas of appreciation in a place that spans music and data systems – two of my favourite realms for exploratory thinking. By the end of this essay, I shall claim to have exposed a strikingly beautiful metaphor at the heart of “Entangled States”.
Background
Eric’s tools for composition are a piano, music manuscript paper, and a pencil. I don’t know for sure, but I suspect that he somehow “feels” music with a sense that I do not possess; he – in common with many musicians and composers – understands and appreciates music in a way that I cannot comprehend.
My preferred toolset is a suite of computer software for musical event generation, and a digital audio workstation with a sequencer, and many different digital synthesizers. I see musical events as data, and the protocols which create those events, I see as systems.
Applying a data & information systems perspective to music is bound to be a rocky road. It may be useful to approach my subject cautiously…
Classical Musical Notation
Approaches to documenting a musical composition have evolved over the years. Some of the earliest forms of musical prescription would be barely recognisable these days. I imagine they would only make sense to a very small number of scholars. (See Figure 1).

Various composers have – over many years – experimented with different approaches to documenting musical data, but the most widely used in the Western world is the stave-based system which evolved out of the methods used in monasteries for plainchant. (See Figure 2)

This geometric approach is based upon a two-dimensional (vertical/horizontal) space. The horizontal dimension uses position to encode a sequence (i.e. which sound follows on from which previous sound, reading left-to-right). The vertical dimension uses symbols aligned to a grid of lines in order to suggest the pitch (vibration frequency) of the sound. A symbol higher up on the grid of lines indicates a higher pitch. The words to be sung are also present on the page, arranged in such a way that proximity indicates which pitches are associated with specific syllables of the song.
Early examples may have encoded just this basic data, but the geometric system evolved. Pitches became “notes” – still pinned to a stave, but now five lines rather than four. A variety of complex symbols were introduced in order to explicitly encode (inter alia) note duration, chords, volume dynamics, and the half-tone pitch adjustments or “accidentals” that are useful when working within prescribed subsets of pitches called key signatures.

This system remained essentially the same throughout the so-called “classical” period of music composition. In this tradition, a musical composition is communicated in the form of a two-dimensional geometric pattern with symbolic annotations – a “score”. In analogue form, it constitutes a sequence of instructions, or “messages”, and it is for the musician/performer to read the score, interpret the symbols, and render the decoded messages into a performance.
This analogue system is very powerful and capable – I can only marvel at the talent of the composer who was able to imagine, and then write precise instructions for, a performance of the “Moonlight Sonata”. (Figure 3)
Musical Events as Data
In the 1980s, the success of electronic instruments created a demand for some way to allow such instruments to communicate with one another electronically. Thus was born the MIDI (Musical Instrument Digital Interface) standard. The history of MIDI and its impact – most notably upon the popular music industry – is fascinating, but not pertinent to this essay. I wish only to focus upon the fact that MIDI encodes a set of instructions or “messages” which, when executed in an appropriate context, can trigger the production of sounds (a performance).
MIDI is a complex protocol, but in simple terms, a MIDI message can represent (for example):
- The pressing of a key on a keyboard (“Note-on”), including a pitch, and a measure of the forcefulness with which the key is pressed.
- The release of a key on a keyboard (“Note-off”).
- A command to change the “playback” instrument – say, from a flute to an oboe.
- An instruction to change tempo.
- A time-pulse by which to synchronise devices.

An analogue score encodes messages as symbols and geometry. MIDI encodes messages as numbers.
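To make “messages as numbers” concrete, here is a minimal sketch in Python of how three of the messages listed above look as raw bytes. The byte layouts follow the MIDI 1.0 specification; the helper functions and example values are merely my own illustration.

```python
# A minimal sketch of three MIDI messages as raw bytes. The byte layouts
# follow the MIDI 1.0 specification; the helpers are my own illustration.
# The low nibble of each status byte carries the channel number (0-15).

NOTE_ON, NOTE_OFF, PROGRAM_CHANGE = 0x90, 0x80, 0xC0

def note_on(note: int, velocity: int, channel: int = 0) -> bytes:
    """A key is pressed: pitch (0-127) plus a measure of forcefulness (0-127)."""
    return bytes([NOTE_ON | channel, note, velocity])

def note_off(note: int, channel: int = 0) -> bytes:
    """The key is released (a release velocity of 64 is a common default)."""
    return bytes([NOTE_OFF | channel, note, 64])

def program_change(instrument: int, channel: int = 0) -> bytes:
    """Change the playback instrument, e.g. 73 = flute, 68 = oboe in General MIDI."""
    return bytes([PROGRAM_CHANGE | channel, instrument])

print(note_on(60, 100).hex())  # '903c64' - middle C, pressed moderately hard
```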
But MIDI goes beyond analogue scores in a number of ways. It treats instruments and other devices as systems for providing or receiving data about music-related events. For example, a MIDI file may contain a “score” for a musical composition that has not come from the mind of a composer, but has been generated by some computer software via algorithms or aleatoric processing, combined with sets of rules limited only by the programmer’s imagination.
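To illustrate, here is a toy generated “score” of my own invention – a random walk constrained to the C major pitch set. It stands in for the endlessly more sophisticated rule sets that real generative systems employ; nothing about it is standard or prescribed.

```python
import random

# A toy aleatoric "composer": a random walk over the C major pitch set,
# emitting (time-in-beats, MIDI note, velocity) events. An invented
# sketch, not any particular real system.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def aleatoric_score(length: int = 8, seed: int = 1) -> list:
    rng = random.Random(seed)
    events, index = [], rng.randrange(len(C_MAJOR))
    for beat in range(length):
        # Drift at most one scale step per beat, staying within the set.
        index = max(0, min(len(C_MAJOR) - 1, index + rng.choice([-1, 0, 1])))
        events.append((float(beat), C_MAJOR[index], rng.randint(60, 100)))
    return events

print(aleatoric_score())
```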
MIDI is extremely powerful and flexible, but what I would like to stress here is the notion that underpins it (and any conceivable similar system), namely:
“Any musical composition/performance can in principle be represented mathematically.” (my axiom)
This is a bold proposition, and I can already hear purists proclaiming, “No way! A real musical performance is hugely complex, entailing exquisite subtleties of interpretation and expression – the breathing of life into a composition – which no analogue/digital conversion could ever capture, let alone re-create!”
In response, I will point out that in this essay (for reasons which will become evident) I am only interested in principles, not practical outcomes. It may be the case that current technology is unable to use event-based encoding to represent a performance with sufficient fidelity, but this does not mean that such fidelity is impossible, just impractical using current technologies.
Non-Prescriptive Music
Eric Craven’s unique compositional style is called “Non-Prescriptive”. To quote from the introduction to “Set for Piano”:
“The objective of this process is to realign the historical relationship between the composer, the performer, and the performance. This is achieved by allowing the performer licence to determine how the parameters that have been omitted by the composer should be executed. The performer thus assumes greater responsibility in determining the outcome of the music, at the same time becoming an essential component of the compositional process.”
Further instructions to the performer of the work highlighted in Figure 5 include:
- Only pitches are given and they may be played at any octave.
- The pitches may be played in any order.
- The pitches may be repeated or omitted.
- The pitches may be grouped in any way and for any purpose, e.g. for the formation of chords, ostinati, or rhythmic articulation.
- Performances may commence and end at any point in the score.
- The duration of the piece is determined by the performer.

If we think about encoding a non-prescriptive score such as that shown in Figure 5 as a MIDI file (in order to create a performance from it) then we immediately encounter problems. There are simply not enough instructions here. What is the key signature? What tempo is expected? All the note durations are missing. There are no dynamics, etc., etc… What is presented here could be rendered in a thousand different ways, without breaking the rules of non-prescription.
Does this mean that non-prescriptive compositions/performances break my axiom (that all musical compositions/performances can be represented mathematically)? Upon initial inspection, it seems that such a system could not be modelled mathematically, for the simple reason that so much of the score is undecided until the performer’s contribution is added into the recipe.
Data Representations
For some time now, I have been engaged in a “thought experiment” investigating methods for modelling non-prescription using constructed geometries. In order to approach the subject, I will need to introduce some mathematics. Figure 6 shows a “piano roll” style geometric representation of a sequence of two-note (dyad) chords:
This style of encoding was historically used with perforated rolls of paper to be “read” and rendered into sound via a player-piano. In a musical box, the geometry remains essentially unchanged even though extrinsic curvature is added. We can imagine the sheet rolled into a cylinder or stretched into a disc, and then create it in metal, placing pins where the pitch symbols are in such a way that when the cylinder or disc rotates, the pins strike a metal comb to produce sound. In MIDI-based software, a similar geometry is often used to present what is called a “piano roll view” to the application user.
In Figure 6 above, the horizontal axis shows the passage of time, increasing left to right. No units of time are provided, but we may adopt any reasonable assumption – let’s say that the duration between t1 and t2 represents one second. The vertical axis indicates a pitch, increasing upwards, indicated by MIDI note number.
(Different conventions exist for MIDI note numbering, but I am here adopting the standard wherein C4, or middle C in scientific pitch notation = 60.)
The lower note of each chord is represented by a blue box symbol. The higher note of the chord is represented by a red lozenge symbol.
Thus, the diagram overall represents a sequence of dyad chords played at one-second intervals:
- 60 / 67 Perfect fifth (C / G)
- 60 / 64 Major third (C / E)
- 62 / 65 Minor third (D / F)
- 62 / 67 Perfect fourth (D / G)
- 65 / 69 Major third (F / A)
- 67 / 71 Major third (G / B)
To reconstruct the sequence (in our imagination), it is necessary to sweep our focus of attention from left to right across the diagram of Figure 6 – noting the blue and red symbols, and reading off from the vertical axis the pitch of each note, corresponding to aligned blue and red symbols.
For the sake of clarity, note duration is fixed in Figure 6 – although we could easily employ a system of different symbols to encode duration.
Figure 6 is a closely-mapped model of the real world. In fact, it is very similar in layout to the classical score. Its dimensions (axes) are those of time and pitch. These are familiar real-world measures of actual things, and it is very natural and intuitive to model music in this way.
Configuration Spaces
The geometric view based upon the classical score system is not the only way to model such a sequence of chords. Figure 7 shows an alternative approach:

Figure 7 is not intuitive, and some readers may initially find it puzzling. The chart axes in Figure 7 are different to those in the previous diagram. The vertical axis now represents the pitch of the lower note of the chord (call this Pitch A), and the horizontal axis now represents the pitch of the higher note of the chord (call this Pitch B).
In this representation, time has been relegated – it is no longer afforded an axis all of its own. Time is now represented only by a set of data (t1, t2, t3, …) attached to points of intersection of the new axes.
The Figure 6 method of scanning across the diagram from left to right doesn’t work for this arrangement. In order to recall the chord sequence using Figure 7, we must follow quite a different process:
- Locate the first point (t1), then read off the values for pitches A & B along the axes (A=60,B=67) which equates to (C / G)
- Search for (t2) and read off 2 more pitch values (A=60,B=64) equating to (C / E)
- …and so on.
This is a bit like following an imaginary line connecting the data points in sequence – a kind of timeline through the picture, with event data placed at 1 second intervals.
I will pause a moment to emphasise the fact that Figure 6 and Figure 7 encode exactly the same information – this will be a very important point for what follows.
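For the sceptical reader, the equivalence is easy to demonstrate with a small sketch: a single list of (time, Pitch A, Pitch B) tuples supports both readings. The code is my own illustration; the data is that of Figure 6.

```python
# The same six (time, pitch_A, pitch_B) tuples support both readings.
# Sweeping Figure 6 left to right is iteration in time order; "searching
# for t1, t2, ..." in Figure 7 is a lookup by time label.

DYADS = [(1, 60, 67), (2, 60, 64), (3, 62, 65),
         (4, 62, 67), (5, 65, 69), (6, 67, 71)]

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def name(midi_note: int) -> str:
    return NAMES[midi_note % 12]   # MIDI 60 = C4, per our adopted convention

# Figure 6 reading: sweep our focus of attention across the time axis.
for t, a, b in sorted(DYADS):
    print(f"t{t}: {a}/{b} ({name(a)} / {name(b)})")

# Figure 7 reading: locate the point labelled t1 and read off its pitches.
by_time = {t: (a, b) for t, a, b in DYADS}
print(by_time[1])                  # (60, 67), i.e. C / G
```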
It also does not matter whether the score is quantised such that notes fall precisely upon regular beats according to strict tempo. If quantisation is not applicable, then to accommodate this in Figure 6, all I have to do is re-position the blue and red symbols slightly to the left or right, depending upon whether the note is anticipating the beat, or lagging behind it. In Figure 7, all I have to do is replace the annotations “t1” and “t2” with numbers of higher precision – say, “t1” becomes “t1.034”, and “t2” becomes “t1.879”. I can increase this precision arbitrarily, and easily take it to a level beyond the capability of a human ear to tell the difference between sequential sounds and simultaneous sounds.
To progress this visualisation, I can take the Figure 7 representation even further by adding the time dimension back in – as a third dimension. There will then be no requirement for annotations, and each chord can be simply represented by a single point.
However, this document that you are reading is flat, i.e. two-dimensional, so in order to indicate 3 dimensions (two of pitch and one of time), I must “fake it” in the usual way. I will take the vertical axis (Pitch A) and rotate it 90 degrees into the plane of the diagram, where it becomes a sort of “floor” (coloured pale blue in Figure 8). I then replace the previous vertical dimension by a time axis, with time increasing upwards.
Thus we have an imaginary “box”-style 3-D coordinate system in which a dyad chord event is modelled as a single point, and encoded as a set of three values, for example (t1, 60, 67).
To assist further with this pseudo-3D visualisation, I have added a shadowy base and vertical stem to each “single point” represented by a red blob. The shadows are not part of the model; they are only present in order to enhance the 3D effect of the diagram. The thin black line is the imagined “timeline” – tracking our gaze as we traverse upwards through the time dimension, recreating the sequence of chords.

The diagram in Figure 8 is a simplistic example of what is generally known as a configuration space. I have presented this somewhat prolonged example of modelling the dyad chord data solely for the purpose of introducing (to the non-mathematician) the concept of a configuration space.
Configuration spaces usually do not map directly to real-world experience. They are abstract mathematical constructs, designed to help with modelling complex systems – the individual components of which can be present in various relative arrangements.
The most important thing about Figure 8 is that two pitches are represented by one point in the configuration space. If I have the data for that single point, then I know the value of both Pitch A and Pitch B.
Every possible dyad configuration corresponds to exactly one point in the configuration space, and this is characteristic of such artificially constructed spaces.
The data set for Figures 6, 7, and 8 is exactly the same:
- t1, 60, 67
- t2, 60, 64
- t3, 62, 65
- t4, 62, 67
- t5, 65, 69
- t6, 67, 71
So why bother with such a convoluted approach to modelling the musical data?
Well, Figure 6 models the sequence adequately, but Figure 8 – the configuration space model – has something special that Figure 6 does not; namely, the space does not care about the number of dimensions present.
Using a configuration space model, it is possible to model music in such a way that everything about a note event is represented by a single point.
Music can be modelled as points on a single timeline in a multi-dimensional configuration space.
Composition Space
I now ask the reader to imagine a version of Figure 8 which has 128-note polyphony – a configuration space with 129 dimensions (128 notes + 1 dimension of time).
As humans born able to directly sense only 3 dimensions, we cannot hope to visualise this space; but we can nevertheless imagine it, and we can easily record the relevant event data for such a configuration space using a computer.
I choose the number 128 only because that is a typical maximum polyphony implemented in modern electronic instruments; moreover, it easily accommodates even the most unlikely of possibilities – the score instructing the performer to strike all of the keys on a piano simultaneously.
Now, let us add in all of the other information that a classical score may contain. I don’t know how many dimensions would be needed, but I would hope that there are no more than, say, 128 different kinds of things (tempo set/change, dynamic annotation, key set/change, accidental, coda begin/end, slur, glissando, triplet, etc.).
So, by increasing the number of dimensions, I have created a space that can record every possible instruction to the performer at any given point in time. This multi-dimensional construct I shall call Composition Space. Composition Space has 257 dimensions (128 describing Note events, 128 describing Annotations, and 1 of time).
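For the reader who, like me, thinks in data structures, a single point in Composition Space might be laid out as follows. The index assignments are my own arbitrary choice – nothing here is a standard.

```python
import numpy as np

# One possible (entirely arbitrary) layout for a single point in the
# 257-dimensional Composition Space: index 0 holds time, indices 1-128
# the Note dimensions, and indices 129-256 the Annotation dimensions.

N_NOTES = N_ANNOTATIONS = 128
DIMS = 1 + N_NOTES + N_ANNOTATIONS  # = 257

def composition_point(t, notes=None, annotations=None):
    point = np.zeros(DIMS)
    point[0] = t
    for note, velocity in (notes or {}).items():     # note numbers 0-127
        point[1 + note] = velocity
    for kind, value in (annotations or {}).items():  # annotation kinds 0-127
        point[1 + N_NOTES + kind] = value
    return point

# The dyad C4 + G4 sounding at t = 1, with no annotations:
p = composition_point(1.0, notes={60: 0.8, 67: 0.8})
```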
I cannot render a visual image of Composition Space, but I can imagine compressing all of the Note dimensions into one, and similarly compressing all of the Annotation dimensions into one, so that I can show a picture (Figure 9) of a squashed Composition Space which, whilst being nowhere near accurate, at least goes some way towards communicating the idea:

In this highly simplified view of Composition Space, the dark red timeline represents the beginning of the score for Beethoven’s Moonlight Sonata. The blue timeline represents the beginning of the score for Beethoven’s Sonata Pathetique. The lines are different, and yet they share certain geometrical similarities in terms of slopes, directions and curvatures.
Composition Space makes characteristic geometric patterns which reflect a composer’s own recognisable style.
If an imaginary music expert could perceive multiple dimensions, then they could look at this space and say, “Ah, yes, I recognise the red and blue timelines as the works of Beethoven.”
That may seem at first to be a ridiculous notion – but recall how I previously stressed the importance of the idea that the data for Figures 6, 7, and 8 was the same data. The Composition Space data is identical to the data encoded in the classical score. Our imaginary multi-dimension-viewing expert is looking at the very same scores; they are just encoded in a different way.
Now examine the green timeline in Figure 9. It looks very different to the red and the blue. For one thing, it stops abruptly at a specific time. It has twists and turns in the Annotations dimension, but it has nothing at all in the Notes dimension. The green line is the score for John Cage’s composition 4’33”. If a performer followed the instructions encoded in the points of the green timeline, then the audience of the performance would hear only ambient noise – which is exactly what the composer intended.
The acid test of my axiom, then, is: “Can a non-prescriptive work such as ‘Twelve’ be modelled in Composition Space?” Let’s consider the facts:
Firstly, a high-order non-prescriptive score has no data in the Time dimension, because the composer does not dictate when the pitches may sound.
Secondly, the score is utterly devoid of the usual annotations. However, it must have some data in the Annotations dimension – because there is an implied annotation which I will call “Placement”. The Placement dimension does not relate to time, it simply reflects the fact that the analogue score portrayed the pitch symbols at some horizontal left/right position on the staves, and for the model to remain exactly equivalent to the original score, this data must be present, and must be represented by numbers in a dimension.
In Figure 10, I have assigned numbers based on measuring the distance (in pixels of the scanned image) from the left-hand edge of the stave. This approach is arbitrary – any method will suffice, so long as the data encoded can be used in conjunction with a set of rules to reproduce the original score.

Thirdly, a non-prescriptive score will have data in the Notes dimensions – but, those dimensions equate to polyphony, and here we must make an assumption.
We assume that if the composer placed pitches on the staves above/below one another (i.e. in vertical alignment as per the dotted lines in Figure 10, using the traditions of classical scoring) then that was meant to suggest polyphony. For the sake of simplicity, we therefore model that in the data as polyphony. We could be extremely pedantic, and say that strictly, there is no prescribed polyphony here. In that case, we would model the vertical alignments as further Annotation dimensions – but that approach seems to me to be an unnecessary complication for no real benefit.
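Before looking at Figure 11, it may help to sketch what this timeless data could look like. The values below are invented purely for illustration – they are not measurements taken from the actual score of “Twelve”.

```python
# A sketch of a high-order non-prescriptive score as Composition Space
# data: every point sits at time zero, the implied Placement annotation
# carries each pitch symbol's horizontal position, and vertically aligned
# pitches are modelled as polyphony. All values are invented.

score_points = [
    # (time, placement_px, pitches_sounding_together)
    (0.0,  40, [64]),
    (0.0,  95, [59, 67]),   # two vertically aligned pitch symbols
    (0.0, 150, [62]),
]
```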

The bright red dots in Figure 11 encode a high-order non-prescriptive composition.
Recall that the Notes dimensions are squashed into one in Figure 11, so I have emphasised polyphonic events by surrounding the relevant red blobs with small black dots. Red dots with these black “satellites” are understood to be linked with data in additional dimensions.
It seems that I can accurately model the non-prescriptive score, but it cannot now reasonably be called a timeline – because this view is timeless. There is a line here, albeit a line in the “Placement” annotation dimension. If we ignore annotation and project onto the time dimension, it appears to be a single chord consisting of all the notes in the piece – or, to put it another way, a singularity (in the most general sense of that word), and it occurs at time zero.
A high-order non-prescriptive composition can be modelled as a line of points forming a singularity in Composition Space.
Performance Space
What if I wish to encode not just a non-prescriptive score, but a realisation of the score – a performance?
Well, things like pitches, note-on/note-off, tempo adjustments, and changing dynamics are already catered for. I do however need more numbers to represent the circumstances and the environment of the performance. I need sounds and timbres, reverberation and echo. I need sets of numbers for note attack, sustain, decay and release. I need numbers for the pedals of the piano, etc., etc….
But again, in our thought experiment, this is not a problem. As before, we just need more dimensions to hold the numbers that represent these qualities. Let’s imagine there are only another (say) 256 different kinds of factors that represent the nuances of an individual performance. Provided that I can imagine technology capable of capturing the data, I now have a 513-dimensional model that encodes the performance. I will call it Performance Space.
In Performance Space, an actual performance is modelled by a set of single points forming a timeline, complete with all the real-world subtleties of style and interpretation.
A performance is a set of points along a timeline in Performance Space.
Realisation Space
Now, we almost have enough mathematical power to address the problem of modelling non-prescriptive composition and performance. But there remains the matter of the relationship between the composer and the performer – the “unknowns” in the score, and the freedom of the performer to fill in those values.
Dear reader, in getting this far in the essay, you have persevered as I have indulged in a very unusual thought experiment. I must ask you to fortify – even further – your capacity for tolerance. My imagined multidimensional geometric constructs are strange enough. From now on, it gets rather weird…
Non-prescriptive music is all about the relationship between composer and performer. Crucial to this is how a composition is “realised”. In the terminology of this essay, it is about:
- How we represent non-prescriptive music in a Configuration Space.
- How a structure in Composition Space evolves into one or more structures in Performance Space, reflecting actual realities.
I believe it is possible to model a non-prescriptive composition and realisation in Performance Space. But it requires even more abstraction, and even more constructs.
I want you to imagine something called Realisation Space. It is multidimensional (at least 513 dimensions), and it has all the power required to accurately model any musical event, prescribed or real. But it has something new:
Realisation Space can simultaneously hold a composition, and all of the timelines of possible realisations of that composition.
That is a big leap of the imagination – but it is not so scary if we think about it logically.
I need some way to talk about structures in this space. Sets of lines form planes, and so the shape of a set of possible performances in Realisation Space is actually a multidimensional hyper-plane, which physicists or mathematicians might call a manifold. But for simplicity in this essay, I will continue to refer to sets of timelines, even though we know that the structure of information in Realisation Space is more like a complicated “landscape”.
To borrow a visualisation (again from Julian Barbour) – you could imagine selecting one location in that landscape and applying pressure to it – like pressing a button – and then you would hear the particular polyphonic musical event that is encoded in the data at that point in the landscape. You might also imagine being able to “walk” through the landscape, and as you walk, you hear a possible realisation of the score. Turn a little to the left, and continue walking, and you hear a slightly different realisation.
This space is saturated with information. If you imagine the score and all of the possible renditions of an Eric Craven composition all at once, then I am guessing that what first comes to mind is a seething, boiling mass of infinite chaos – a crazy mish-mash, a cacophony. All the points have blurred into a mess.
But I urge you to pause a moment before rejecting the idea as less than useful. This space may appear at first sight to be chaotic, but is it really? I will argue that it is not chaotic. This multidimensional “landscape” in fact has plenty of structure.
For example, there is a boundary here where (inter alia) we find the equivalent of Cage’s 4’33” composition. It has no audible events at all. Along this boundary we find all timelines where the performer chose to sound none of the notes in the composition. It is a distinct, and very clear structure standing out in the landscape. Beyond it, one cannot explore. It is not possible to sound fewer than zero notes.
As your intrepid guide in this alien landscape, let me point out another clearly identifiable boundary. At this particular “hyper-vertex of hyper-planes” we find a junction of all the timelines in which the performer chose to always sound all of the notes exactly simultaneously. (It’s easily achieved on a piano synthesiser – quite how the performer might manage to do it on a real piano, I leave as a thought experiment for ingenious prosthetic designers/engineers!) Again, we discover a clear boundary to the landscape beyond which it is not possible to go. One cannot simultaneously sound more notes than the keyboard has keys.
Between these two towering structures, we find many less extreme ones. In order to make some sense of the landscape, I will introduce another construct.
Imagine that Realisation Space is dark – totally black – but the timelines themselves are luminous. I introduce the idea of a continuous “field” of values such that every point in Realisation Space has a luminosity value representing the probability of that particular event occurring. (The word field used here is borrowed from physics, where it simply means something that has some value at every point in a space.) By the word probability, with reference to one performer, and one particular non-prescriptive score, I mean the likelihood that a particular musical event will occur.
Recall that previously we had Composition Space and Performance Space, and they were discontinuous – pitch events and annotation messages were represented by individual points, but in between those points there was no data. The timelines we imagined were only a sort of “visual aid” to promote the idea of reading through the score, or reproducing the performance. In contrast, Realisation Space is continuous. Pick a point – any point – and there is data, encoding an event which may – or may not – happen.

Here we encounter a modelling problem. A continuous field can be measured at infinitely many points, and so if we wish to continue to imagine this being modelled by data in some type of computer, then we must quantise the field. We must (arbitrarily) select a “granularity” for the data: data points cannot be closer together than this smallest difference. Fortunately, it is not difficult to imagine a smallest difference between data points for any of our dimensions, because the human ear and mind can only resolve differences in sound to a finite limit. For example, two “note-on” events that are closer together than one quantum of time difference will be heard by a human as a single note anyway.
In Realisation Space, timelines that are unlikely to be realised (for example the “Cage” timeline) are quite dark; so improbable as to be almost invisible. Timelines that are quite likely to occur shine out brightly. In between, the light fades away into the darker recesses of unlikely performances.
We can think of the luminosity field as a “function” (borrowing another word from mathematics). A function is a bit of mathematics that transforms numbers. The function accepts as input the multiple coordinates of a specific point in Realisation Space, and it delivers as its output a number between zero and one. A value of zero means that the point is totally dark – the event described by that point cannot/will not happen. A value of one means that the point is as bright as it can be – this event must/will happen. Any number in between means the point is illuminated as some shade of grey – the event might happen; the closer we get to one, the brighter the timeline, and the more likely the events on it are to occur.
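As a sketch of the shape of this idea – and only the shape, for I make no claim to know what a real performer’s function would look like – one could fake a “style” as a single preferred region of the space, with luminosity falling away with distance from it:

```python
import math

# A toy luminosity function: the coordinates of a point in Realisation
# Space go in; a number between 0 and 1 comes out. Here a performer's
# "style" is faked as one preferred event, with brightness falling off
# smoothly with distance from it. Purely illustrative.

STYLE_CENTRE = (0.0, 60.0, 67.0)   # an invented preferred (t, A, B) event
SPREAD = 5.0

def luminosity(point) -> float:
    distance_sq = sum((p - c) ** 2 for p, c in zip(point, STYLE_CENTRE))
    return math.exp(-distance_sq / (2 * SPREAD ** 2))

print(luminosity((0.0, 60.0, 67.0)))  # 1.0 - maximally bright: will happen
print(luminosity((0.0, 20.0, 90.0)))  # ~0  - almost invisible in the dark
```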
With the addition of the luminosity field, the landscape of Realisation Space has quite suddenly revealed a magical fairy grotto of astonishing beauty and complexity. Out of the singularity that represents a high-order non-prescriptive composition explodes a bifurcating web of twisting, turning strands which meet, separate, contract and join again, swinging and swirling around each other in a gloriously mad dance.
This surprising structure found within Realisation Space arises because each performer realising a non-prescriptive score has a strong tendency to express their personal favoured musical constructions, and this gives rise to the probability function and the gradients of luminosity. One may refer to this tendency as a style. In exactly the same way that Composition Space revealed geometric patterns reflecting a composer’s “style”, so:
Realisation Space creates geometric patterns which reflect a performer’s unique “take” on non-prescriptive works.
And finally, there is of course another structure that stands out very clearly in this Realisation Space – the score. It is at maximum brightness, and it is a sharp and clear singularity in the time dimension. If you look across the annotation dimensions, you would see the individual data as a line of discrete illuminated points.
This singularity is the source of all the possible realisations in this data set, and you can see timelines of varying brightness emerging from various points within the structure. The brightness is different for each of those timelines because (for example) the performer is more likely to commence with a pitch somewhere towards the “beginning” (i.e. where the placement number is relatively small) than somewhere towards the “end” of the score, where the placement number is relatively large. Commencing the realisation with the pitch of the highest placement number (the “last” pitch) would of course be allowed in non-prescription, but it is unlikely – because musicians are usually very much mentally conditioned to read musical staves in the Western tradition, i.e. left-to-right, top-to-bottom.
The probability function is of course just a notion. I do not know how to model it with mathematics. It is perhaps just about conceivable that an artificial intelligence (given enough data) could build up a model of a function that would in some way mimic a specific performer. But for our purposes it does not matter – this is only a thought experiment.
Performance/Realisation
Now the final piece of the jigsaw.
Imagine, in Realisation Space, the process of a performance. At time zero we have the singularity which (in a multidimensional view) contains all of the pitches in the composition. We also have the illuminated twisting, bifurcating web of possibilities emerging from it.
Then, the performer begins to “realise” the performance. As this happens, the magical fairy grotto of light crystallises out into a single timeline of discrete events, springing from just one of the placement points in the score. As time proceeds, one of the possible performances coalesces, moment by moment, as a lengthening string of bright pearls spanning the dark space. As it does so, all the performances (up to that moment) which did not become reality fade into total darkness, leaving behind the brilliant single “timeline” of the performance that happened in the real world.
It is stunningly beautiful.
To provide some notion of the beauty of this multidimensional space, I have in Figure 12 below again used the “squashed” dimensions visual trick.
I ask you to imagine that time in this picture is transparent – you are looking into the picture through time, so that (even though the image appears flat) the curling, bifurcating timelines of possible performances are meant to be perceived as curving away into the distance of the Time dimension. The brighter, thicker strands model performances which are quite likely to occur, and the dark spaces in between are performances that probably will not happen.

It may be worth pointing out that a “classical” performance imagined in Realisation Space is relatively dull! Bizarre though it may seem, Beethoven’s “Moonlight Sonata” makes a fairly uninteresting pattern in Realisation Space. The reason for this is that classical performers are granted little leeway by the composer. They may make minor changes in tempo here and there. They may choose to emphasise certain notes above others, or to lean more heavily on certain rhythms – all within fairly strict limits. These factors add a certain amount of “fuzziness” to the timeline, but in terms of the probability function, all we really see is the sharp bright “line” of points that is the composition, surrounded by a haziness whose luminosity rapidly falls away to nothing.
Comparatively speaking, a non-prescriptive realisation in Realisation Space is a glorious firework display.
Entanglement
Before a realisation happens, we have only the score and the probability function. The function provides a field of numbers that spans the entirety of Realisation Space. All possible performances are modelled in there, to an arbitrary (but effective) resolution.
In general, if we have a multi-component system that can be in different configurations, and if gaining knowledge of the state of one part of the system allows us to narrow down the possibilities for the state of another part of the same system, then that is called entanglement. It is a special kind of correlation which becomes much more intuitive once we understand that it arises naturally in configuration spaces.
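A tiny worked example may help. Suppose (with entirely made-up numbers) that our knowledge of a two-pitch system is captured by a joint distribution over its configurations; learning Pitch A then narrows the possibilities for Pitch B:

```python
# A made-up joint distribution over dyad configurations, illustrating the
# correlation described above: before we know anything, Pitch B could be
# 64, 65 or 67; learn that Pitch A is 62, and B narrows to 65 alone.

joint = {  # (pitch_A, pitch_B): probability
    (60, 64): 0.25,
    (60, 67): 0.25,
    (62, 65): 0.50,
}

def possible_b_given_a(a: int) -> set:
    return {b for (pa, b), prob in joint.items() if pa == a and prob > 0}

print(possible_b_given_a(60))  # {64, 67} - two possibilities remain
print(possible_b_given_a(62))  # {65}     - knowledge of A has fixed B
```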
Entanglement in this sense is not the same as quantum entanglement. My imagined Realisation Space is a local, closed system. It models and records all that can be known about a system (the composition and its realisation) to any desired level of precision without any difficulty. There are no complementarity issues involved in Realisation Space, and nothing “spooky” going on.
The Beautiful Metaphor
And yet, I remain entranced.
Most fans of popular science will have read that Erwin Schrödinger’s wave equation entails a wave function that predicts the probability of finding a “particle” in a certain state – for example, the probability of measuring it as being at a certain place. What few non-scientists will have realised is that “place” in that context only means normal space when the system involves just one particle. As soon as two or more particles are modeled as a system, the wave function evolves in a configuration space of multiple dimensions (Schrödinger himself referred to these as Q-spaces.)
If the particles are quantum entangled, then they are one system, and they are described by the wave function at a single point in a configuration space (just like a chord in Realisation Space). It doesn’t matter how far apart in real space the particles are separated, for so long as they remain entangled, the probability of measuring them as having some property is provided by the function. Gaining knowledge of the state of one part of the system narrows down the possibilities for the state of the other part.
Once you appreciate that the particles are being described in a configuration space, then Albert Einstein’s “spooky action at a distance” becomes not so spooky – it is actually quite unremarkable, because configuration spaces typically do not play nicely with our intuitions about the real world.
When I contemplate Realisation Space, with its bright and beautiful web-like structures illuminated by the probability function, the “Entangled States” metaphor in the title of Eric’s CD untangles itself, and emerges resplendent.
Postscript
I cannot help thinking about the Everett/DeWitt “Many Worlds” interpretation of quantum mechanics; and about how it just may be that, in an infinite number of alternative universes, every possible realisation of an Eric Craven composition is listened to, and appreciated, by an infinite number of copies of me.
“What I look for in musicians is a sense of infinity.” – Pat Metheny
2018 Peter Vodden