“I hate being called a technologist. It sounds like all I do is plug things in”. Sitting on a comfortable leather sofa in Central London, Daniel Jones is crackling with energy. A prolific maker of extraordinary and mostly sound-based things, Daniel is sitting beside the equally bespectacled Peter Gregson. Gregson is a cellist and composer increasingly known for his insight and work at the boundaries of classical music, and together Jones and Gregson made The Listening Machine, a remarkable six-month-long piece of music generated by the activity of five hundred Twitter users, its composition drawn from thousands of phrases recorded specially for the project by Britten Sinfonia. “We might let on who the five hundred are when the project is over. Or we might not, we’ve not decided yet”.
While it is too early in the day for a cocktail, the pair are in a celebratory mood. The Listening Machine is a major new commission for The Space, the digital content platform jointly developed by Arts Council England and the BBC, and it is beautiful. The core idea of the project fell into place when Gregson was talking with his friend Abdur Chowdhury, then Twitter’s Chief Scientist, over cocktails in San Francisco whilst there to give a concert. Together they riffed on the question “what does Twitter sound like?” “The way he was describing these unimaginably large data sets… it was the same vocabulary that we use to describe music. That’s when it started getting really interesting”. Two years later, The Space commission enabled the project to move out of Gregson’s mind and into realisation. The result – available at thelisteningmachine.org – is a seamless interplay between music and technology where it is not possible to see where one begins and the other ends. “To be honest”, remarks Gregson, “I’m pretty bored of people separating the art bit from the tech bit. And especially when they see me only as the art person and Daniel only as the code person. We’re co-composers. Just that we work with slightly different tools.”
The essence of The Listening Machine is that it turns tweets into sound, but that simple description belies the enormous complexity of the piece. At a recent event in Manchester where he presented the work, Gregson remembers that “someone came up to me afterwards and told me that they had done something just like it as a hack over a weekend. And I just had to tell them that frankly they hadn’t, they really hadn’t. Yes it’s based on a simple idea but the result is thanks to an enormous amount of commitment and ridiculous attention to detail”. A regular attendee of hackdays himself, Jones agrees that “there is incredible value in using hacks as a way to blow the lid off what you might like to do and try some experiments. But to make the most remarkable work that we can we also need to move past the hack mindset and take those experiments into a realm of ludicrous ambition.”
The conversation moves on to how there have already been previous Twitter data sonifications, and again Jones fizzes with insight. “The actual realisation makes such a difference. Which is why I no longer get put off if I have an idea and I find out that someone’s already done it. A year ago I would have bemoaned the fact it’s already been done. But now I realise that there are a million different instantiations of the idea and yours is going to be radically different, and if you pay attention to the execution and the production quality it will be effectively a whole different thing. The idea that something has already been done is a really important thing to be able to mentally escape from”.
So to the thing itself. The Listening Machine takes the tweets of 500 UK-based Twitter users and uses them as the basis of a continuous six-month piece of live music that will end in November this year. While the group contains some random users, the majority of the 500 were chosen to be representative across the eight topic classifications used by the BBC News website (sport, health, politics, business, entertainment & arts and so on). When any of these users tweet, that message is analysed for sentiment (positive, negative, neutral) and prosody (that’s the rhythm of speech, if you were wondering). The sentiment is used to determine the mode and tone of the musical output, and the prosody of the text is decoded to become the score itself. By preserving the rhythms and dynamics introduced by punctuation and stress, the system can produce surprisingly structured-sounding motifs from otherwise simple sentences. And the overall result? A piece of generated but genuinely live music that is the result of nothing less than alchemy – turning the noise of Twitter into a thing of beauty that draws you back to it again and again.
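For those who want a sense of how such an analysis might work in practice, here is a minimal, hypothetical sketch: sentiment (judged here with crude placeholder word lists) picks a musical mode, and the lexical stress of each word, looked up in the CMU Pronouncing Dictionary that ships with nltk, is turned into a rhythm. The word lists, mode table and helper names are illustrative assumptions rather than the project’s actual code.

    # A hypothetical sketch of tweet analysis along the lines described above.
    # nltk's cmudict corpus is real (it needs nltk.download('cmudict') once);
    # the sentiment word lists and the sentiment-to-mode table are placeholders.
    import re
    from nltk.corpus import cmudict

    PRONUNCIATIONS = cmudict.dict()

    POSITIVE = {"love", "great", "happy", "beautiful", "win"}
    NEGATIVE = {"hate", "awful", "sad", "terrible", "lose"}

    def sentiment(tweet):
        """Crude positive/negative/neutral classification by counting words."""
        words = re.findall(r"[a-z']+", tweet.lower())
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    def stress_pattern(tweet):
        """Lexical stress of each syllable: 1 primary, 2 secondary, 0 unstressed."""
        pattern = []
        for word in re.findall(r"[a-z']+", tweet.lower()):
            phones = PRONUNCIATIONS.get(word)
            if not phones:
                continue  # skip words the dictionary doesn't know
            pattern.extend(int(ph[-1]) for ph in phones[0] if ph[-1].isdigit())
        return pattern

    # Sentiment chooses a mode; stressed syllables become longer notes.
    MODES = {"positive": "lydian", "neutral": "dorian", "negative": "aeolian"}
    DURATIONS = {1: 1.0, 2: 0.75, 0: 0.5}  # beats per syllable

    tweet = "What a beautiful morning for a concert"
    print(MODES[sentiment(tweet)])                        # -> lydian
    print([DURATIONS[s] for s in stress_pattern(tweet)])  # rhythm in beats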
“All composition is based on rules,” Gregson interjects. “What’s different about this piece is that we control the rules but not the tweets, which ultimately determine what the piece sounds like at any one time. And since we had to make a piece of live music that lasts six months, to make it sustainable it needs enough variation so that it stays interesting, but not too much so that it results in a big brown mess. It isn’t six months of music, it’s six months’ worth of ‘potential music’. That’s why I set the orchestration the way it is, so as not to confuse it further with too many elements. And it works!”
Recognising that the vast majority of data sonifications tend to work with electronic sound outputs, Jones and Gregson were insistent that they work with real human elements as the beating heart of the musical output, thereby matching the humanness of the thoughts and activities of the Twitter users. They worked with twenty musicians from Britten Sinfonia to create a database of sounds which matched the vast number of potential elements based on Jones’ sophisticated text and sentiment analysis system. “We ended up having 43,801 audio files coming in at 60 gigabytes… which is quite a lot.” Sharing a joke at the scale of the work, Gregson adds that it “was ironic that the process had the musicians, myself included, working like machines, often just playing single sounds in a way that made no sense at the time of recording. The result is something much more fluid, much more musical”.
Spending time with Jones and Gregson is a constant reminder of how artificial the divisions between art, technology and digital really are but nevertheless those divisions do remain in the minds of many. When asked about the challenges traditional arts organisations face and how they tend to engage with them as creatives, Gregson smiles and shrugs. “Organisations and institutions really can no longer afford to look at this thing called digital and just tack it on the side. There’s a lot of emphasis today on audience development which implies that the reason that people aren’t going to performances is because they, the audiences, are wrong. Surely the focus should be much more on making more relevant work? Hopefully projects like ours show that the experience of work can still be of high quality whilst being very different.”
“People look at my background as a creative programmer and just see the code,” adds Jones. “There’s just no way for me to get an ‘in’ with an organisation because all they think I can do for them is build their website. To engage with people like Peter and me you have to collaborate. And you have to collaborate way outside of your comfort zone.”
Born in Edinburgh in 1987, Peter Gregson is a prolific cellist and composer, the Artistic Advisor to the Innovation Forum at the New England Conservatory, Boston, and claims a professionally aggressive carbon footprint. He has collaborated with many of the world’s leading technologists, including Microsoft Labs, UnitedVisualArtists, Reactify and the MIT Media Lab. You can find him at petergregson.co.uk and @petergregson. Daniel Jones is a doctoral researcher at Goldsmiths and has published work on music theory, creativity and systems biology. He has exhibited digital work internationally and his award-winning projects include The Fragmented Orchestra, Papa Sangre & Nightjar. You can find him at erase.net and @ideoforms. Together they made thelisteningmachine.org for The Space, which is available through the magic of the internet until November.
Inside The Machine
For the technically minded, here are the elements that Daniel Jones used in the development:
- Ableton Max for Live: hosts the system-wide audio fragments and generative Max/MSP patches
- Python: code underlying all of the text analysis and pattern generation (see below)
- Kontakt and Kontakt Memory Server: 64-bit hosting for audio fragments recorded with Britten Sinfonia
- Flash Media Live Encoder: for streaming to the content delivery network
- nltk: for sentiment analysis, text classification and pronunciation extraction
- glyph-rpc: for communication between internal components
- python-twitter-tools: for interaction with Twitter user streams
- isobar: to algorithmically generate musical patterns (a short illustrative sketch follows this list)
- jQuery: for HTML5 interaction
- HTML5 Canvas and excanvas: for live visualisation, backwards-compatible with older versions of Internet Explorer
- jwPlayer: for cross-platform media playback
- pjax: for navigation between pages that maintains a constant media “play” bar
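As a companion sketch, pattern generation in the spirit of the isobar entry above might look something like this. The class and method names follow recent isobar releases (PSequence, Timeline, schedule, run); the API in use in 2012 may well have differed, and the note and duration values are placeholders rather than anything taken from the piece itself.

    # An illustrative sketch of algorithmic pattern generation with isobar.
    # Class and method names follow recent isobar releases; the 2012-era API
    # may have differed, so treat this as an approximation, not project code.
    from isobar import PSequence, Timeline

    # A melodic cell that might have been derived from an analysed tweet:
    # pitches as MIDI note numbers, durations in beats from a stress pattern.
    notes = PSequence([62, 64, 65, 69, 67], repeats=4)
    durations = PSequence([1.0, 0.5, 0.5, 0.75, 0.5], repeats=4)

    timeline = Timeline(tempo=96)   # sends events to the default MIDI output
    timeline.schedule({
        "note": notes,
        "duration": durations,
        "amplitude": 72,
    })
    timeline.run()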