me and my shadow is an installation project combining telepresence and motion capture. I think most people know the second term – it means capturing the movement of the human body, usually to control an avatar or character. The first, ‘telepresence’, basically means full-body teleconferencing – Skype really, but bigger, and usually higher bandwidth. Here, of course, we weren’t sending normal video data but a complete 3D ‘capture’ of the body.
Its first showing (I hope there will be more!) happened in July 2012, with installations in London, Paris, Brussels and Istanbul. Each one was quite a simple little box (I called them ‘portals’) designed for a single user. You could go in and instantly become a 3D avatar (not an arbitrary shape, but an actual 3D capture of your body) in a virtual space, where you could meet and interact with users in the other three cities.
What was the starting point for the collaboration?
This was kind of provided for me, really: it was an open, Europe-wide call for a thing called MADE. MADE stands for Mobility for Digital Arts in Europe, and it was a collaboration between four arts organisations: body > data > space in London, the Centre des Arts in Enghien-les-Bains (Paris), Transcultures in Mons (Belgium) and boDig in Istanbul. The brief was for a project that could explore ‘arts mobility’ and include all four partners. At the time, my wife had just given me a Kinect for Christmas, and I’d just thought of the idea of ‘motion capture telepresence’ using it. This seemed like the perfect opportunity. I think it was a good match for the commission, which I guess is why I got it!
I worked very closely with a programmer, Phill Tew. Meeting him was a stroke of luck – I met him literally a week before I got the commission, on another project (danceroom Spectroscopy). Even though I’d only just met him, I had a very strong instinct he was the right man for the job. I honestly don’t think anyone else could have done this in quite the way he did, so I feel very grateful!
What tech/programming languages/platforms do you use and how do they make it work?
The main tech we used was the Microsoft Kinect. Many people know about this – it’s a rather amazing gadget that does the 3D capture part – kind of a webcam that sees in 3D (but more than that). It was designed for the Xbox 360, but many people are using them for other things – not just games, but robotics, dance, medicine, art, music… We had some technical challenges in using it: it doesn’t really capture a full 3D person, only the side facing it, so we used three of them in a triangle formation to build up a complete body – and combining the information from all three is really tricky! Also – strangely – although the Kinect was designed for games, which most people play in their living room or bedroom, it’s pretty lousy in a small space: especially if you want to capture the whole body, it only really works if you’re a reasonable distance away. We used wide-angle lenses on them, which presented a whole load of other problems, since they really distorted the geometry. Finally, we had to figure out how to send the data over the internet – we had to invent a special compression scheme to do that.
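To give a flavour of the fusion idea – a simplified sketch in C# with made-up calibration matrices, not our production code – each camera’s points get transformed into one shared coordinate frame and combined:

```csharp
using System.Collections.Generic;
using System.Numerics;

// Minimal sketch of the fusion step: each Kinect sees only one side of the
// body, so each camera's points are moved into a shared coordinate frame
// and concatenated. The 4x4 matrices are assumptions -- in practice they
// come from a separate calibration step.
static class PointCloudFusion
{
    public static List<Vector3> Merge(
        IReadOnlyList<Vector3[]> cameraClouds,   // one point cloud per Kinect
        IReadOnlyList<Matrix4x4> cameraToWorld)  // one calibration matrix per Kinect
    {
        var merged = new List<Vector3>();
        for (int cam = 0; cam < cameraClouds.Count; cam++)
            foreach (var point in cameraClouds[cam])
                merged.Add(Vector3.Transform(point, cameraToWorld[cam]));
        return merged;
    }
}
```

The real difficulty is elsewhere: getting those calibration matrices accurate, and dealing with the noise and overlap where the three views meet.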
Most of the software was written from scratch in C#. We did make some use of the OpenNI library – an open-source library developed by PrimeSense, who developed the software behind the Kinect. It’s kind of better than the Microsoft software, as well as being open source! We also used Max/MSP for the audio.
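To illustrate the general idea behind that kind of depth compression (a simplified sketch, not our actual scheme): successive depth frames are mostly identical, so you can delta-encode each frame against the previous one and run-length encode the unchanged stretches:

```csharp
using System;
using System.Collections.Generic;

// Illustration only: unchanged pixels collapse to a run length, changed
// pixels are sent literally. The marker bytes and layout are made up for
// this example and are not the project's documented format.
static class DepthCodec
{
    public static byte[] EncodeDelta(ushort[] current, ushort[] previous)
    {
        var output = new List<byte>();
        int i = 0;
        while (i < current.Length)
        {
            if (current[i] == previous[i])
            {
                ushort run = 0;
                while (i < current.Length && current[i] == previous[i]
                       && run < ushort.MaxValue)
                {
                    i++;
                    run++;
                }
                output.Add(0);                                // "skip" marker
                output.AddRange(BitConverter.GetBytes(run));  // run length
            }
            else
            {
                output.Add(1);                                      // "literal" marker
                output.AddRange(BitConverter.GetBytes(current[i])); // raw depth value
                i++;
            }
        }
        return output.ToArray();
    }
}
```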
Generally, part of the ethos of the project is that it should be quite ‘cheap’ – the Kinects are quite inexpensive really, certainly compared to full-on MoCap systems. Other than that we use really standard equipment – a decent, but not staggeringly high-end PC, a high-end gaming surround-sound system, a projector… The idea is that it should be quite ‘scalable’ – we’ve only had four portals working so far, but there’s no reason there couldn’t be 10, 100, 1,000!
Can you describe the process of development and your team dynamic?
This was quite complicated. We had four two-week residencies to do the main part of the development, one hosted by each of the four partners, so one in each of the cities – Istanbul, London, Mons and Paris. Each of these had a main theme. Istanbul was the motion capture and interactivity: we worked mainly with dancers there (as ‘movement experts’) and developed the way the avatars were formed and moved. We had the equivalent of one portal, so we didn’t need to worry about the telepresence aspect at all. In London we had a great space – the main rehearsal studio of the National Theatre, which is really long. This allowed us to set up all four portals next to each other, so we focused on the interactions between the portals – ironically, it’s much better to do all this in one place in the early stages! We also ironed out the navigation, which was one of the biggest challenges of the project – letting people ‘steer’ through the space using only their bodies (no controllers) is actually quite tricky.
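To give an idea of what controller-free steering can look like – a rough sketch with illustrative joints and constants, not our actual mapping – you can treat the lean of the shoulders over the hips as a joystick:

```csharp
using System;
using System.Numerics;

// Rough sketch of body-only steering: the lean of the shoulders over the
// hips acts like a joystick. Joint positions would come from the skeleton
// tracker; the dead zone and speed constants are illustrative, not the
// installation's actual tuning.
static class BodySteering
{
    const float DeadZone = 0.05f; // metres of lean ignored as noise
    const float Speed = 2.0f;     // metres per second at full lean

    // Returns velocity in the virtual ground plane (x = sideways, y = forward/back).
    public static Vector2 Velocity(Vector3 shoulderCentre, Vector3 hipCentre)
    {
        float leanX = shoulderCentre.X - hipCentre.X; // lean left/right
        float leanZ = shoulderCentre.Z - hipCentre.Z; // lean forward/back

        float vx = Math.Abs(leanX) > DeadZone ? leanX * Speed : 0f;
        float vz = Math.Abs(leanZ) > DeadZone ? leanZ * Speed : 0f;
        return new Vector2(vx, vz);
    }
}
```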
In Mons we focused on the aesthetics of the project, going back to only one portal. This was also the first time we used the three Kinects together with the wide-angle lenses. I also developed the sound here, which I’d left until quite late – as my background is as a sound artist, I was pretty confident this aspect would be OK, so I did the bits I was more worried about first. In Paris we focused on the actual networking – the organisation there (Centre des Arts) was able to host the server, and has a super-fast 100 Mbit/s connection. We set up all kinds of tests running between the four places – there were lots of challenges, as the connectivity in each place was different. But we got it all up and running in time for the opening – just!
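As a flavour of the plumbing involved – a minimal sketch with a placeholder host and port, not our real protocol – each portal pushes its compressed frames to the central server, which relays them on to the other portals:

```csharp
using System.Net.Sockets;

// Minimal sketch of one portal streaming its compressed frames to the
// central server over UDP. The real system's protocol (packet splitting,
// ordering, relay logic between portals) isn't shown here.
class FrameSender
{
    private readonly UdpClient client = new UdpClient();

    public FrameSender(string serverHost, int port)
    {
        client.Connect(serverHost, port); // e.g. the server hosted in Paris
    }

    public void Send(byte[] compressedFrame)
    {
        // One datagram per frame; real code must split frames bigger than
        // the UDP payload limit and tolerate lost packets.
        client.Send(compressedFrame, compressedFrame.Length);
    }

    public void Close()
    {
        client.Close();
    }
}
```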
The team dynamic was pretty fluid really – Phill and I worked together solidly all the way through, and I think we had a really good working relationship. In theory, Phill was ‘the programmer’, but he’s also an artist and he brought some really good aesthetic judgement to the project. Lots of other people came in for parts of the project – like the dancers (in Istanbul and Paris). In London we also had a ‘hack-off’, where we invited input from (and shared our own findings with) the London hacking community. I should mention Philippe Baudelot here – he was a key person in setting up the whole MADE project in the first place, and he acted as a mentor throughout – he was really great actually, lots of words of wisdom.
What was your highlight or light bulb moment of the whole project?
I have to say it was simply ‘meeting’ someone in the portal for the first time – it’s genuinely exciting to move and dance with someone in a space that doesn’t really exist, knowing they’re actually hundreds of miles away. Even though it was my project, and had been so much work, I found this a real moment of wonder.
What are your next steps/new projects?
As I mentioned, there’s no reason there couldn’t be more than four portals. I’d basically like to do it again, in different configurations – bigger, smaller, further apart, closer together… Having put so much work in, I’d really like to take it further. It was such a technically demanding project that it’d be crazy not to do it again. I’d also like to try some variations – working with dancers on the project was great, and I’d like to do something more like a performance than an installation.
Is there any current or recent work that you admire that is in line with your ethos and approach to arts and innovation?
I don’t know if this is the right sort of thing, because I’m involved with it too – but it wasn’t my idea, and it’s not ‘my’ project (I’m just making some music for it) – I think danceroom Spectroscopy is great. It uses some of the same tech as me and my shadow (the Kinect etc.), but its aim is very different. It’s the brainchild of Dave Glowacki, a computational chemist, and it turns you into an energy field and lets you play with the molecules and energy fields that are around us all the time but that we can’t see (or hear). It’s kind of magic…