The Kinect sensor represents a revolution in the use of motion capture. Although the technology it is based on is not entirely new, it is the first motion capture device aimed at a mass market, with a price tag to match. Whilst Microsoft intended the Kinect as a gaming device for the Xbox 360 platform, the system’s software was ‘hacked’ with unprecedented speed – within a month of release. The Kinect can now be used with most mainstream computer platforms, and with an increasing range of software environments.
This has facilitated the explosive growth of a vibrant and imaginative community of Kinect (ab)users working in engineering, architecture, medicine, robotics and many fields within the arts and sciences. For the purposes of this book, the proposed chapter will focus on the use of the Kinect within the digital, sound and performing arts. Here the Kinect has proved to be an (almost) ideal device for interactive work because of its low price and ease of use. Whilst ‘traditional’ motion capture systems have been used in such projects in the past, their use has been extremely costly, and the technology requires a level of technical expertise and support beyond the reach of many artists and arts organisations. Furthermore, such systems impose severe limitations on live performance in particular, owing to physical constraints (restrictions on the use of space or on performers’ movement). Whilst the Kinect has limitations of its own, it effectively addresses many of these problems.
The chapter will discuss the implications and possibilities of the Kinect within this sector in a broad survey, before detailing a number of current and emergent projects which illustrate the range of uses to which the device is being put within the field. In order to benefit from the perspective of first-hand experience, particular attention will be paid to two projects with which the author is directly involved. The first, Danceroom Spectroscopy, is a science visualisation project developed by Dr David Glowacki, a computational chemist at Bristol University; the author has directed a series of events at Bristol’s Arnolfini exploring possible crossovers between this project and the arts community, within installation and (dance) performance contexts. The second, me and my shadow, commissioned by MADE (Mobility for Digital Arts in Europe), combines motion capture with telepresence: Kinect-based portal installations in London, Paris, Mons (Belgium) and Istanbul allow users to interact in real time within a shared virtual space.
Other projects to be covered in the chapter are subject to change – since this is an emergent area of practice and many interesting projects are currently in progress – but suitable candidates would include a/v@arts’ Hiver Numerique, Mark Cypher’s Proposition 2.0 and the Synapse (musical interface) project.
The chapter will conclude by drawing out commonalities between the case studies presented, working towards an early definition of this emergent field and exploring ideas for future development.