Aesthetically, this research attempts to answer the following research question:
What are the elements that create a theatrically effective combination of performance and real-time manipulation of live or recorded audio and video?
This research question divides into the following sub-questions:
1. How can the respective languages of dance, music and cinema be merged in such a way as to create a performance experience, a new and hybrid form, that could not have been achieved in any other way?
2. How can real-time computer interaction be used so that the stored audio-visual material becomes an "instrument" for the performer to "play"? What occurs when video, once seen as fixed in terms of length, speed and edits, becomes fluid and liquid in the hands of the performer? What happens when music, once seen as fixed and "composed", changes in the hands of the performer, who becomes far more like an African dancer making movements and sounds at the same time?
3. How can this audio-visual instrument be used for improvisation? How do the traditions of improvised theatre, dance and music relate to working with a computer that can also be programmed to react unpredictably through the insertion of random functions? How can notions of chance, indeterminacy, randomness and serendipity be introduced in a theatrically effective manner?
4. How can this relate to an audience used to seeing theatre in which technicians and stagehands produce audio-visual and stage effects on cue? How can the audience know that the dancer is "moving" the video images, rather than this being achieved by an off-stage technician pulling a slider? What can we learn, in this respect, from circus performers and the "deliberate mistake": the juggler who "drops" the ball, the trapeze artist who slips and, for the rest of the performance, keeps the audience in complete suspense, totally convinced of the danger and difficulty of the act? In what ways can this be enhanced by the instrument reacting to the audience as well?
5. How can loops and repetition be used effectively, taking elements from the languages of dance (repeated action/movement), cinema (repeated image) and music (repeated phrase) when triggered in a live performance?
6. How can the intense (and lonely) training that the dancer, the musician and the film-maker put into their craft be surrendered to something new? What compromises are valid? What preconceptions have to be let go without sacrificing the essence and quality of the skill and talent that made it a passion in the first place?
7. What changes in production methodology can be brought about so that interactive work can be produced from the ground up? Rather than the dancer working to pre-written or pre-recorded music, the film-maker's contribution being "tacked on" towards the end of rehearsal, and the programmer attempting to lash it all together minutes before curtain, how can these disparate talents be brought together, speak each other's languages, and collaboratively produce work which enhances the capabilities of all and produces something new?
8. To what extent can new media techniques be used to create "generative" performance, allowing the work to develop even as the production continues, so that an audience member could see the piece on several occasions and still see something different each time (for example, using the working method of the touring stand-up comedian)?
Six Second Dances [Dying on my Feet]
The project created at Interaktionslabor is designed to explore a new kind of "grammar of interaction", specifically in regard to interface designs for performer (dancer) and sound/video environments. We are interested in how gestures affect music, can provoke changes in the imagery (the image movement projection), and are theatrically effective. The current interactive system being developed by the team of designers and programming artists (Paul Smith, Martin Dupras, Jez Hattosh-Nemeth) is meant to build a base protocol, connecting several networked computers running interactive software patches and taking and analysing data generated by the movement of the dancer (Katsura Isobe). The movement in an action or performance is observed, analysed and measured with several kinds of sensors: accelerometers, orientation sensors, and pressure and bend sensors positioned on Katsura Isobe's body. The objective is to accurately measure the data created by her gestures.
This data is shared over three computers which respond to her movements with sound and images. The three computers effectively act as an instrument that she plays. The interactive system thus creates an environment in which the live performer "controls" some of the output or effects that an audience or the environment hears and sees (audio and filmic projection, graphics, voice, sonics, with live musical instruments also acting as an additional "actor"). In a sense, the physical gesture of the dancer/actor creates the world of the performance. Once we have described the basic interactive system in this way, a number of questions arise that have preoccupied choreographers, composers, digital artists and programmers for a number of years, at least since the 1990s, when interactive art began to gain a presence in the art world and in some of the performing arts. The issue of "control" is to some extent a technical issue (functionality), whereas compositionally or choreographically the role or significance of "gesture" is of course in many cases a dramaturgical and theatrical one, connected to the quality of expression, style and intention of the realisation of performance content.
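The "base protocol" sharing sensor data across the networked computers can be sketched in a few lines. The sketch below (in Python, purely illustrative: the actual system used iCube hardware with PD and Adobe Director patches, and the hosts, ports and message format here are invented assumptions) broadcasts each sensor reading as a small datagram to every rendering machine.

```python
# Hypothetical sketch of the "base protocol": each sensor reading is
# broadcast as a small datagram to the networked computers that render
# sound and image. Hosts, ports and message format are invented for
# illustration; the actual piece used iCube, PD and Director patches.
import json
import socket

# The three rendering machines (addresses are assumptions).
RENDER_HOSTS = [("127.0.0.1", 9001), ("127.0.0.1", 9002), ("127.0.0.1", 9003)]

def send_reading(sensor_id, value, sock=None):
    """Encode one sensor reading and send it to every render machine.

    UDP is fire-and-forget, which suits a live performance: a lost
    packet is simply superseded by the next reading.
    """
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    msg = json.dumps({"sensor": sensor_id, "value": value}).encode()
    for host in RENDER_HOSTS:
        sock.sendto(msg, host)
    return msg

# Example: one accelerometer reading fanned out to all three machines.
packet = send_reading("wrist_accel", 0.5)
```

The design choice worth noting is the fan-out: every machine receives every reading, so each one can map the same gesture to its own medium (sound on one, image on another).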
The question of how we address "choreography" in the context of interactive systems performance is going to have to be dealt with, and to some extent this question first arises when we speak of "controllers" in an interface design. The controller function (the kind of movement the arm or wrist makes, on which the accelerometer is attached), the measurement of speed, velocity, duration, direction, etc., is something that can be measured according to the programming parameters and the kind of hardware and software used (e.g. iCube, and various PD and Adobe Director patches that realise the control of data output in video and audio). But what is at stake in a performance, looked at from the aesthetic point of view, is a range of particular meanings, an energy and presence, a movement quality or the complexity of a story which are being created. This or that gesture has this or that significance within a specific context.
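The controller function described above, turning a measured movement into a media parameter, can be illustrated with a short sketch. This is a hypothetical Python rendering of the idea, not the actual PD/Director patches: the smoothing constant and scaling range are invented assumptions, chosen only to show the shape of the mapping from jittery accelerometer data to a video playback rate.

```python
# Hypothetical sketch of a controller mapping: accelerometer magnitude
# -> video playback rate. The real system used iCube hardware with PD
# and Adobe Director patches; all constants here are assumptions.

def smooth(samples, alpha=0.3):
    """Exponential smoothing to tame jittery accelerometer readings."""
    out, level = [], samples[0]
    for s in samples:
        level = alpha * s + (1 - alpha) * level
        out.append(level)
    return out

def to_playback_rate(value, lo=0.0, hi=9.8, min_rate=0.0, max_rate=2.0):
    """Scale a smoothed acceleration magnitude to a playback rate.

    0.0 -> the video stands still (the dancer is still);
    2.0 -> the video runs at double speed (vigorous movement).
    """
    clamped = max(lo, min(hi, value))
    return min_rate + (clamped - lo) / (hi - lo) * (max_rate - min_rate)

# One short burst of movement, as raw magnitudes (m/s^2).
stream = [0.0, 1.2, 4.5, 9.8, 3.1, 0.4]
rates = [to_playback_rate(v) for v in smooth(stream)]
```

The aesthetic point of the paragraph above survives the sketch: the numbers are easy, but deciding *which* movement quality should drive *which* parameter is the choreographic question.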
Looked at not only from the audience perception point of view, but from the performer perception point of view, "gesture" is part of a sequence of phrases in a continuum of movement/action, and this sequence of gestures must not only be performed but, while performed, enacted, improvised, re-created, it is experienced in a particular way by the performer. In a sensitive environment, whether we call it intelligent or responsive or reactive, the knowledge of the responses (types of response, quality of response, patterns of response) appears to be of critical importance for the performer. And this knowledge of the responsive system is what we have explored here in the Lab over the past week -- in the daily rehearsals in which Katsura learnt to see/feel/know the image movement and auditory responses that her movement creates.
"Dying on my Feet" uses real-time computing and sensor technology to create an audio-visual instrument with which the dancer can improvise.
Dancer Katsura Isobe created a series of dances for video which are stored on the computer. In performance she wears a variety of sensors: accelerometers, orientation sensors and photoelectric cells concealed in her costume. These measure her movements and gestures and communicate this data via a radio transmitter to the computers.
The result is that the dance video, projected behind her in performance, is entirely in her control. If she remains still, nothing happens. It is her gestures and movements that make the video run as an integral part of her dancing. The choice of scenes, how long they run and how quickly they are played are made spontaneously by her as part of the improvisation. A random element, introduced into the programming, ensures that no two performances are alike. In this performance, video, once seen as fixed in terms of length, speed and edits, becomes fluid and liquid in the hands of the performer.
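One way the random element might work can be sketched briefly. The following Python fragment is an assumption-laden illustration (the clip names and the "avoid the last two clips" rule are invented, not taken from the production): when a gesture triggers a scene change, the next clip is drawn at random from those not recently shown, so the running order differs every night.

```python
# Hypothetical sketch of the "random element": a triggered scene change
# draws the next clip at random, avoiding recently played clips so the
# order differs each performance. Clip names and the memory rule are
# invented for illustration.
import random

CLIPS = ["solo_a", "solo_b", "duet", "close_up", "wide_shot"]

def next_clip(history, memory=2, rng=random):
    """Pick a clip at random, excluding the last `memory` clips played."""
    recent = set(history[-memory:])
    choices = [c for c in CLIPS if c not in recent]
    return rng.choice(choices)

# Eight gesture-triggered scene changes in one improvised performance.
history = []
for _ in range(8):
    history.append(next_clip(history))
```

Because the draw happens live, the dancer herself does not know the order in advance, which feeds directly into the improvisation.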
The piece creates a tension between the live dancer and her virtual, on-screen alter ego or "doppelganger". At times the video image is pulled by the dancer like a sort of marionette; at others the large projection overpowers her. There is a complexity within the choreography: at times Katsura Isobe synchronises the live and mediated gestures; at others she breaks free, leaving the projected image of herself hanging like an actress waiting in the wings for her next cue. A similar tension exists between the live musician, Martin Dupras, who improvises on electric bass, and the musical samples and synthesiser patches held on computer and influenced by the dancer's movements and gestures.
"Dying on my Feet" was created at the Interaktionslabor Laboratory in July 2005. The group is grateful to IKS Saar, the host of the laboratory, to its director, Dr. Johannes Birringer for their support and to the University of the West of England for research funds enabling this work to be produced.
(28 August 2005)
Miranda Rights is:
Martin Dupras: electro-acoustic musician, programmer
Jeremy Hattosh-Nemeth: visual artist, programmer
Katsura Isobe: dancer, choreographer
Paul Verity Smith: film and video maker, programmer
Tim Stephens: photographer and image-maker