real-time interactive texture - TeroBuru : performance/sound/interaction design - Opiyo Okach
In the processes I’m currently exploring, the role of technology ranges across enhancing, mediating & complementing aspects of the interaction between the dancer’s body and the sonic, visual or spatial environments of the performance. The process also explores possibilities for audience participation. The technology is not just a technical tool but a creative partner in the process.
This mediation consists of capturing motion data with depth sensors; image and video with cameras; and sound with microphones or other devices. I use different hardware and software tools to bring the captured image, sound and data into the computer and media engine. In the media engine I use the captured motion-tracking data to control parameters for manipulating existing media and generating new media (sound, image, video). I then use captured images and video to create visual texture, and I set up captured audio-frequency data to control DMX values. The processed and generated media content (sound, image, video), as well as control data such as OSC or DMX values, is then output to other devices: projection, monitoring and lighting systems.
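The mappings described above can be sketched in a few lines of Python. This is only an illustration, not the actual media-engine patch: the value ranges and function names are assumptions, with sensor and audio inputs taken as already normalised to 0.0–1.0.

```python
def clamp(value, lo, hi):
    """Constrain a value to the range [lo, hi]."""
    return max(lo, min(hi, value))

def amplitude_to_dmx(amplitude):
    """Map a normalised audio amplitude (0.0-1.0) to a DMX channel value (0-255),
    e.g. letting a sound level drive a lighting intensity."""
    return int(round(clamp(amplitude, 0.0, 1.0) * 255))

def joint_to_param(x, x_min, x_max, p_min, p_max):
    """Linearly remap a tracked joint coordinate to a media-engine parameter range,
    e.g. hand height controlling a filter frequency or video opacity."""
    t = clamp((x - x_min) / (x_max - x_min), 0.0, 1.0)
    return p_min + t * (p_max - p_min)
```

In practice such values would be sent on as OSC messages or DMX frames; libraries such as python-osc handle the transport, while mappings like these define the artistic relationship between movement, sound and light.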
The whole of this workflow happens in real time. In this sense, the dancer and the media artist collaborate interactively in shaping the performance instantaneously.
Limitations
While the system I’m developing still has technical limitations in terms of accuracy, latency, processing and rendering capability, and even reliability, I have found ways of working with it to achieve creatively satisfactory outcomes in lab situations.
While optical depth sensors offer a discreet and ingenious way of tracking human motion, their skeletal-tracking methods have a number of limitations. From a choreographic perspective, a good body input would be the ability to discreetly and accurately 3D-track detailed articulations of specific body parts (neck, shoulders, spine, pelvis etc.) in real time. This would equip the dancer with a tool with which they can be as free and dextrous as they are with their body, harnessing and extending the dancer’s intelligence and creativity beyond the body itself.
Artificial Intelligence
I’m curious as to what role AI could play in the process described above. Would it replace my current facilitation system, or serve as an alternative to it? Would it be a complementary tool that enhances the current system? Or would it be an entirely separate tool that opens up completely new paths of creativity?
My current non-AI creative technology system consists of three processes:
- acquisition of data (sound, image, video & motion)
- use of data to manipulate media and generate new content
- outputting media content and interaction data
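The three processes above can be sketched as a minimal pipeline. This is a hedged illustration, not the actual system: the data is simulated and all names and values are assumptions, standing in for real sensor, media-engine and transport layers.

```python
def acquire():
    """Stage 1: acquisition. Here simulated; in practice depth sensors,
    cameras and microphones supply motion, image and audio data."""
    return {"joint_y": 0.8, "audio_level": 0.4}

def process(frame):
    """Stage 2: use captured data to manipulate media and generate content,
    reduced here to deriving numeric media parameters."""
    return {
        "video_opacity": frame["joint_y"],                  # motion drives a visual parameter
        "dmx_intensity": int(frame["audio_level"] * 255),   # audio level drives lighting
    }

def output(params):
    """Stage 3: route media content and interaction data outward.
    Here messages are returned; in practice they would be sent
    as OSC packets and DMX frames to projection and lighting rigs."""
    return [
        ("osc", "/video/opacity", params["video_opacity"]),
        ("dmx", 1, params["dmx_intensity"]),
    ]

messages = output(process(acquire()))
```

Running the three stages in a tight loop, frame after frame, is what makes the collaboration real-time: each new sensor frame immediately reshapes the sound, image and light.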
At what stage of these processes could AI intervene, and to what extent? Acquisition? Manipulation? Generation? Or should I skip these processes altogether and just feed it examples of my final output, from which, GPU performance permitting, it would mass-generate thousands of variations of the same, and even more, in the style of whoever I might choose?
African artist?
I smell the question coming a thousand miles off. I think of Africa as the place where multiple geographies, cultures, religions and languages converge into an instance of ongoing live history that the dancing body, consciously and subconsciously, inhabits and negotiates in every moment. This is the conceptual construct within which my work is situated.
One cannot truthfully split the body into traditional, modern, future etc., just as one cannot ignore the importance of heritage in identity-making and cultural diversity. I believe working with AI can open up ways of conserving and engaging with traditional dance & corporeal heritage in ways that have not been conceivable with existing technologies.