Lab 2
Unstable Landscape / SPIFF
project plan by Marlon Barrios Solano
SPIFF is conceived as a generative piece in which rules of interaction and a computer program are set into motion with some degree of autonomy. The piece emerges in the interaction.
The experience is built fundamentally on two processes: manipulation (transformations/processes) and feedback (recursion). Memory is algorithmic; the experience is emergent.
I envision this project as a continuation of my projects Unstable Landscape and SPIFF.
The basic premise for this system/environment is the absence of any pre-recorded footage or sound samples. Image and sound will be totally generative at the level of behaviours: the only raw materials are light, information, electricity and the materiality of the computers and the visitors' bodies.
SPIFF (dance, video and sound output, and interaction) will be designed according to the principles of subsumption architecture, a bottom-up approach from robotics. Although the processing is centralized in each computer, I will attempt to simulate a bottom-up architecture at the level of software, interface design and compositional output as sound, image and dance.
This means that each computer, with its interfaces and outputs, will be designed as a unit that improvises a set of media/processing behaviors at the level of image and sound, based on its pre-programmed simple intelligence. The system will make the "piece" emerge, based on rules of interaction, generation, feedback and combinatorial instructions.
The computers will be locally networked in order to coordinate their behaviors as performers that create a coordinated whole.
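The bottom-up idea described above can be sketched in code. The following Python fragment is purely illustrative (the installation itself is planned in Jitter, not Python; the behavior names and sensor keys are hypothetical): each unit holds a priority-ordered set of simple behaviors, and at each step the highest-priority behavior whose condition holds suppresses all the others — the basic subsumption rule.

```python
# Illustrative subsumption-style behavior selection for one computer "unit".
# All names and thresholds are hypothetical, not from the actual piece.

class Behavior:
    """A simple behavior: fires when its condition holds for the sensor input."""
    def __init__(self, name, condition, action):
        self.name = name
        self.condition = condition
        self.action = action

def step(behaviors, sensors):
    """Subsumption rule: the first (highest-priority) applicable behavior
    suppresses all lower ones; exactly one behavior acts per step."""
    for b in behaviors:  # list ordered highest priority first
        if b.condition(sensors):
            return b.action(sensors)
    return None

# Hypothetical behavior repertoire for one unit:
behaviors = [
    Behavior("react", lambda s: s["motion"] > 0.5,
             lambda s: "burst of sound + image distortion"),
    Behavior("drift", lambda s: s["light"] > 0.2,
             lambda s: "slow abstract pattern"),
    Behavior("idle", lambda s: True,
             lambda s: "dark, near-silent hum"),
]

print(step(behaviors, {"motion": 0.8, "light": 0.9}))
# → burst of sound + image distortion
```

No behavior plans ahead; the "composition" is only the trace of these local decisions over time, which is what lets the piece emerge rather than be scripted.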
In that sense, the performance system or installation will be conceived as a bottom-up robotic multimedia environment in which I attempt to study the deployment of improvisational, and therefore compositional, intelligence at three main conceptual levels across all the elements of a system that performs a contextualized task: agency, autonomy and control.
The outputs will never be the same and will be in constant flux. Light, information
processing and physical actions will couple.
Sound and imagery will fluctuate between abstract matrix compositional patterns and bio/anthropomorphic patterns, based only on live video input (light), sound generation, and sound and visual feedback. In this way, camera movements and media outputs will interact with each other and develop emergent properties.
Video will be programmed as a mutable composition generated in real time in 2D and/or 3D renderings. The computational/dance system will evolve following parameters of relation with the users and between the compositional elements.
For Göttelborn, I would like to experiment with motion tracking, laser-beam tracking, sensor-driven media and data mining to create a participatory/performance experience using the keys and clothing baskets of the changing rooms. The different compositions will make different patterns of color and sound emerge. I imagine the changing rooms, with their keys and baskets, as an interactive gateway to a playground of hope and fear.
I would like to bring in seismic data and GPS satellite information as an uncontrollable variable, modifying the imagery and sound in real time to create a kind of technological cosmology and a sense of the ongoing passing of time and boundless space. I would like to illustrate the necessity of relinquishing control, and the determination needed to execute a task, as preparation for the known darkness of the mines.
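One way the seismic variable could drive the media is a simple normalized mapping from event magnitude to image and sound controls. This is a minimal sketch under stated assumptions — the feed, the parameter names and the scaling are all hypothetical, and the real mapping would live inside the Jitter patch:

```python
# Hypothetical mapping from an incoming seismic reading to media parameters.
# The parameter names and 0..9 magnitude range are illustrative assumptions.

def seismic_to_params(magnitude, max_magnitude=9.0):
    """Scale a seismic magnitude into normalized image/sound controls (0..1)."""
    level = min(max(magnitude, 0.0), max_magnitude) / max_magnitude
    return {
        "image_turbulence": level,           # more distortion as the earth moves
        "sound_density": 0.1 + 0.9 * level,  # denser texture for larger events
    }

params = seismic_to_params(4.5)
print(params)  # a mid-size event yields mid-range turbulence and density
```

Because the seismic input cannot be predicted or controlled, every mapped parameter inherits that uncontrollability — which is exactly the point.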
I imagine one computer with a camera in each room/cell, tracking the motion of the baskets as a grid. The users will play with this giant instrument, creating a 3D soundscape/topography of speed and disappearance. A dance performance will take place at different times.
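The per-basket tracking could be as simple as frame differencing over a grid of cells, one cell per basket position. The sketch below is an assumption about method, not a description of the actual patch (the real version would process live camera matrices in Jitter): it compares two grids of brightness values and reports which cells changed enough to count as motion.

```python
# Sketch of per-cell motion detection by frame differencing over a grid
# of basket positions. Grids of brightness values stand in for video frames.

def grid_motion(prev, curr, threshold=10):
    """Return a grid of booleans: True where a cell's brightness
    changed by more than the threshold between frames."""
    return [[abs(c - p) > threshold for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

prev = [[0, 0], [0, 0]]   # previous frame: all cells dark and still
curr = [[0, 50], [5, 0]]  # one basket moved enough to trip its cell
print(grid_motion(prev, curr))  # [[False, True], [False, False]]
```

Each True cell would then trigger that basket's sound/image voice, so the room of baskets plays like one large gridded instrument.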
In a sense the environment will be generative and interactive. The visitor
can only have a relative impact.
I will be using Jitter, optical tracking systems, projectors, Teleo systems and sound amplifiers.
General Description of the Lab