Introduction

Watch an introductory video about Flock (5 minutes).

In Flock, music notation, electronic sound, and video animation are all generated in real time based on the locations of musicians, dancers, and audience members as they move and interact with each other.

Computer vision software analyzes images from an overhead video camera to calculate the position of each participant. (Everyone wears a lighted hat to facilitate efficient and reliable tracking.) The software then generates music notation for each saxophonist based on that location data; the notation is sent wirelessly to a display mounted on each player’s instrument. The notation sometimes displays music on conventional staves but often uses graphical contours, along with pitch labels, dynamics, and articulations, to guide the musicians’ improvisation. The music played by the saxophonists ranges from pointillistic bursts and slowly changing drones to rhythmically dense textures full of sudden register shifts, undulating arpeggios, and multiphonics.
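
As an illustration of how such tracking might work, here is a minimal sketch that finds the lighted hats as bright blobs in a camera frame, by thresholding the image and computing blob centroids. It assumes Python with the OpenCV library; Flock's actual tracking software is not documented here, so every function name and parameter below is illustrative.

    import cv2

    def track_participants(frame, threshold=220, min_area=20.0):
        """Return (x, y) centroids of bright blobs (the lighted hats) in one frame."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        # [-2] keeps this working across OpenCV 3 and 4 return conventions
        contours = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                    cv2.CHAIN_APPROX_SIMPLE)[-2]
        positions = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] >= min_area:          # skip tiny specks of noise
                positions.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        return positions

    cap = cv2.VideoCapture(0)                 # the overhead camera
    ok, frame = cap.read()
    if ok:
        print(track_participants(frame))
    cap.release()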

A variety of algorithms generate the notation. Sometimes each participant's x and y position generates a note whose measure position corresponds to x and whose pitch corresponds to y. Other times, the distances and angles between saxophonists and other participants generate the notes; as more people come closer to a saxophonist, that player's real-time notation becomes denser and more complex. And often, participants create motion trails on the notation as they move over time. Dozens of other algorithmic parameters control everything from dynamics and articulations to pitch-set quantizations and point clustering. During the performance, a graphical interface steps through structural changes and enables live tweaks to the algorithms.
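
The first of those mappings can be sketched concretely. In the hypothetical code below, x selects a note's position within the measure, y selects its pitch (quantized to a fixed pitch set), and the number of participants near a saxophonist scales how many notes that player receives; the pitch set, grid resolution, and radius are invented for illustration, not taken from Flock.

    import math

    PITCH_SET = [60, 62, 63, 65, 67, 70]     # an illustrative set of MIDI pitches
    BEATS_PER_MEASURE = 16                   # sixteenth-note grid

    def note_for(x, y, width, height):
        """Map a camera-space position to (measure position, MIDI pitch)."""
        beat = min(int(x / width * BEATS_PER_MEASURE), BEATS_PER_MEASURE - 1)
        h = 1.0 - y / height                 # image y grows downward; invert it
        pitch = PITCH_SET[int(h * (len(PITCH_SET) - 1))] + 12 * int(h * 2)
        return beat, pitch

    def density_for(sax_pos, others, radius=2.0):
        """More participants within `radius` meters -> more notes per measure."""
        near = sum(1 for p in others if math.dist(sax_pos, p) < radius)
        return min(1 + near, BEATS_PER_MEASURE)

Each measure, a generator along these lines would sample density_for(...) positions near a saxophonist and run each through note_for(...) to fill that player's display.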

Participants' positions also generate real-time projected video animation. The animation shows a 3D representation of the performance space, highlights the music as performers play, and visually connects each musician to the participants who generate their notation.

Position data also generates electronic sound. Each participant or group activates a single musical event in each measure, either exciting a physical model of a plucked string or struck percussion instrument, or resonating a partial of a spectral sound model. Small position variations, as well as parameters such as cluster size and velocity, subtly change the timbre of each sound; larger position changes affect the sound’s distribution to the eight-channel speaker system.
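
The plucked-string model could, for example, be realized with Karplus-Strong synthesis, and the eight-channel distribution with a simple amplitude-panning law; the sketch below assumes both purely as an illustration, not as a description of Flock's actual synthesis engine.

    import math
    import random

    SAMPLE_RATE = 44100

    def plucked_string(freq, duration=1.0, damping=0.996):
        """Karplus-Strong: a noise burst circulating through an averaging delay line."""
        period = int(SAMPLE_RATE / freq)
        line = [random.uniform(-1.0, 1.0) for _ in range(period)]   # excitation
        out = []
        for i in range(int(SAMPLE_RATE * duration)):
            s = line[i % period]
            # the averaging lowpass gives the string its decay; damping shapes timbre
            line[i % period] = damping * 0.5 * (s + line[(i + 1) % period])
            out.append(s)
        return out

    def octophonic_gains(angle):
        """Spread one mono event across eight speakers arranged in a ring."""
        gains = [max(math.cos(angle - 2 * math.pi * k / 8), 0.0) ** 2
                 for k in range(8)]
        total = sum(gains) or 1.0
        return [g / total for g in gains]

In this framing, small position variations would nudge parameters like damping (the subtle timbral changes described above), while larger moves would change angle and thus the gains sent to the eight speakers.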

Dancers help facilitate the audience's participation in the performance, inviting them out of their seats, organizing them into small groups, and distributing simple printed instruction cards. By inviting the audience to help create the unique music and visuals for each performance, Flock seeks to reconcile concert performance with the dynamics of collaborative creation, multi-player games, and social networks to create an engaging live event.

Flock is designed for presentation in a black-box theater, art gallery, or other space with flexible seating and controlled lighting. Each performance can accommodate approximately 100 audience members and lasts about an hour. Sections of Flock can also be presented individually as a gallery installation or in a conventional concert venue.

Flock was commissioned by the Adrienne Arsht Center for the Performing Arts in Miami, with additional support from the Funding Arts Network, the Georgia Tech Foundation, and Georgia Tech's GVU Center. It premiered in five performances, produced by iSAW, in Miami in December 2007 during Art | Basel | Miami Beach. It was subsequently presented with the Rova Saxophone Quartet in four performances at 01SJ in San Jose, California, in June 2008.

copyright (c) 2007 by Jason Freeman