4Dmix3 uses a ceiling-mounted camera to track multiple participants moving around within the installation. Infrared illuminators brighten the scene, and a visible-light filter on the lens prevents projector illumination from disrupting the tracking. The video signal is passed through a blob-detection algorithm from the open-source computer vision library OpenCV, and the statistically tracked positions are sent over the network to a machine running Pure Data (Pd), which acts as the master controller for the installation. Several Pd libraries are used, most importantly Audioscape, AudioTwist, and xJimmies. Together, these libraries allow 3-D positioning of sound loops, which emit audio, and virtual microphones, which collect sound as they move around the scene. The OpenSceneGraph graphics library organizes the 3-D content and renders it visually so that users can see the world they are playing with. Rich graphical models are designed in 3D Studio Max and exported for use with the engine. As a result, tightly integrated audiovisual interactions become possible, including the animation of 3-D graphics driven by audio parameters. In the end, the user experiences immersion in a virtual world, where their movements guide virtual microphones through a forest of audiovisual sculptures. The sculptures emit sounds that feed the user's mix as he or she travels through the musical landscape.
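The two core ideas in that pipeline can be illustrated in miniature: finding blob centroids in a binary camera frame, and mixing sound sources at a virtual microphone with distance-based attenuation. The sketch below is a simplified Python illustration under stated assumptions, not the installation's actual implementation (which uses OpenCV and Pd); the function names, the 4-connected flood fill, and the 1/distance gain law are all illustrative choices.

```python
import math
from collections import deque

def find_blobs(frame):
    """Label 4-connected foreground blobs in a binary frame (a list of
    rows of 0/1 values) and return the centroid (row, col) of each blob.
    This stands in for the OpenCV blob detection mentioned in the text."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] and not seen[r][c]:
                # Breadth-first flood fill gathering this blob's pixels.
                pixels, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids

def mic_mix(mic_pos, sources, min_dist=0.1):
    """Sum source amplitudes reaching a virtual microphone, using a
    simple 1/distance attenuation (clamped near the source). `sources`
    is a list of (x, y, amplitude) tuples; an assumed gain model, not
    the one Audioscape actually uses."""
    total = 0.0
    for sx, sy, amp in sources:
        d = max(min_dist, math.hypot(mic_pos[0] - sx, mic_pos[1] - sy))
        total += amp / d
    return total
```

In the real installation, the centroids found per frame would be smoothed over time, mapped into scene coordinates, and sent over the network to Pd, where the virtual microphone's gain toward each sound loop is computed continuously as participants move.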

The resulting installation invites you to wander and dance with your virtual avatar in a sound-drenched landscape whose rhythms, following your every move, seem to stick to your skin. The sounds gathered by your movements combine into a melody in front of the screen, while graphic emanations offer a virtual reflection of your presence in the space.

(See the 4Dmix3 project page on audioscape.org for more details)


photos by Renaud Kasma, Zack Settel



Press release: http://www.sat.qc.ca/communiques/SAT_pressrelease_4dmix3_070507.html