Immersing the audience/participant in an evolving atmosphere of light and sound, Entanglement Witness is a responsive environment: a transitional space between the real and the virtual world. Moving through the environment, participants actively engage with sound and with their own transformed image while interacting with an avatar that exists in a parallel world of light.
In a darkened room, the audience walks through a triangular space built from three 2 × 3 meter grey video screens; the video is the only light source. There they meet a video avatar, played by Cummings, who beckons them into the screen environment. An infrared camera captures images of the audience for real-time image processing, and their movements are analyzed to control aspects of the sound. The 20-minute installation is in six sections and runs continuously, accompanied by a hypnotic soundtrack in surround sound.
In several sections silhouetted images of the audience are captured and filled with moving water or moving light. The projectors are positioned so that participants may cast shadows onto the screen, or have their physical bodies “painted” with the moving light. Active movements will cause ripples in the sound. The effect is to imply a complete world, where audience members are invited to interact with each other and with the avatar, breaking down the barriers between the self and others.
Cummings appears life size, standing on the floor eye-to-eye with the audience. She earnestly tries to “make contact” with the audience, initiating an empathetic connection. However, the avatar’s movements obey their own laws of physics, because they were shot using stop-action techniques. The result is an uncanny image, in which parts of the body can move at hyper speed, or the entire body can hover and float.
Quantum theory shows that certain objects can become linked by a mysterious process called entanglement. Particles that become entangled are deeply connected, regardless of the distance between them. As artists, we use this concept to ask questions about our relationship with technology and with each other. How “entangled” are we with our electronic devices? How connected are we with people at a great distance, through email, sound and video? Is it possible to become entangled in a virtual/projected image?
Entanglement Witness creates many visceral connections that blur the boundaries between the physical and virtual: bodies creating sound, the “filled” projections of the audience and of the avatar, audience shadows on the screen and live bodies colored with moving projection light. At various times, all members of the audience and the avatar are made of the same material. There is a liberating loss of physical detail in the silhouette; what is left is the expressive outline of the body in social interaction with real and virtual characters.
Research
Our artistic research was inspired by three scientific ideas. First, from quantum physics, we adopted a poetic notion of the entanglement witness to investigate how technology enables us to connect with other people, and with their virtual representations, “at a distance.” Second, we created an immersive, “mixed reality” environment that blurs the separation between the real world and the virtual world. Finally, we were interested in current research in neuroscience on mirror neurons, which the neuroscientist V.S. Ramachandran calls “empathy neurons” because they give us the ability to feel what another person is feeling, and thereby to understand their internal experience. This operates in several ways in the installation: the relatively intimate space inside the screened area places audience members close to each other and to the avatar. The “mirroring” experience happens inside the room between participants, on screen between participants, and with the avatar, who is designed to foster an empathetic response.
Technical research for this project includes: image capture of the body using infrared cameras and background subtraction; lumakey and chromakey techniques to “fill” the silhouettes of the audience with moving video; motion sensing based on frame-difference measurements; motion-sensing data controlling audio processing filters, synthesis engines, and sample playback; surround sound; and three-channel immersive video.
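As a minimal sketch of how such a pipeline can fit together, the Python example below uses OpenCV for background subtraction, a lumakey-style silhouette fill from a looping video clip, and a frame-difference motion measure forwarded to a sound engine over OSC (via the python-osc package). The camera index, the fill clip “moving_water.mov,” the OSC address “/motion/amount,” and the choice of OpenCV and OSC are all illustrative assumptions; the installation’s actual software toolchain is not described here.

import cv2
import numpy as np
from pythonosc.udp_client import SimpleUDPClient  # pip install python-osc

# All device indices, file names, and network addresses below are
# hypothetical placeholders, not documented details of the installation.
IR_CAMERA_INDEX = 0
FILL_CLIP = "moving_water.mov"
sound_engine = SimpleUDPClient("127.0.0.1", 9000)  # assumed OSC destination

camera = cv2.VideoCapture(IR_CAMERA_INDEX)
fill = cv2.VideoCapture(FILL_CLIP)

# MOG2 maintains a statistical model of the empty room, so anything that
# moves in front of it (the audience) is segmented as foreground.
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

prev_gray = None
while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Background subtraction: isolate audience silhouettes as a binary mask.
    mask = subtractor.apply(frame)
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

    # Lumakey-style fill: pour the moving clip into the silhouette only.
    ok_fill, water = fill.read()
    if not ok_fill:  # loop the fill clip when it runs out
        fill.set(cv2.CAP_PROP_POS_FRAMES, 0)
        ok_fill, water = fill.read()
    if not ok_fill:
        break  # fill clip unavailable
    water = cv2.resize(water, (frame.shape[1], frame.shape[0]))
    output = cv2.bitwise_and(water, water, mask=mask)

    # Motion sensing: mean absolute frame difference, normalized to 0..1,
    # forwarded to the sound engine to drive filters or sample playback.
    if prev_gray is not None:
        motion = float(np.mean(cv2.absdiff(gray, prev_gray))) / 255.0
        sound_engine.send_message("/motion/amount", motion)
    prev_gray = gray

    cv2.imshow("silhouette fill", output)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc exits
        break

camera.release()
fill.release()
cv2.destroyAllWindows()

A single normalized motion value is a deliberately coarse control signal; in a three-screen space, one value per screen region would let different zones of the room “ripple” the sound independently.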