

WHATAREYOULOOKINGAT


GENERAL DESCRIPTION

WHATAREYOULOOKINGAT is an audience-involved, interactive meta-instrument for data sonification. The overarching goal of the performance is to extract data from the audience in a manner emulating government-sanctioned privacy infringement, to manipulate and share that data among the three primary performers, and to present it as surround-sound audio and interactive, three-channel video projection.
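
As a minimal illustration of the data-to-sound idea at the heart of the piece (the mapping below is a hypothetical stand-in for the Max 6 patches used in performance, rendering harvested text as a sequence of sine tones):

    # Hypothetical mapping sketch: render the bytes of a harvested string
    # as a sequence of sine tones in a mono WAV file. Standard library only.
    import math
    import struct
    import wave

    SAMPLE_RATE = 44100

    def byte_to_freq(b):
        # Spread byte values 0-255 over two octaves above 220 Hz.
        return 220.0 * 2 ** (b / 128.0)

    def render(text, path="sonified.wav", note_dur=0.15):
        frames = bytearray()
        for b in text.encode("utf-8"):
            freq = byte_to_freq(b)
            for i in range(int(SAMPLE_RATE * note_dur)):
                sample = 0.4 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
                frames += struct.pack("<h", int(sample * 32767))
        with wave.open(path, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(SAMPLE_RATE)
            w.writeframes(bytes(frames))

    render("WHATAREYOULOOKINGAT")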


PROGRAM NOTE

TRIGGERFISH

Performs strong and soft selection of target's real-time activity: indexes every e-mail address seen in a session by both username and domain, logs every file seen in a session by both filename and extension, scans client-side HTTP traffic, collects every phone number and associated content from digital cell phone calls, and aggregates chat activity to include username, buddylist, and machine-specific cookies.
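
A toy sketch of the kind of session indexing described above, assuming plain-text input; the patterns and index keys are simplified illustrations, not the tool's actual behavior:

    # Toy illustration of the indexing scheme described: e-mail addresses
    # keyed by both username and domain, files keyed by both name and
    # extension. The regular expressions are deliberately simplified.
    import re
    from collections import defaultdict

    EMAIL_RE = re.compile(r"\b([\w.+-]+)@([\w-]+\.[\w.-]+)\b")
    FILE_RE = re.compile(r"\b([\w-]+)\.(pdf|docx?|xlsx?|jpg|png|zip)\b", re.I)

    def index_session(text):
        index = defaultdict(set)
        for user, domain in EMAIL_RE.findall(text):
            index["email:user:" + user].add(user + "@" + domain)
            index["email:domain:" + domain].add(user + "@" + domain)
        for name, ext in FILE_RE.findall(text):
            index["file:name:" + name].add(name + "." + ext)
            index["file:ext:" + ext.lower()].add(name + "." + ext)
        return index

    session = "alice@example.org sent report.pdf to bob@example.org"
    for key, values in sorted(index_session(session).items()):
        print(key, sorted(values))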


SECONDDATE

Exploitation technique that takes advantage of web-based protocols and a man-in-the-middle position. It influences real-time communications between client and server and can quietly redirect web browsers to FOXACID malware servers for individual client exploitation. This allows mass exploitation potential for clients passing through network choke points, but it is also configurable for surgical target selection.
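
A minimal sketch of the redirect mechanism alone, with a hypothetical host and target; it shows only the HTTP mechanics, not the network vantage point the tool operates from:

    # Minimal sketch of the redirect mechanics only: a machine answering
    # in place of the real server steers the browser elsewhere with one
    # 302 response. Addresses below are documentation-range placeholders.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    REDIRECT_TARGET = "http://192.0.2.1/"  # placeholder target

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Whatever the client requested, answer with a redirect.
            self.send_response(302)
            self.send_header("Location", REDIRECT_TARGET)
            self.end_headers()

    HTTPServer(("127.0.0.1", 8080), RedirectHandler).serve_forever()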


SQUEAKYDOLPHIN

Broad real-time monitoring of online activity from YouTube video views, URLs "liked" on Facebook, Blogspot and Blogger visits, and other social media activity.
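
A toy sketch of this sort of broad activity tallying; the platform list and URLs below are illustrative:

    # Toy aggregation sketch: classify a stream of visited URLs by
    # platform and tally in real time. The URLs are illustrative.
    from collections import Counter
    from urllib.parse import urlparse

    PLATFORMS = {
        "youtube.com": "YouTube view",
        "facebook.com": "Facebook like",
        "blogspot.com": "Blogspot/Blogger visit",
    }

    def classify(url):
        host = urlparse(url).netloc.lower()
        for domain, label in PLATFORMS.items():
            if host == domain or host.endswith("." + domain):
                return label
        return "other"

    stream = [
        "https://www.youtube.com/watch?v=abc123",
        "https://www.facebook.com/some.page",
        "https://example.blogspot.com/2013/01/post.html",
    ]
    print(Counter(classify(u) for u in stream))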


TECHNICAL DESCRIPTION

A variety of technologies are employed to create the immersive, data-gathering environment that forms the centerpiece of WHATAREYOULOOKINGAT. One of the primary sources of personal information is the Facebook API. Using a publicly available URL scheme, WHATAREYOULOOKINGAT retrieves a wealth of information uploaded to a person’s Facebook page, including their profile picture, birthdate, city of residence, profession, and assorted tastes in music, movies, and books. The text from this rich source of data is presented via the Apple text-to-speech tool, driven from Max 6 for streamlined generation and vocal variation.
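
A hedged sketch of this data path, assuming a Graph API-style URL; the user ID, access token, and field list are placeholders, and current versions of the API require a token where earlier ones did not. The piece itself drives the Apple text-to-speech tool from within Max 6:

    # Hypothetical sketch of the data path: fetch public profile fields
    # from a Graph API-style URL, then hand the text to Apple's "say"
    # command (macOS). USER_ID, ACCESS_TOKEN, and FIELDS are placeholders.
    import json
    import subprocess
    import urllib.request

    USER_ID = "someuser"          # placeholder
    ACCESS_TOKEN = "YOUR_TOKEN"   # placeholder
    FIELDS = "name,birthday,location,music,movies,books"

    url = ("https://graph.facebook.com/" + USER_ID
           + "?fields=" + FIELDS + "&access_token=" + ACCESS_TOKEN)
    with urllib.request.urlopen(url) as resp:
        profile = json.load(resp)

    # Flatten the string-valued fields into one utterance.
    text = ". ".join(k + ": " + v for k, v in profile.items()
                     if isinstance(v, str))
    subprocess.run(["say", text])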


The visual elements relied on a custom-designed interactive lighting system. A projector mounted to the lighting grid cast its image 90° down to the floor via a first-surface mirror. An Xbox 360 Kinect, mounted alongside the projector, was used for its inexpensive infrared camera, and a medium-sized infrared LED lamp was mounted nearby to illuminate the darkened performance space. The system was routed into Max 6 and made use of Jean-Marc Pelletier's Computer Vision for Jitter and the OpenKinect project's libfreenect library. The culminating three-channel projection again used the Facebook API to retrieve publicly accessible images of both audience members and strangers in real time, displaying them as an immersive "sea of faces."
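
A rough standalone sketch of the sensing layer; the piece routes the Kinect into Max 6 through Computer Vision for Jitter, so the Python bindings and raw depth thresholds below are illustrative assumptions:

    # Rough sensing-layer sketch. The piece processes the Kinect feed in
    # Max 6 via Computer Vision for Jitter; this standalone stand-in uses
    # libfreenect's Python bindings instead, and the raw 11-bit depth
    # thresholds are guesses, not calibrated values.
    import freenect
    import numpy as np

    NEAR, FAR = 400, 900  # raw 11-bit depth band (hypothetical)

    while True:
        depth, _ = freenect.sync_get_depth()   # 640x480 array of raw depth
        mask = (depth > NEAR) & (depth < FAR)  # band where a viewer might stand
        if mask.any():
            # Centroid of the masked pixels: a crude "where is the viewer"
            # signal that could drive the projected lighting.
            ys, xs = np.nonzero(mask)
            print("presence at x=%.0f, y=%.0f" % (xs.mean(), ys.mean()))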