week.3

3:BODY

ASSIGNMENT

Make an electronic glass.
Reading : Mirror Neurons

MATTER

>>>me in your life

For the “electronic-glass” project, I am interested in “augmenting reality” by placing something within the “reflected frame” of an individual. Things are not what they seem. The technical side seems straightforward enough. I plan to install the camera-and-screen setup in an indoor hallway, both to control my background and to limit depth noise. Background deletion and edge detection could then be used to paint “pre-recorded” pixels onto a buffered image before displaying it to the screen. This is where I may have trouble wrapping my head around the thing.
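The background-deletion step could be sketched roughly like this. This is just an illustrative NumPy sketch, not the actual project code; the function name `background_mask` and the threshold value are my own placeholders, and a controlled hallway background is assumed (a single reference frame captured with nobody in view).

```python
import numpy as np

def background_mask(frame, background, threshold=30):
    """Return a boolean mask of pixels that differ from a reference background.

    frame, background: HxWx3 uint8 arrays of the same size.
    threshold: hypothetical tuning value; higher = less sensitive to noise.
    """
    # Signed difference per channel, then take the largest channel difference
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff.max(axis=2) > threshold
```

The controlled indoor background matters here: the simpler the background, the lower the threshold can go before noise starts leaking through the mask.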

For the sake of discussion, let’s say that what I am inserting into the scene is a pre-recorded loop of myself (in profile, hand cupped, whispering to the moving objects (people) in the frame). This means I have two sources:
1. the live video from an external camera, shot against the same background as source #2
2. the pre-recorded loop of myself, shot against the same background as source #1

My instinct is to segment out just the “me” pixels from the pre-recorded loop and paint those specific pixels at a certain location on the live “buffered image,” based on movement within the frame (i.e. a person).

As a proof of concept, perhaps it would make sense to first “paint” a still image onto the live buffered image. Let’s start there.
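That proof-of-concept step, painting masked pixels onto a live buffered frame at some position, might look something like this. Again a hedged sketch: `paint`, `sprite`, and the (x, y) offset are illustrative names, and the mask is assumed to come from whatever segmentation step produces the “me” pixels.

```python
import numpy as np

def paint(live, sprite, mask, x, y):
    """Copy sprite pixels where mask is True onto the live frame at (x, y).

    live: HxWx3 uint8 frame (modified in place and returned).
    sprite: hxwx3 uint8 image (e.g. one frame of the pre-recorded loop).
    mask: hxw boolean array marking which sprite pixels to paint.
    """
    h, w = mask.shape
    # Slice out the destination region; NumPy slicing gives a view,
    # so assigning into it writes through to the live frame.
    region = live[y:y + h, x:x + w]
    region[mask] = sprite[mask]
    return live
```

For a first test, the sprite could be a still image with a hand-drawn mask; swapping in per-frame masks from the pre-recorded loop would come later.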

PRODUCT

Trouble with the code, among other things, slowed the completion of this project for a while. I still have a very strong desire to finish it.

In the meantime, I’ve toyed around with another ‘webcam project’ I had in mind: to capture every jump cut on a single television station across a 24-hour period. Here is an excerpt from Fox5 showing just over 600 jump cuts in just over 30 minutes.

This is simple differencing from one frame to the next: if the percentage of change between two frames is great enough (as in a standard, jarring jump cut), snap a picture.
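The differencing test described above could be sketched as follows. A minimal NumPy sketch under my own assumptions: the thresholds (`pixel_threshold`, `cut_fraction`) are hypothetical tuning values, and the actual project may well use different ones.

```python
import numpy as np

def is_jump_cut(prev, curr, pixel_threshold=30, cut_fraction=0.5):
    """Flag a jump cut when a large fraction of pixels changed between frames.

    prev, curr: HxWx3 uint8 frames of the same size.
    pixel_threshold: how much a single pixel must change to count as "changed".
    cut_fraction: fraction of changed pixels that triggers a cut.
    """
    # Per-pixel change: largest channel difference between the two frames
    diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)).max(axis=2)
    changed = (diff > pixel_threshold).mean()
    return changed > cut_fraction
```

In a capture loop, this would run on each consecutive frame pair, saving the current frame whenever it returns True.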