margin of error

panasonic27spill.jpg
Why did you do it.
Huh?
Why did you do it, New York Times? I was reading your daily news, the stuff that’s fit to print anyway, and you turned on a little tv. Right there. Right next to the article about “Taking Spying to Higher Level, Agencies Look for More Ways to Mine Data.” Panasonic’s new ‘book of toughness’ has come out with a tv spot on the internet. Full video quality. Dante’s Inferno in Latin, that’s a toughbook. Computers are not books, Panasonic. Stop jump-cutting in the margins of my paper. Well…there’s my problem. A screen is not a newspaper.

week.4

4: Hands

Lighting, Manual Cameras, IR, Polarization, Retroreflective IR Flood Lights
Grouping Pixels: Points, Rectangles

ASSIGNMENT

Make a controller.
Reading: Visual Intelligence, Mind Hacks

PRODUCT?

week.3

3:BODY

ASSIGNMENT

Make an electronic glass.
Reading : Mirror Neurons

MATTER

>>>me in your life

For the “electronic-glass” project, I am interested in “augmenting reality” by placing something within the “reflected frame” of an individual. Things are not what they seem. The technical aspect seems straightforward enough. I plan to install the camera-and-screen setup in an indoor hallway to control my background as well as to limit my depth noise. Background deletion and edge detection could then be used to paint “pre-recorded” pixels onto a buffered image, which is then displayed on the screen. This is where I may have trouble wrapping my head around the thing.

For the sake of discussion, let’s say that what I am inserting into the scene is a pre-recorded loop of myself, in profile with a cupped hand, whispering to the moving objects (people) in the frame. This means I have two sources:
1. the live video from an external camera, with the same background as source #2
2. the pre-recorded loop of myself, against the same background as source #1

It is my instinct to segment out just the “me” pixels from the pre-recorded loop of myself and paint those specific pixels at a certain location on the live “buffered image” based on movement within the frame (i.e. a person).

As a proof of concept, perhaps it would make sense to first “paint” a still image onto the live buffered image. Let’s start there.
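Since it helps me to see things spelled out, here is a minimal Processing sketch of that proof of concept, assuming the pre-recorded frame and the live feed share a stored background image. The filenames, the brightness-only comparison, and the threshold are placeholders of mine, not the class library we are using:

import processing.video.*;

Capture live;          // live camera feed (source #1)
PImage bg;             // shared empty-hallway background
PImage me;             // one frame of the pre-recorded "me" loop (source #2)
int threshold = 40;    // per-pixel brightness difference that counts as "me"

void setup() {
  size(320, 240);
  live = new Capture(this, width, height);
  live.start();
  bg = loadImage("background.jpg");  // placeholder filename
  me = loadImage("meFrame.jpg");     // placeholder filename
}

void draw() {
  if (!live.available()) return;
  live.read();
  PImage buffered = live.get();      // buffered copy of the live frame
  buffered.loadPixels();
  me.loadPixels();
  bg.loadPixels();
  for (int i = 0; i < buffered.pixels.length; i++) {
    // a pixel belongs to "me" where the loop frame differs
    // enough from the shared background
    if (abs(brightness(me.pixels[i]) - brightness(bg.pixels[i])) > threshold) {
      buffered.pixels[i] = me.pixels[i];  // paint the "me" pixel
    }
  }
  buffered.updatePixels();
  image(buffered, 0, 0);
}

The real version would step through the loop frame by frame rather than using one still, but the painting step stays the same.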

PRODUCT

Slight trouble with the code, among other things, slowed the completion of this project for a while. I still have a very strong desire to complete the thing.

In the meantime, I’ve toyed around with another ‘webcam project’ I had in mind: capture every jump cut on a single television station across a 24-hour period. Here is an excerpt from Fox5 showing just over 600 jump cuts in just over 30 minutes.

This is simple differencing from one frame to the next: if the percentage of change between two frames is great enough (as in a standard, jarring jump cut), snap a picture.
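Boiled down, the differencing looks something like this Processing sketch. The per-pixel threshold and the 40%-of-the-frame cut-off are stand-ins for whatever I actually tuned against Fox5:

import processing.video.*;

Capture cam;
PImage prev;                     // previous frame
float cutThreshold = 0.4f;       // fraction of pixels that must change at once
int shots = 0;

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);
  if (prev != null) {
    int changed = 0;
    cam.loadPixels();
    prev.loadPixels();
    for (int i = 0; i < cam.pixels.length; i++) {
      if (abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i])) > 30) {
        changed++;
      }
    }
    // if enough of the frame changed between two frames, call it a jump cut
    if ((float) changed / cam.pixels.length > cutThreshold) {
      saveFrame("cut-" + nf(shots++, 4) + ".jpg");
    }
  }
  prev = cam.get();
}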

week.2

2:Architectural

ASSIGNMENT

Extend the WebCam class. Make a camera for taking still photos of a space. What spaces are interesting to capture: buildings, doorways, skies, rooms, highways, power plants, your neighbors’ apartments, the mountains of Afghanistan, every possible perspective in the world? How is it triggered: time-lapse, sound, movement, a physcomp rig, or mouse clicks of unemployed people? How are the photos displayed: a sequence, a blending, a collage of sub-images and master images, a panorama, or a cubist assembly of many people’s perspectives of the same thing? Where are they published: back in the space, on the web, on a phone, or on the wall?

PRODUCT

It has been my experience that most instances of interactive video and “webcam” projects deal primarily within the realm of the event: capturing an event / an event triggering an event / an event initiating the action. In today’s media-saturated and hyper-everythinged world, blah blah blah, it is a rare public event when things become still.

For my first webcam project I have decided to explore the notion of the non-event: stagnation.

Extending the WebCam class, I came up with the StagnantCam. StagnantCam uses methods implemented in MotionDetectorCam and WebCam. The basic logic:

* Measure the percentage of change between the previous frame and the current frame.
* If that percentage is below a certain threshold, start a timer, grab a new frame of video, and let the old “current” frame become the previous frame.
* If the percentage of change stays below the threshold, keep the timer running.
* If the timer has run long enough, take a picture and start over. If it has not, keep the timer running and start over.
* If the percentage of change rises above the threshold, reset the timer and start over.

This pseudo-code assures “stillness” before the taking of a picture, as opposed to earlier incarnations of the code, which would merely take stillness across two frames of video (about 1/15th of a second).

In this way, a picture will only be taken during moments of extreme stillness: stagnation. With moderate fine-tuning, the code can be adapted to any environment.
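To make that concrete, here is a stripped-down Processing version of the same logic. It is not the actual StagnantCam (which extends the course’s WebCam class); the pixel threshold, the stillness threshold, and the five-second wait are placeholder values:

import processing.video.*;

Capture cam;
PImage prev;
float stillThreshold = 0.02f;  // max fraction of changed pixels to count as "still"
int stillSince = -1;           // millis() when stillness began; -1 = not still
int requiredStillness = 5000;  // ms of continuous stillness before a capture

void setup() {
  size(320, 240);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (!cam.available()) return;
  cam.read();
  image(cam, 0, 0);
  if (prev != null) {
    int changed = 0;
    cam.loadPixels();
    prev.loadPixels();
    for (int i = 0; i < cam.pixels.length; i++) {
      if (abs(brightness(cam.pixels[i]) - brightness(prev.pixels[i])) > 30) changed++;
    }
    float change = (float) changed / cam.pixels.length;
    if (change < stillThreshold) {
      if (stillSince < 0) stillSince = millis();        // start the timer
      if (millis() - stillSince > requiredStillness) {  // timed long enough?
        saveFrame("stagnant-####.jpg");
        stillSince = -1;                                // start over
      }
    } else {
      stillSince = -1;                                  // motion: reset the timer
    }
  }
  prev = cam.get();
}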

Running the code on the floor of ITP, I’ve noticed that the camera doesn’t necessarily trip only when there are no individuals in the frame. The camera trips most often when the things in the frame are at least semi-permanent fixtures in that environment. Whereas an individual passing down a hallway at a distance of 100 feet will probably not be captured, an individual who passes down the hallway, stops to get a drink at the drinking fountain, and moves on probably will be.

Other ideas for future iterations of this project include an interface that fades slowly between the display of the second-to-last captured frame and the display of the last captured frame.
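As a quick sketch of that fading interface, assuming the two most recent captures are sitting on disk under placeholder filenames, tint() can handle the cross-fade in Processing:

PImage older, newer;

void setup() {
  size(320, 240);
  older = loadImage("stagnant-0001.jpg");  // placeholder filename
  newer = loadImage("stagnant-0002.jpg");  // placeholder filename
}

void draw() {
  // slow oscillating alpha: newer fades in and out over older
  float a = 128 + 127 * sin(millis() / 2000.0f);
  image(older, 0, 0);
  tint(255, a);        // apply alpha to the next image() call
  image(newer, 0, 0);
  noTint();
}

The real version would swap in each new capture as StagnantCam takes it, rather than loading two fixed files.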

A link to the StagnantCam code can also be found here.

week.1

syllabus

1: Hello Class

Assignment:

* Hello Java: main
* Hello Eclipse: new project, new class, run
* Hello CVS: share > new repository, update, commit
* Extra Credit: HelloProcessing, HelloWindow
* Find Example
* Reading: Golan, Head First Java p 1-150

Product:

The transition into the Eclipse environment was a smooth one perhaps only because I never picked up any steam as an even semi-proficient programmer in the Processing environment. Great for me…clean slate.

I won’t bore you here with the details of a “Hello World” in a new environment. I’ll save the “boring you” for the exciting stuff.

In regard to the Golan reading, and starting to think about the webcam assignment, I find Suicide Box by the Bureau of Inverse Technology (Natalie Jeremijenko and Kate Rich) very intriguing, both for the actual content of the piece and for the technical aspects I can imagine were used to implement it. I think I may use similar methods for a mouth-tracking project, which could turn into next week’s webcam project.

REHEARSAL | 02

IMG_0251.JPG

TASK:
.Block the show

PRODUCT:
.Working in Allan’s apartment has already proven troublesome. The dimensions of the Live Bait Theater are significantly bigger than those of the interior of this West Lakeview railroad apartment. This is not an insignificant comparison, as the ground plan for Live Bait does not exist outside of the drafting of BIG PICTURE GROUP’s master electrician, Margaret Hartman. I personally visited the space for the first time today and documented as much as I could to send to Christine Shallenberg, dependent study‘s lighting designer, as she has not yet flown in from New York.

Although it is hard to get a sense of space, everyone seems more than properly focused and prepared. Allan has impressed me with his preparation. We made a lot of progress getting him out of his head today. Getting someone out of someone else’s head definitely helps me to get myself out of my own head. Sally continues to impress with her natural ability and connectivity.

IMG_0254.JPG
We blocked the show, with a few choreographic and movement-based exceptions. Much of the stagnant, “stylized” blocking is in transition or has been changed outright. I am not sure if this is a product of doing what is right, or of simply being a bit bored with the blocking from the previous two incarnations of this particular show. (dependent study was originally presented as part of the Phoenix Season during my undergraduate years at Illinois Wesleyan University; subsequent productions include a run at P.S. 122 during the New York City International Fringe Festival.) “DISSECTION” has also been re-introduced in this production after being cut during the Fringe run.

REHEARSAL | 01

DSrehearsal11.JPG
THE TASK :
.Introduction
.Table Work
.Read through
.Ensemble Building

PRODUCT :
.A first rehearsal is always both nerrrrve-wracking and fun. I feel more of an obligation to make things right since I wrote the show. There is no “done” with your own work. No one’s published it. It’s not another author’s work that has already gotten it right. It is us. It is me. Mine. It’s twice the work, figuring out what it all means in retrospect. It also happens to be one of the most rewarding things I know of in life.

here we go.

sub_banner.gif

BIG PICTURE GROUP‘s second show, dependent study, opens on January 6th, 2005, at the Live Bait Theater in Chicago, IL. I’ve decided to keep online documentation of the process. Throughout the next several weeks, I will organize, document, and post the process as I see it relevant, actively keeping a log of discoveries, thoughts, doubts, and various works in progress.

WHETHER GETS PRESS !!!

HELL YES!

hellyes.jpg

The latest late-breaking news (to me anyway): ROCKETBOOM!

rocketboom press.jpg

Mom will be proud:

whethermanFHMthumb.jpg

DAILY OBSERVER header.jpg

whether man OBSERVER.jpg

This is all well and good. But how long can I keep up the act? How can I compete with the likes of Cecily Tynan (compare), a redheaded beauty who delivers the news with compassion and charisma to Philadelphia-area audiences? Cecily Tynan is one of the brightest and most beautiful anchors in the business, who, I have been told, was herself told on December 11th of this year to “suck it” by the proud blogger known only as the “bella vista social club.”

Another link comes from a guy whose latest blog includes the phrase:
“I am now selling some back issues of Sassy magazine (for a friend, I swear!)” Thank you, MAGNETBOX, for the link.

FINAL : ICM

The Processing Side of Things

I have decided to pair my PCOMP final with the final for Intro to Computational Media. The VCR project fulfills the requirements of both final projects. ICM’s main requirement is to use Processing.

The code has been giving me some trouble, as ITP is my first experience with a programming language of any kind. The trick with most eye tracking, as I understand it, is tracking both the pupil and the glint, the bright reflection caused by shining an infrared or similar light into the pupil. Jason Babcock is an ITP alumnus who wrote an eye-tracking application for OS X as his final. His site has been helpful. For my purposes, fortunately, I only need three differentiable areas to correspond with the three controls of the VCR: “fast-forward,” “rewind,” and “play.” View the code here.
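I won’t reproduce the tracking itself here, but the last step, once the tracking hands you an x position for the pupil, can be as simple as carving the frame into three regions. A hedged sketch (trackedX and frameWidth are assumed to come from the tracking code, not shown):

// three horizontal regions, one per VCR control
String controlFor(float trackedX, float frameWidth) {
  if (trackedX < frameWidth / 3) return "rewind";
  if (trackedX < 2 * frameWidth / 3) return "play";
  return "fast-forward";
}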
