k.log

[image: memoirstiny.jpg]

Tonight marks the first post to something called the “k.log”. It is a project I have wanted to implement for some time now. It should reinforce a daily writing routine as well.

what is it?

a notebook, scanned, and posted on the internet.

enjoy!

week.5/6

Tracking Blobs | MIDTERM WORKSHOP

Workshop Midterm
Noise
Blobs
Assignment: Make a controller.

ASSIGNMENT
Make a midterm.

PRODUCT
>>>>>talkiecam
way down in the code: I’ve been working with various instances of this code, which I first pulled from Chris Adamson’s QuickTime for Java: A Developer’s Notebook. In starting to modify the code, I quickly realized what a blessing Dano’s vxp has been. I am now in the land of a different sort of QuickTime, and I am at the point where I need to start from scratch with the Adamson book. Something is not working in getting the code to check sound levels to control the start and stop of capture. It seems like a relatively simple thing. I’ve been working on it now for three weeks.
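For reference, here is the general shape of what I am after. This is not the QuickTime for Java code (the SequenceGrabber side is exactly what is fighting me); it is a generic Java sketch of the same sound-level gate using javax.sound.sampled, with an RMS level check standing in for whatever QTJ exposes. The threshold is a made-up number to tune, and the printlns mark where the real start/stop capture calls would go.

import javax.sound.sampled.*;

// Generic sound-level gate, sketched with javax.sound.sampled rather than QTJ.
public class LevelGate {
    static final double THRESHOLD = 0.05; // RMS level that counts as "talking"; tune per room/mic

    public static void main(String[] args) throws Exception {
        AudioFormat fmt = new AudioFormat(22050f, 16, 1, true, true); // 16-bit signed mono, big-endian
        TargetDataLine line = AudioSystem.getTargetDataLine(fmt);
        line.open(fmt);
        line.start();

        byte[] buf = new byte[2048];
        boolean capturing = false;
        while (true) {
            int n = line.read(buf, 0, buf.length);
            double level = rms(buf, n);
            if (!capturing && level > THRESHOLD) {
                capturing = true;   // here: tell the grabber to start capture
                System.out.println("start capture, level = " + level);
            } else if (capturing && level <= THRESHOLD) {
                capturing = false;  // here: tell the grabber to stop capture
                System.out.println("stop capture, level = " + level);
            }
        }
    }

    // root-mean-square of 16-bit big-endian samples, normalized to 0..1
    static double rms(byte[] b, int n) {
        double sum = 0;
        int count = Math.max(1, n / 2);
        for (int i = 0; i + 1 < n; i += 2) {
            double v = ((b[i] << 8) | (b[i + 1] & 0xff)) / 32768.0; // sign-extend high byte
            sum += v * v;
        }
        return Math.sqrt(sum / count);
    }
}

A real version would also want a hangover timer so the gate doesn’t chatter open and closed between words.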
[image: eclipse.jpg]

margin of error

[image: panasonic27spill.jpg]
Why did you do it?
Huh?
Why did you do it, New York Times? I was reading your daily news, the stuff that’s fit to print anyway, and you turned on a little tv. Right there. Right next to the article about “Taking Spying to Higher Level, Agencies Look for More Ways to Mine Data.” Panasonic’s new ‘book of toughness’ has come out with a tv spot on the internet. Full video quality. Dante’s Inferno in Latin, that’s a toughbook. Computers are not books, Panasonic. Stop jump-cutting in the margins of my paper. Well…there’s my problem. A screen is not a newspaper.

week.4

4: Hands

Lighting, Manual Cameras, IR, Polarization, Retroreflective IR flood lights
Grouping Pixels: Points, Rectangles

ASSIGNMENT

Make a controller.
Reading: Visual Intelligence, Mind Hacks

PRODUCT?

week.3

3: Body

ASSIGNMENT

Make an electronic glass.
Reading: Mirror Neurons

MATTER

>>>me in your life

For the “electronic-glass” project, I am interested in “augmenting reality” by placing something within the “reflected frame” of an individual. Things are not what they seem. The technical aspect seems straightforward enough. I plan to install the cam-screen setup in an indoor hallway, to control my background as well as to limit my depth noise. Background deletion and edge detection could then be used to paint “pre-recorded” pixels onto a buffered image, which is then displayed to the screen. This is where I may have trouble wrapping my head around the thing.

For the sake of discussion, let’s say that what I am inserting into the scene is a pre-recorded loop of myself, in profile with a cupped hand, whispering to the moving objects (people) in the frame. This means I have two sources:
1. the live video from an external camera, with the same background as source #2
2. the pre-recorded loop of myself, against the same background as source #1

It is my instinct to segment out just the “me” pixels from the pre-recorded loop and paint those specific pixels at a certain location on the live “buffered image”, based on movement within the frame (i.e., a person).

As a proof of concept, perhaps it would make sense to first “paint” a still image onto the live buffered image. Let’s start there.
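A minimal sketch of that proof of concept, assuming the frames arrive as java.awt.image.BufferedImages of matching size and that I have a clean shot of the empty background. The class, method, and threshold here are mine, not from vxp:

import java.awt.image.BufferedImage;

// Proof of concept: paint the non-background pixels of a still (the
// pre-recorded "me" frame) onto the live frame.
public class GlassComposite {
    static final int THRESHOLD = 40; // summed per-channel difference that counts as "not background"

    // live: current camera frame; still: the pre-recorded frame;
    // bg: a shot of the empty background. All must share dimensions.
    static BufferedImage paint(BufferedImage live, BufferedImage still, BufferedImage bg) {
        int w = live.getWidth(), h = live.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int s = still.getRGB(x, y);
                // keep the "me" pixel when it differs enough from the background,
                // otherwise pass the live pixel through
                out.setRGB(x, y, differs(s, bg.getRGB(x, y)) ? s : live.getRGB(x, y));
            }
        }
        return out;
    }

    static boolean differs(int a, int b) {
        int dr = Math.abs(((a >> 16) & 0xff) - ((b >> 16) & 0xff));
        int dg = Math.abs(((a >> 8) & 0xff) - ((b >> 8) & 0xff));
        int db = Math.abs((a & 0xff) - (b & 0xff));
        return dr + dg + db > THRESHOLD;
    }
}

Positioning those pixels relative to a tracked person would then just mean adding an (x, y) offset when writing into out.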

PRODUCT

Slight trouble with the code, among other things, slowed the completion of this project for a while. I still have a strong desire to complete it.

In the meantime, I’ve toyed around with another ‘webcam project’ I had in mind: to capture every jump cut on a single television station across a 24-hour period. Here is an excerpt from Fox5 showing just over 600 jump cuts in just over 30 minutes.

This is simple differencing from one frame to the next: if the percentage of change between two frames is great enough (as in a standard, jarring jump cut), snap a picture.
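The test itself, sketched in plain Java against BufferedImages (the two thresholds are made-up numbers to tune against real footage):

import java.awt.image.BufferedImage;

public class JumpCutDetector {
    static final int PIXEL_THRESHOLD = 40;  // summed per-channel change that marks a pixel as "changed"
    static final double CUT_FRACTION = 0.5; // fraction of the frame that must change to call it a cut

    // true if enough of the frame changed between prev and cur to count as a jump cut
    static boolean isJumpCut(BufferedImage prev, BufferedImage cur) {
        int w = cur.getWidth(), h = cur.getHeight(), changed = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int a = prev.getRGB(x, y), b = cur.getRGB(x, y);
                int diff = Math.abs(((a >> 16) & 0xff) - ((b >> 16) & 0xff))
                         + Math.abs(((a >> 8) & 0xff) - ((b >> 8) & 0xff))
                         + Math.abs((a & 0xff) - (b & 0xff));
                if (diff > PIXEL_THRESHOLD) changed++;
            }
        }
        return changed / (double) (w * h) > CUT_FRACTION;
    }
}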

week.2

2: Architectural

ASSIGNMENT

Extend the WebCam class. Make a camera for taking still photos of a space. What spaces are interesting to capture: buildings, doorways, skies, rooms, highways, power plants, your neighbors’ apartments, the mountains of Afghanistan, every possible perspective in the world. How is it triggered: timelapse, sound, movement, a physcomp rig, or the mouse clicks of unemployed people? How are they displayed: a sequence, a blending, a collage of sub-images and master images, a panorama, or a cubist assembly of many people’s perspectives of the same thing? Where are they published: back in the space, on the web, on a phone, or on the wall?

PRODUCT

It has been my experience that most instances of interactive video and “webcam” projects deal primarily within the realm of the event: capturing an event / an event triggering an event / an event initiating the action. In today’s media-saturated and hyper-everythinged world (blah blah blah), it is a rare public event when things become still.

For my first webcam project I have decided to explore the notion of the non-event: stagnation.

Extending the WebCam class, I came up with the StagnantCam. StagnantCam uses methods implemented in MotionDetectorCam and WebCam. The basic logic:
1. Measure the percentage of change between the previous frame and the current frame.
2. If that percentage is below a certain threshold, start a timer, grab a new frame, and begin again (the current frame becomes the previous frame).
3. If the percentage of change stays below the threshold, keep the timer running.
4. If the timer has run long enough, take a picture and start over.
5. If the timer has not run long enough, keep it running and start over.
6. If the percentage of change rises above the threshold, reset the timer and start over.
This logic assures “stillness” before a picture is taken, as opposed to earlier incarnations of the code, which required stillness only across two frames of video (about 1/15th of a second).
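In plain Java, the timer logic looks roughly like the sketch below. percentChange and snapPicture stand in for the vxp WebCam/MotionDetectorCam methods; the names and constants are placeholders, not the library’s actual API.

// Sketch of the StagnantCam timer logic.
public class StagnantCamSketch {
    static final double CHANGE_THRESHOLD = 2.0;   // percent change that still counts as "still"
    static final long   STILLNESS_MILLIS = 30000; // how long stillness must hold before a shot

    long stillSince = -1; // -1 means the timer is not running

    // called once per captured frame with the frame-to-frame percent change
    void newFrame(double percentChange) {
        long now = System.currentTimeMillis();
        if (percentChange < CHANGE_THRESHOLD) {
            if (stillSince < 0) stillSince = now;        // below threshold: start the timer
            if (now - stillSince >= STILLNESS_MILLIS) {  // timed long enough?
                snapPicture();
                stillSince = -1;                         // take a picture and start over
            }
        } else {
            stillSince = -1;                             // motion: reset the timer, start over
        }
    }

    void snapPicture() { /* save the current frame to disk */ }
}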

In this way, a picture will only be taken during moments of extreme stillness, stagnation. With moderate fine tuning, the code can be adapted to any environment.

Running the code on the floor of ITP, I’ve noticed that the camera doesn’t necessarily trip only when there are no individuals in the frame. The camera trips most often when the things in the frame are at least semi-permanent fixtures in that environment. Whereas an individual passing down a hallway at a distance of 100 feet will probably not be captured, an individual who passes down the hallway, stops to get a drink at the drinking fountain, and moves on probably will be.

Other ideas for further iterations of this project include an interface that fades slowly between the display of the second-to-last captured frame and the display of the last captured frame.
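That fade is easy to sketch with Java’s AlphaComposite: call blend with an alpha that ramps slowly from 0 to 1 and the last capture crossfades in over the previous one. The class and method names here are hypothetical.

import java.awt.AlphaComposite;
import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Blend the second-to-last capture into the last one at a given alpha (0.0 .. 1.0).
public class FadeSketch {
    static BufferedImage blend(BufferedImage older, BufferedImage newer, float alpha) {
        BufferedImage out = new BufferedImage(older.getWidth(), older.getHeight(),
                                              BufferedImage.TYPE_INT_RGB);
        Graphics2D g = out.createGraphics();
        g.drawImage(older, 0, 0, null);  // base layer: second-to-last frame
        g.setComposite(AlphaComposite.getInstance(AlphaComposite.SRC_OVER, alpha));
        g.drawImage(newer, 0, 0, null);  // fade the last frame in on top
        g.dispose();
        return out;
    }
}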

A link to the StagnantCam code can also be found here.