Thursday, October 18, 2012

Transparent screen

The transparent-screen trick was popular for a while because of its see-through effect on a non-transparent object. Normally this is done with multiple shots from a tripod; a similar concept appears in movies too.
I'm thinking of doing it in real time, using a Kinect to capture both the 3D model and the texture. Then I can move the viewpoint backward in the virtual world, render that on the screen, and merge the scenes seamlessly. Any object between the screen and the Kinect would also be invisible.
The concept is there, but some technical limitations restrict the performance. First, the Kinect's field of view is too wide, which leaves the part we need at very low resolution. Also, a single Kinect cannot capture all the information needed to render a vivid image; multiple cameras may be needed.
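Here is a minimal sketch of the real-time version, assuming the SimpleOpenNI library for Processing; the pullback distance and the point-skipping step are illustrative values I made up, not tuned ones:

```
import SimpleOpenNI.*;

SimpleOpenNI kinect;
float pullback = 1000;  // hypothetical: how far (in mm) the virtual viewpoint sits behind the screen

void setup() {
  size(640, 480, P3D);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth();
  kinect.enableRGB();
  kinect.alternativeViewPointDepthToImage();  // register the RGB image to the depth map
}

void draw() {
  background(0);
  kinect.update();
  PImage rgb = kinect.rgbImage();
  rgb.loadPixels();
  PVector[] cloud = kinect.depthMapRealWorld();  // one 3D point (in mm) per depth pixel

  // Render the captured scene from a viewpoint pulled back behind the
  // screen, so the display appears to show what is behind itself.
  camera(0, 0, -pullback, 0, 0, 2000, 0, 1, 0);
  for (int i = 0; i < cloud.length; i += 4) {  // skip points for speed
    PVector p = cloud[i];
    if (p.z > 0) {  // z == 0 means no depth reading at this pixel
      stroke(rgb.pixels[i]);  // color the point with the registered camera pixel
      point(p.x, -p.y, p.z);  // flip Y: OpenNI's Y is up, Processing's is down
    }
  }
}
```

The pulled-back camera is the whole trick: the screen renders the scene from a point behind the physical display, so the display itself reads as a pane of glass.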

Friday, October 12, 2012

mannequin projection

little interactive projection mapping exercise

this is just a little prototype.. the idea is that once you stick your head into the box, you experience going down and forward as the rings move up... the video is a little confusing because my camera's framerate is too low.. it captures the projector's refresh frequency... you are only supposed to see the thin pink stripes

Projection mapping test on a modular unit


projection mapping tests

first i did a madmapper test on a door

projection mapped door from r k schlossberg on Vimeo.

then i tried to do something that would emphasize the 3rd dimension.. a projection-mapped paper crane

"enough"; a study of a physical form and overwhelming media information


enough: a study of an iconic physical form and media information from Jee Won Kim on Vimeo.

"How is our experience of a spatial form is affected when the form is filled in with dynamic and rich multimedia information? (The examples of such environments are particular urban spaces such as shopping and entertainment areas of Tokyo, Hong Kong, and Seoul where the walls of the buildings are completely covered with electronic screens and signs; convention and trade shows halls; department stores, etc,; and at the same time, any human-constructed space where the subject can access various information wirelessly on her cell phone, PDA, or laptop.) Does the form become irrelevant, being reduced to functional and ultimately invisible support for information flows? Or do we end up with a new experience in which the spatial and information layers are equally important? In this case, do these layers add up to a single phenomenological gestalt or are they processed as separate layers?" from 'The Poetics of Augmented Space' by Lev Manovich.


[original video footage]

hwa2 from Jee Won Kim on Vimeo.

Drumset Mapping

Still have a few timing issues to work out, but this is what I've got so far.

Thursday, October 11, 2012

The World's Tiniest Projection Map!

I decided to projection map onto my 3D object from last week, and this is what I got.

Perspective Transform for projection

A perspective transform maps one arbitrary 2D quadrilateral onto another. I used this method last semester for the Plinko Poetry project, because the screen I needed to track was trapezoidal; with it, I could map the screen back to a rectangle instead of moving the camera from the top to the front, where it might block users.
In projection the problem is similar: what we have on screen differs from what appears on the projected surface whenever the projector is not perpendicular to that surface. So I tried to implement the algorithm in Processing, to make projection mapping possible without MadMapper or a similar tool.
The algorithm I used is exactly the same as the one at http://xenia.media.mit.edu/~cwren/interpolator/. The transform is not a simple matrix multiplication (there is a perspective divide), so I wrote my own line function that transforms the endpoints before the line is actually drawn. Maybe it could be implemented at a deeper level to make it easier to use.
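For illustration, here is roughly what that wrapper looks like in Processing (not the actual project code; warp() and tline() are names I picked, and the 3x3 matrix H is assumed to be already solved from the four corner correspondences via the 8x8 linear system described on the linked page):

```
// H holds the perspective-transform coefficients; identity as a placeholder.
float[][] H = {
  {1, 0, 0},
  {0, 1, 0},
  {0, 0, 1}
};

// Apply the perspective transform to a single point.
PVector warp(float x, float y) {
  float w = H[2][0]*x + H[2][1]*y + H[2][2];  // the divisor is what makes it perspective, not affine
  return new PVector(
    (H[0][0]*x + H[0][1]*y + H[0][2]) / w,
    (H[1][0]*x + H[1][1]*y + H[1][2]) / w);
}

// Drop-in replacement for line(): warp both endpoints before drawing.
void tline(float x1, float y1, float x2, float y2) {
  PVector a = warp(x1, y1);
  PVector b = warp(x2, y2);
  line(a.x, a.y, b.x, b.y);
}
```

Dragging a corner just re-solves H from the updated correspondences, and every tline() call lands on the new quad automatically.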
So I have this on my screen, and it doesn't look so good on the ducts.

But I can drag the corners to remap it.

Also on the side of the duct.


Monday, October 8, 2012

Week 5

This week we started with a discussion of two 3d capture tools:

  • 123d catch can reconstruct a mesh based on a collection of 20-40 photos of a scene. You can download the .obj file from their website after the data has been processed. 123d catch can be used at all scales, limited by your camera rather than the scene.
  • reconstructme uses your kinect to create high-accuracy meshes (higher accuracy than any single kinect scan). reconstructme is best for human-scale objects and indoor scenes.

And we moved on to exploring projection mapping. There are two major paradigms:

  • Illusion-based mapping: where you try to create the appearance of false geometry in a scene, for example by providing a "window" into a space, extruding or indenting features from a surface, creating false drop shadows, etc. The 555 Kubik facade is a clear example of this technique. Illusionistic mapping is incredibly popular, but it doesn't translate to real life as well as it translates to the single perspective of web-based video.
  • Augmentation-based mapping, which has been around since at least 2006/2007 with Pablo Valbuena's "Augmented Sculpture" series. This technique does not create false geometry, just false lighting effects: shadows and reflections are generated only as if the surface were responding to virtual light sources, and colors are used to "paint" the surface rather than for the sort of trompe-l'œil of the illusionistic approach.

One of the earliest examples of projection mapping is more illusionistic, without being as cliche as most projection mapping today: Michael Naimark's "Displacements" from the early 80s was based on shooting video in a room with actors, painting the entire room white, then reprojecting the footage.

There are a number of tools available for projection mapping. Here are a few:

  • vvvv is Windows-only but used by visualists around the world for creating massively multi-projection live visuals using a patch-based development interface. The strength of vvvv for projection mapping lies in its preference for 3d visuals, and in real time feedback while prototyping.
  • madmapper is not meant for generating content, but for mapping pre-rendered content or streaming real time content via Syphon. madmapper provides an interface for selecting masks, duplicating video sources across surfaces and projectors, and warping projections to match nonplanar surfaces.
  • little projection mapping tool shares a similar spirit to madmapper, but is built with openFrameworks and the source code is available for learning or hacking.
  • mapamok uses a different paradigm, oriented towards separating calibration from content creation. mapamok loads a 3d model, and allows realtime editing of a shader to determine the look and feel of the projected visuals. Calibration is handled via a quick alignment process that requires selecting 8-12 corresponding points.

The assignment this week is simply: create and document a compelling projection mapping project. You may work with the tools we discussed in class (123d catch, reconstructme, madmapper, mapamok) or build your own. Try to break out of the paradigm of using a projector for creating a "screen". Instead of projecting onto a 2d surface of a 3d object, try projecting across an entire 3d scene, or covering an entire 3d object. Think about whether you want to make something more "illusion" oriented, or "augmentation" oriented: what aesthetic are you more interested in? Consider the difference between fabricating an object specifically for projection mapping, versus scanning/measuring an object that already exists. Think about what an interactive version of your system would look like.

At the beginning of next week's class everyone will briefly present documentation from their projection mapping project.

Setting up reconstructme

First install "OpenNI-win32*.msi", then "SensorKinect-win32*.msi", then "ReconstructMe_Installer*", and finally get the OpenCL.dll file if you see an error when trying to run reconstructme.