Monday, February 28, 2011

Buncha Kinect projects from CMU

The Interactive Art & Computational Design class at CMU has a bunch of interesting Kinect projects, including some projection mapping stuff, some skeleton-tracking puppetry, etc. I especially like the Magrathea dynamic landscape creation one:

Magrathea - Dynamic Landscape Generation with Kinect from Timothy Sherman on Vimeo.

Saturday, February 26, 2011

Avatar Kinect 3D scan and photo map

Microsoft demos a process for future Kinect games: taking a 3D mesh from the Kinect and mapping a flat photo over it to create a talking, photorealistic avatar:



I'd love to get my hands on that software that's translating the text into facial muscle movements...

Microsoft Research

http://www.youtube.com/watch?v=uLcE0qlWMkQ&feature=player_embedded#at=64

Thursday, February 24, 2011

Sound Spaces

Sound Spaces by Ivana & Diana

This week, the biggest hurdle we had to work through was deciding what kind of interaction we wanted to bring to the space. First, we played with the idea of changing the surroundings by adjusting the bounding box (and later, other shapes) with our interaction. After working through this, we decided it wasn't quite right: we wanted to change the space as we physically moved through it. We liked the idea of moving shapes and their surroundings and played a bit with spherical surfaces that could move with parts of the body, but again, that wasn't exactly what we wanted to do. Finally, we agreed on creating an interaction in the 3D space to control sounds. Using .wav files of various frequencies, we created boxes that turn the sounds on and off. When physical presence is detected within a box, the box turns from transparent to opaque and a sound plays. There is a bit of sound interference from the very beginning, giving it kind of a dead-radio-space feel. We were delighted by this cool, unexpected effect. You can hear and see it below:

VIMEO VIDEO.
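The box logic itself is simple. Here's a minimal sketch in plain C++ (not our actual code; the box bounds and the tracked point are placeholder values):

    #include <iostream>

    // Axis-aligned box in Kinect space (meters); bounds are made-up values.
    struct SoundBox {
        float minX, minY, minZ, maxX, maxY, maxZ;
        bool active;   // opaque + sound on when true

        bool contains(float x, float y, float z) const {
            return x >= minX && x <= maxX &&
                   y >= minY && y <= maxY &&
                   z >= minZ && z <= maxZ;
        }
    };

    int main() {
        SoundBox box{-0.2f, -0.2f, 1.0f, 0.2f, 0.2f, 1.4f, false};
        // In the real app this would be a tracked hand/body point, updated per frame.
        float px = 0.0f, py = 0.1f, pz = 1.2f;

        bool wasActive = box.active;
        box.active = box.contains(px, py, pz);
        if (box.active && !wasActive)
            std::cout << "start tone, draw box opaque\n";      // e.g. loop a .wav
        else if (!box.active && wasActive)
            std::cout << "stop tone, draw box transparent\n";
        return 0;
    }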


Video Mirror with Space Portal Ripple

First draft of the concept as seen in every other sci-fi movie. Calibration needs work.

Using the Kinect to Track Involuntary Movement for Psychiatric Testing

This past weekend I participated in the Health 2.0 Boston hack-a-thon. I worked with a group that included statisticians and a psychiatrist. We used skeleton tracking with the Kinect to automate a test for involuntary motion that psychiatrists use to track the condition of patients with neuromuscular disorders (frequently caused as a side effect of psychiatric drugs).

The app tracks the linear motion, in three dimensions, of the hands and knees of a patient who's instructed to sit completely still. The total amount of motion is compared against a pre-set quantity to determine a red, yellow, or green score. The scale was calibrated in advance by the psychiatrist demonstrating normal and abnormal amounts of motion.
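The scoring logic is roughly: accumulate frame-to-frame joint displacement over the recording and compare it to the calibrated cutoffs. A sketch in plain C++ (the thresholds below are invented placeholders; the real ones came from the psychiatrist's calibration):

    #include <cmath>
    #include <iostream>
    #include <string>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Sum of Euclidean distances travelled by one joint over a recording.
    float totalMotion(const std::vector<Vec3>& samples) {
        float total = 0.0f;
        for (size_t i = 1; i < samples.size(); ++i) {
            float dx = samples[i].x - samples[i - 1].x;
            float dy = samples[i].y - samples[i - 1].y;
            float dz = samples[i].z - samples[i - 1].z;
            total += std::sqrt(dx * dx + dy * dy + dz * dz);
        }
        return total;
    }

    // Thresholds here are placeholders, not the calibrated clinical values.
    std::string score(float motion) {
        if (motion < 0.05f) return "green";
        if (motion < 0.20f) return "yellow";
        return "red";
    }

    int main() {
        std::vector<Vec3> rightHand = {{0, 0, 2}, {0.01f, 0, 2}, {0.02f, 0.01f, 2}};
        std::cout << score(totalMotion(rightHand)) << "\n";   // prints "green"
    }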



Obviously this is a one-day prototype, but the psychiatrist was excited enough about it that he wants to get it approved for use in his clinic after a few iterations.

My team ended up winning the event and will be continuing to develop the app and presenting it in San Diego at the national Health 2.0 conference where there's some kind of big prize if we win.

Here's my full blog post about the event.

General Tau theory




For homework this week, I attempted to build an algorithm which implements Dr. David Lee's General Tau theory, as described in a recent paper, "How Movement is Guided".

A few relevant paraphrased quotes:
Principles of Animal Movement:
1) Movement requires prospective control.
2) The perceptual information guiding movement must extrapolate the movement into the future and must be readily available.
3) Movement requires constant intrinsic-cum-perceptual guidance. Intrinsic guidance is necessary because animals have to fashion movements to their purpose.
4) Movement guidance must be simple and reliable.
5) There are simple universal principles of movement guidance in animals.

Rather than using multiple pieces of information about the size, velocity, and deceleration of the motion-gap, we simply use the tau of the motion-gap. Tau is a measure of how a motion-gap is changing: it is the time-to-closure of the motion-gap at the current rate of closure, or equivalently, the first-order time-to-closure of the motion-gap.

Tau was first formulated as an optic variable that specifies time-to-collision if the closing velocity is maintained. Note that tau is not in general the actual time-to-closure of a motion-gap, because the velocity of closure may not be constant. The tau of a motion-gap is numerically equal to the ratio of the current size, x, of the motion-gap to its current rate of closure, i.e. T(x) = x / x'.
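In code, the per-frame estimate is just the gap divided by a finite-difference rate of change. A minimal sketch (the derivative would need smoothing in practice):

    #include <cmath>
    #include <iostream>

    // First-order tau: time-to-closure of motion-gap x at its current rate.
    // x and xPrev are the gap at this frame and the previous one.
    double tau(double x, double xPrev, double dt) {
        double xDot = (x - xPrev) / dt;          // finite-difference closure rate
        if (std::fabs(xDot) < 1e-6) return 1e9;  // gap barely changing: tau -> huge
        return x / xDot;
    }

    int main() {
        // Gap shrank from 0.50 m to 0.48 m over one 30 fps frame.
        // Prints about -0.8: negative because the gap is closing; |tau| is the
        // time to closure at the current rate.
        std::cout << tau(0.48, 0.50, 1.0 / 30.0) << " s\n";
    }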


If the taus of two motion-gaps remain in a constant ratio to each other, they are said to be tau-coupled, and this is basically how we use perception in action to move about a space. For example, a bat landing on a perch needs to control simultaneously the closure of two extrinsic motion-gaps: the distance motion-gap, X, between itself and the perch, and the angular motion-gap, A, between the direction line to the perch and the direction that line should assume during the final approach. Bats tau-couple A and X (T(A) = kT(X)) for a constant k throughout the maneuver.

Raising food to the mouth is tau-coupled, as is hitting a baseball, where the motion-gap between the hand and the bat is coupled with the motion-gap between the ball and the bat.

Motion-gaps are not necessarily movements of objects; they can also be changes along other dimensions. Tau coupling may work in the following relationships:
Guidance by sound (dolphins and bats).
Guidance by smell (microbes).
Guidance by infra-red radiation (rattlesnakes).
Guidance by electrical fields (fish, sharks, platypuses, and bacteria).

Tau-coupling has been studied in trombone playing, where the movements of the trombone slide, the lips, and the resulting acoustic pitch-slide are tau-coupled with a similar K value. It has also been tested in the neurological activity of monkeys:

"The hypothesis was tested by analyzing the neural power data collected from monkey motor cortex and parietal cortex area 5 during a reaching experiment. In each cortex a neural power motion-gap was found whose "tau melody" ( the temporal pattern of tau ) was proportional to the tauG melody and to the tau melody of the motion gap between the monkey's hand and the target as it reached. In the motor cortex, the neural tau melody preceded the hand movement tau melody by about 40 ms, indicating that it was prescribing the movement. In the parietal cortex area 5, the neural tau melody followed the movement tau melody by about 95ms, indicating that it was monitoring the movement."

In short, there's a lot of evidence suggesting that tau is a measure by which we understand intrinsic and extrinsic motion, and that the constant between tau-coupled motions should be useful for creating gestures between joints.

I've started to code a framework in which I calculate the relative position of the joints from the torso in spherical coordinates and use those to obtain a tau for the radius, theta, and phi between all of the joints in relation to each other. Using the OpenNI skeleton, there are something like 1,307,674,368,000 (15!) possible generic combinations of movements if you were ONLY using distance and not the angles, give or take (somewhat fewer, because you can't have a gesture with a limb in reference to itself, I think, but also quite a few more, because that doesn't say what *direction* they're moving in, or even consider tau-coupling distance with one or both of the angles, which basically puts you into an infinite number). Most of those gestures are nonsense.

Anyway, if the ratio between two limbs' closure of a motion-gap remains at a constant ratio K, and that K is above a certain threshold, AND (I've decided) both of the limbs are moving, then that is a valid gesture, and it can be cataloged. Figuring out the thresholds is a bit tricky, but it's even trickier to figure out what the closure of the motion-gap is when you don't know the end point. I tried to write some code that used the distance and angles between the joints and the torso to define the motion-gap, but found that small movements create huge taus (given that I'm calculating the first-order derivative as the difference between last frame's position and this one, which might very well be wrong).

I think I'm going to go back and rework the solution to be defined by the *opening* of a motion gap, since I have historical data. In this way, as long as a K between joints is constant, the gesture is still in motion. When the K changes and stabilizes, a new gesture is being signified. Unfortunately, I've only worked on this problem today, so I'm not quite sure what the results of that will be.
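For reference, the coupling test I'm after boils down to checking whether the ratio K between two taus stays roughly constant over a window of frames. A simplified sketch in plain C++ (the tolerance and the "is it moving" cutoff are guesses):

    #include <cmath>
    #include <vector>

    // Given per-frame taus of two motion-gaps (say, right hand and left hand,
    // each measured relative to the torso), decide whether they look tau-coupled:
    // the ratio K = tauA / tauB should stay roughly constant over the window.
    bool tauCoupled(const std::vector<double>& tauA,
                    const std::vector<double>& tauB,
                    double tolerance = 0.15)   // guessed tolerance on K's spread
    {
        if (tauA.size() != tauB.size() || tauA.size() < 3) return false;

        std::vector<double> k;
        for (size_t i = 0; i < tauA.size(); ++i) {
            // A huge tau means that gap is barely changing, i.e. the limb isn't moving.
            if (std::fabs(tauA[i]) > 1e6 || std::fabs(tauB[i]) > 1e6) return false;
            if (std::fabs(tauB[i]) < 1e-9) return false;   // avoid division by zero
            k.push_back(tauA[i] / tauB[i]);
        }

        double mean = 0.0;
        for (double v : k) mean += v;
        mean /= k.size();

        for (double v : k)
            if (std::fabs(v - mean) > tolerance * std::fabs(mean)) return false;
        return true;   // K stayed roughly constant: treat this window as one gesture
    }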

Wednesday, February 23, 2011

Homework

When the distance between the blue box's and the sphere's centroids becomes critical, stuff happens.

The sphere's position is based on distance to the Kinect (it follows the closest point).

I guess I somewhat cheated.

Kinekt Kultism

Sunday, February 20, 2011

Taking Control Over My Translations: Basic 3D Data Tracking

[2/21 UPDATE: ADDED ANOTHER VIDEO BELOW]

(Moderate) Success! This is a video of the first time I was able to get the Kinect data to track my hand in all three axes. The next phase will be to employ some gesture tracking and recognition, and possibly optical flow. Stay tuned.


Taking Control Over My Translations: Basic 3D Tracking Demo from Kevin Bleich on Vimeo.



UPDATE:
Another quick test of 3D tracking in a specified region of imaginary 3D space. Nothing too special happening here other than that. The next step is to get some velocity sensing on these regions to trigger something like drum samples. Invisible Drum Kit, Here I Come!!

Punching Box: Not So Basic 3D Tracking from Kevin Bleich on Vimeo.
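The velocity sensing could be as simple as differencing the hand position between frames and only firing a region when the hand enters it above some speed. A sketch (the region bounds and speed threshold are made-up numbers):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Returns true when the hand has just entered the region moving fast enough
    // to count as a "hit". Region bounds and the speed threshold are placeholders.
    bool drumHit(const Vec3& hand, const Vec3& prevHand, bool wasInside,
                 const Vec3& regionMin, const Vec3& regionMax,
                 float dt, float minSpeed = 1.5f /* m/s, guessed */)
    {
        bool inside = hand.x >= regionMin.x && hand.x <= regionMax.x &&
                      hand.y >= regionMin.y && hand.y <= regionMax.y &&
                      hand.z >= regionMin.z && hand.z <= regionMax.z;
        if (!inside || wasInside) return false;   // only trigger on entry

        float dx = hand.x - prevHand.x;
        float dy = hand.y - prevHand.y;
        float dz = hand.z - prevHand.z;
        float speed = std::sqrt(dx * dx + dy * dy + dz * dz) / dt;
        return speed >= minSpeed;                 // fast entry -> play the sample
    }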

Thursday, February 17, 2011

Finding the OpenCV contour of a virtual camera projection of a point cloud

As we talked about in class today, here's some video of converting the 2D projection of the point cloud created by the camera into an image and then handing it off to OpenCV to do contour finding and bounding-box stuff with. Top left is the raw depth, bottom left is the 3D point cloud with bounding box and orientation line, top right is the contour and bounding box of the 2D projection.
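The OpenCV side of this is compact once the projected points have been rasterized into a binary image. A sketch using the OpenCV C++ API (the rasterization step is assumed to have happened already):

    #include <opencv2/opencv.hpp>
    #include <vector>

    // 'mask' is a CV_8UC1 binary image: white where a projected point landed.
    cv::Rect contourBoundingBox(const cv::Mat& mask)
    {
        cv::Mat work = mask.clone();   // findContours may modify its input in older OpenCV
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(work, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        // Take the largest contour and return its bounding box.
        cv::Rect best;
        double bestArea = 0.0;
        for (const auto& c : contours) {
            double area = cv::contourArea(c);
            if (area > bestArea) {
                bestArea = area;
                best = cv::boundingRect(c);
            }
        }
        return best;
    }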

Destructive Scanning

DIY 3d scan (destructive)

Following a short brainstorming session, our group (Shahar & Molly) converged on the idea of destructive scanning (what does that say about us?). We liked the idea of consuming the object as it's getting scanned and "rebuilt" in 3D in the computer.
We immediately thought the inkscanner was a good place to start, and after the "dissolve the object using acid" idea came off the table, we settled on taking the slices concept literally and physically slicing the object we would scan. Molly brought some fancy marzipans, I got a knife, and we got to work.
The process was pretty straightforward (or so we imagined it to be):
  1. Slice an object
  2. Take some photos
  3. Run the photos through the Fluid Scanner
The first problem we ran into was that it was hard to cut the marzipan into thin enough slices to get a good resolution on that axis. That might be remedied by simply choosing more easily sliceable objects. The other problem was that the Fluid Scanner didn't really work, and the source code did not compile either (it used an older OF version). We struggled with it for a while before deciding to try something else. Molly went for After Effects, while I tried to write some Processing code to replicate the desired effect.
Here's the code we ended up using.
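For reference, the core slice-stacking idea, independent of the code linked above, is just: threshold each slice photo and give every object pixel a z equal to the slice index times the slice thickness. A rough sketch using OpenCV (filenames and slice thickness are hypothetical):

    #include <opencv2/opencv.hpp>
    #include <cstdio>
    #include <vector>

    struct Point3 { float x, y, z; };

    int main()
    {
        std::vector<Point3> cloud;
        const int numSlices = 20;          // however many slices were photographed
        const float sliceThickness = 2.0f; // mm per slice, a guess

        for (int i = 0; i < numSlices; ++i) {
            char name[64];
            std::snprintf(name, sizeof(name), "slice_%02d.jpg", i); // hypothetical filenames
            cv::Mat img = cv::imread(name, cv::IMREAD_GRAYSCALE);
            if (img.empty()) continue;

            // Separate object from background (or THRESH_BINARY_INV, depending on backdrop).
            cv::Mat mask;
            cv::threshold(img, mask, 128, 255, cv::THRESH_BINARY);

            for (int y = 0; y < mask.rows; ++y)
                for (int x = 0; x < mask.cols; ++x)
                    if (mask.at<unsigned char>(y, x) > 0)
                        cloud.push_back({(float)x, (float)y, i * sliceThickness});
        }
        std::printf("%zu points\n", cloud.size());
    }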






carrot from Molly Schwartz on Vimeo.


carrotae from Molly Schwartz on Vimeo.

Addition to the snowman

Structured light experiments
by Diana Huang, Ivana Basic, Eszter Ozsvald, Nikolas Psaroudakis and Yang Liu

While we were playing with snow, we also dabbled with the structured light example that Kyle provided for us. We thought this would be simple to implement; however, we experimented a few times before we found a setup that produced a structured light scan we were satisfied with. Following the positioning instructions is key. After viewing this video (http://vimeo.com/13100293), we thought we could try positioning the camera and projector together directly above the object we wanted to scan. Those scans didn't capture as much information as we wanted. We found that positioning the camera and the projector at angles tangential to the surface of what we wanted to scan created the best scans. (We should have known this in the first place, per the Instructables, but we wanted to test some other configurations.) Also, we found that instead of a white backdrop, a black backdrop with lighter clothing created the best scanning environment. Below is an interpretation using a few of our scans from our "experimentation" period. Some of the textures created are very visually interesting. Featuring Ivana Basic and an unknown ITP mask. Music by Rastko.


3D, OpenCV, and Me - Homework#2

3D, openCV, and me - 3Dsav#2 from Kevin Bleich on Vimeo.



This assignment will go down in infamy. I am pleased with my results, but there is still so so so so much more to explore with all this data.

So far, I have been able to calculate and draw the bounding box and centroids. Next, I experimented with some blob detection and contour finding. Once I had something cogent together, I figured an easy way to track small blobs in 3D would be to change the threshold amount on the contour search along with my movement back and forth along the z-axis. Here I am only keeping the brightest (nearest) bits, so, for example, if I were tracking fingers on both hands, as soon as one hand is farther back than the other, the dual track is lost. Must work on this.
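One way to keep the threshold pinned to whatever is nearest, rather than tying it to my own z position, would be to find the minimum depth in the frame and only keep a thin band behind it. A sketch over a raw depth buffer (the band width is a guess):

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Keep only pixels within 'band' millimeters of the nearest valid depth.
    // 'depth' is a row-major depth buffer in mm; 0 means "no reading".
    std::vector<uint8_t> nearBandMask(const std::vector<uint16_t>& depth,
                                      uint16_t band = 100 /* mm, guessed */)
    {
        uint16_t nearest = 0xFFFF;
        for (uint16_t d : depth)
            if (d != 0 && d < nearest) nearest = d;

        std::vector<uint8_t> mask(depth.size(), 0);
        for (size_t i = 0; i < depth.size(); ++i)
            if (depth[i] != 0 && depth[i] <= nearest + band)
                mask[i] = 255;                    // candidate "blob" pixel
        return mask;
    }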

For some quick interaction I decided I would use the blobs to control the pan around the 3D point cloud, which is what you are seeing here. Apologies for the jumpy graphics; I think I was getting some noise from the Kinect, because it looked a lot smoother than that in person.

In my research of openCV and motion tracking, I realized I knew zilch about coding for gesture recognition. I think I am ready for that challenge. I think I'll be fine though since I really didn't know how to do what I did in the video a week ago.

At the top of my list for questions tomorrow is to ask about great resources for learning how to track and utilize gestures, as well as learn more about how I might be able to track multiple discrete objects in 3D.

I have a few ideas for visualization as well, but I am getting way ahead of myself. 3D baby steps.

Kevin

Wednesday, February 16, 2011

Homework #2 - High Five



My first experiments with the Kinect.

Calculated the bounding box and centroid. When you make a high-five motion, it scrubs through a video of me giving you a high five.

Bounding Box homework

Mike Knuepfel and I worked on our homework together. I'll do a blog post soon explaining how we created the bounding boxes, the centroid, and the center of the bounding box, but in the meantime, here's a video: http://vimeo.com/nisma/3dsav-hw2. I added opticalFlow files and made the rotations dependent on the movement of the subject.

Scan a fruit basket with milk?



Yes

1st Week Assignment "Snowman Scanning"

Skeleton Tracking with OSCeleton

Got up and running with OSCeleton and Processing. Using the joint location data to draw a stickman in 3-space and then using the relationships between the various joints to control the movement of a camera around the model: the distance between the hands controls zoom, the camera follows the right hand, putting both hands above the head rotates the camera, and putting both hands below the hips rotates the camera the opposite way:
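The camera logic boils down to a few joint comparisons per frame. Roughly (plain C++ rather than the actual Processing sketch; the constants are arbitrary, and y is assumed to increase upward as in the OpenNI joint data):

    struct Vec3 { float x, y, z; };

    struct CameraState {
        Vec3  target;        // point the camera looks at / follows
        float distance;      // zoom
        float angle;         // orbit angle around the model, radians
    };

    // One update per skeleton frame, driven by a handful of joints.
    void updateCamera(CameraState& cam,
                      const Vec3& leftHand, const Vec3& rightHand,
                      const Vec3& head, const Vec3& leftHip, const Vec3& rightHip)
    {
        float dx = rightHand.x - leftHand.x;
        float dy = rightHand.y - leftHand.y;
        float dz = rightHand.z - leftHand.z;
        float handGap = dx * dx + dy * dy + dz * dz;   // squared distance is enough

        cam.distance = 1.0f + 0.001f * handGap;        // hands apart -> zoom out
        cam.target   = rightHand;                      // camera follows the right hand

        float hipY = 0.5f * (leftHip.y + rightHip.y);
        if (leftHand.y > head.y && rightHand.y > head.y)
            cam.angle += 0.02f;                        // both hands above head: rotate one way
        else if (leftHand.y < hipY && rightHand.y < hipY)
            cam.angle -= 0.02f;                        // both hands below hips: other way
    }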



Full write-up here: Skeleton Tracking with Kinect and Processing.

Sunday, February 13, 2011

3d scanning at union square station

We took the Kinect to Union Square for candid shots, mixing HD SLR footage and the depth image in a custom openFrameworks application.


Thursday, February 10, 2011

Week 2: Homework

This week's homework consists of four separate components.
  1. Find the bounding box center and centroid for 3d data. Visualize their relationship.
  2. Pick a computer vision technique, and extend it to 3d. For example, apply a blur or morphological filter to the 3d data and think about how it is affecting the data. For advanced students, consider writing an optical flow algorithm, extending contour detection/connected components to 3d, or using the depth image to inform a face tracking algorithm.
  3. Track a single gesture in 3d. This may use the results of steps 1 and 2. For example, using the result of step 1 to recognize a "superman" gesture.
  4. Post a summary of the information from last week's project on the class blog. You may host it elsewhere, but you need to at least provide a link and one picture here. The description should provide enough information that someone with some ingenuity could recreate it, but it doesn't have to be written as a tutorial.
A basic project for working with 3d data is available on the class github. For instructions on how to use the code, see the readme on that page.
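For item 1, the two quantities differ in a simple way: the centroid is the mean of all the points, while the bounding box center depends only on the extremes along each axis. A minimal sketch (assumes a non-empty cloud):

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Centroid: average of every point in the cloud.
    Vec3 centroid(const std::vector<Vec3>& pts)
    {
        Vec3 c{0, 0, 0};
        for (const Vec3& p : pts) { c.x += p.x; c.y += p.y; c.z += p.z; }
        float n = (float)pts.size();
        return {c.x / n, c.y / n, c.z / n};
    }

    // Bounding box center: midpoint of the min and max along each axis.
    Vec3 boundingBoxCenter(const std::vector<Vec3>& pts)
    {
        Vec3 lo = pts[0], hi = pts[0];
        for (const Vec3& p : pts) {
            lo.x = std::min(lo.x, p.x); hi.x = std::max(hi.x, p.x);
            lo.y = std::min(lo.y, p.y); hi.y = std::max(hi.y, p.y);
            lo.z = std::min(lo.z, p.z); hi.z = std::max(hi.z, p.z);
        }
        return {(lo.x + hi.x) / 2, (lo.y + hi.y) / 2, (lo.z + hi.z) / 2};
    }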

Week 2: Computer Vision and 3d

Resources

Super long, in-depth, free book on computer vision by a researcher at Microsoft who worked on Photosynth: Computer Vision: Algorithms and Applications

Learning OpenCV is one of the best resources for learning about computer vision in a practical, application-oriented way.

Another really good way to learn OpenCV is just reading through the tutorials and documentation on the OpenCV website.

The Pocket Handbook of Image Processing Algorithms in C is great for little tips and descriptions of computer vision and image processing algorithms, though it's a little buggy sometimes.

Computer Vision Test Videos
from Theo and other contributors. There are lots of other websites with test videos for computer vision, for a variety of applications/domains.

References

Kumi Yamashita is a NYC-based artist who has worked with 3d volumes projected as 2d shadows. Also see work from Larry Kagan.

2d shadows/forms can also be projected into 3d spaces. See work from Justin Manor and Tamas Waliczky.

Because 3d is still relatively new, it can be helpful to think about how the visual aesthetic is related to older forms. Consider the work of Sophie Kahn compared to Norman McLaren's Pas de Deux. Or compare the recent Moullinex video to sculpture by Antony Gormley.

Sound Scan: 3D scanner using falling BBs and sound

Rough, DIY 3D scan using a grid of holes, falling BBs, a photo interrupter, and a microphone (Greg, Eric, Jeff, Zeven, and Molmol):



Full write-up here: Sound Scan.

Cool 3d Sensing Link - Alligator Embryo


This is a cool video showing a developing alligator embryo. The scanner can not only sense the outer skin of the alligator, but also internal structures like the skeleton.

Homework #1

Class Syllabus

Class Description

This course will explore recent developments in 3d scanning technology and the tools and techniques for collecting, analyzing, and visualizing 3d data. Once relegated to the realm of academic and military research, 3d scanning has recently been made available to amateurs through DIY implementations like DAVID laser scanner, or, in the case of Kinect, through open source reverse engineering of cheap consumer hardware. We will cover different methods of 3d input, including structured light, LIDAR, time of flight, stereo matching, and optical triangulation, and focus on techniques for organizing and collecting data, creatively visualizing it, and using it in an interactive context. This course will be taught using openFrameworks, a C++ toolkit for creative coding. While the class will be highly technical and code-heavy, there will be a strong emphasis on the poetic potential of this new form of input. This two-point course meets for the first seven weeks of the semester.

Schedule

Week 1 (February 3rd)
Introduction to 3d scanning technologies, including LIDAR, structured light and Kinect, stereo and multiview stereo, optical triangulation, and others. The assignment will focus on creating a 3d scan in a group.

Week 2 (February 10th)
Groups present their 3d scans from last week. Continue discussion on scanning techniques. Start working with scan data for interaction, start exploring 3d for interaction. The assignment will focus on 3d as an input for interaction.

Week 3 (February 17th)
Short presentations of work from previous week on interaction. Continued discussion of computer vision for interaction in 3d, handling and processing 3d information.

Week 4 (February 24th)
Start discussing methods of processing scan data for visualization/rendering, including voxels, point clouds, and depth maps. This will lead into a discussion of processing for fabrication on laser cutters, 3d printers, and other devices including non-computational systems. The assignment will focus on recreating specific looks, and producing your own look on a screen or as a fabricated model.

Week 5 (March 3rd)
Note: Kyle will be absent for this class.
Presentations of models from previous week. Discussion of projection mapping and augmented reality systems that take advantage of information from 3d scanning. The class will conclude with a discussion on potential final project ideas. Students are expected to begin working on their final project at this point.

Week 6 (March 10th)
Presentation of intermediate work on final project, followed by discussion and problem solving. This class will primarily be guided by the subjects and problems students encounter while working on their final projects.

Week 7 (March 17th)
Presentation and discussion of final projects.

Assignments and Grading

Assignments will be given at the end of every class. Some assignments will require students to post to the class blog http://3dsav.blogspot.com/, which all students will be given an invitation to join. Besides the weekly assignments, there will also be a final project due at the end of the class.

In order to pass the class, students must complete the assignments, the final project, and attend class. A student will fail if they miss more than one class, miss more than one assignment, or fail to present a completed final project.

Resources

Notes from each class will be posted by the instructors to the blog. The syllabus can be found in its most up-to-date form on the class blog at this link.

Wednesday, February 9, 2011

Robert Lazzarini

In thinking about using/visualizing 3d information:

Robert Lazzarini warps familiar objects and then re-casts them in the original materials.  The skulls are cast in ground bone, so they are pale and matte.  Visually they are difficult to comprehend.  You want to physically hold them in your hands to understand them, because they look flat against the wall instead of actually three dimensional. 

http://www.robertlazzarini.com/

NASA's Twin Stereo Probes



"This is a big moment in solar physics," says Vourlidas. "STEREO has revealed the sun as it really is--a sphere of hot plasma and intricately woven magnetic fields."

Each STEREO probe photographs half of the star and beams the images to Earth. Researchers combine the two views to create a sphere. These aren't just regular pictures, however. STEREO's telescopes are tuned to four wavelengths of extreme ultraviolet radiation selected to trace key aspects of solar activity such as flares, tsunamis and magnetic filaments. Nothing escapes their attention.



- more via NASA

Using a Kinect to control your Second Life avatar

Thai Phan, an engineer at the MxR Lab at the USC Institute for Creative Technologies, recently figured out a way to integrate the OpenNI toolkit with Second Life. It's not quite as smooth as I was hoping when I first saw it, but it's definitely some good progress that I'm looking forward to trying out. At the moment, it only allows gestural input from the Kinect to be assigned to predetermined gestures in Second Life. Regardless, it's a good step forward in exploring the sense of agency that can come from a more fluid type of control.

The code is available at the link above and you can check out the video here:

Converting 2-D photo into 3-D face for security applications and forensics

It is possible to construct a three-dimensional (3D) face from flat 2D images, according to research published in the International Journal of Biometrics this month. The discovery could be used for biometrics in security applications or in forensic investigations. - PhysOrg

Sounds pretty awesome, but I couldn't access the journal to see what it actually looks like.

In other news, NASA's STEREO probe enables 3D view of the sun.

Slit-scan virtual camera

By taking a video with camera motion and then recombining various frame slices into a single image, we can simulate various camera positions and orientations that were not captured in the initial video.
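A bare-bones version of that recombination: grab the same pixel column from every frame of a moving-camera video and lay the columns side by side. A sketch using OpenCV's VideoCapture (the input filename and the choice of the center column are arbitrary):

    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::VideoCapture cap("pan.mov");   // hypothetical input video with camera motion
        if (!cap.isOpened()) return 1;

        std::vector<cv::Mat> columns;
        cv::Mat frame;
        while (cap.read(frame)) {
            int x = frame.cols / 2;                    // always take the center column
            columns.push_back(frame.col(x).clone());   // one pixel-wide slice per frame
        }
        if (columns.empty()) return 1;

        cv::Mat slitScan;
        cv::hconcat(columns, slitScan);                // columns side by side = new view
        cv::imwrite("slitscan.png", slitScan);
    }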

Shmuel Peleg et al's OmniStereo: 3D Panorama is a good example. Much more information including other applications on this type of technique can be gleaned from Peleg's hour-long Google Tech Talk.

More information about slit scanning and examples of artwork can be found on Golan Levin's informal catalogue of slit-scan video artworks and research.

3d/4d ultrasound scans


I don't know if this really responds to the topic, but it is by far the most interesting thing I've found when it comes to 3d scans... it's quite amazing, and the aesthetics too; no dots involved here. Another surprising thing for me is that there's a whole video archive on the internet about people who are yet to be born...


the good part here starts from 1:00...



and on this one from 0:15 further on.......

height estimation with kinect

If you know everyone's height in a space, what would you do with that information?


Maybe you could sort them, project an arrow on the floor telling them where to move to. Or you could pair them up with people of similar heights.


natural fluid scanning

It would be neat to mashup Friedrich's fluid scanning work with the natural tide in this bay:

Slit Scan



I have no idea what's going on here... anyone know how this is done?

Sunday, February 6, 2011

Databending with 3d

What happens when you corrupt 3d data? Some work with Poser by Krista Hoefle, found in the Flickr glitch art pool:


Thursday, February 3, 2011

3D modeling and texture mapping from video camera



ProFORMA uses feature tracking on an object moving in front of a fixed video camera to extract geometry with matching texture mapping from the camera image, then uses the motion of the object to control the motion of the model in software.

"Normally, scanning in 3D requires purpose-made gear and time. ProFORMA lets you rotate any object in front of the camera and it scans it in real time, building a fully 3D texture mapped model as fast as you can turn an object. Even more impressive is what happens after the scan: The camera continues to track the objsct in space and matches it’s movement instantly with the on-screen model." -- Wired: Amazing Software Turns Cheap Webcam Into Instant 3D Scanner | Gadget Lab | Wired.com

Week 1: Homework

Due next week, you have three assignments.

Make a 3d Scan
In groups of 4, make a 3d scan. The more DIY and low-tech, the better. It just can't be visualized with a Rutt-Etra height map. Depending on your technical expertise, you may:
  1. Invent a 3d scanner. It could be super-low-resolution, or single-image instead of realtime.
  2. Reimplement a technique that already exists, like using a shadow, laser, or binary subdivision.
  3. Get code running for a technique that someone else created, like my structured light code or OpenCV stereo matching.
  4. Use a tool that already exists, extract and display the data in your own way. Like using Photosynth, Kinect, or David Laserscanner.
We'll share the results of your work in the next class. Because this assignment is so open-ended, you can use whatever tools/environment you like. Future assignments will be written in openFrameworks only so we can share source code.

Post a Link
Post a cool project related to 3d scanning to this blog with a sentence or two describing it.

Brainstorm for your final project
Start thinking about where you want to take all this! How does all this relate to your work? What techniques, ideas, or aesthetics are you most interested in? You don't need to turn anything in for this, but feel free to email us with questions and ideas or talk to us in person.

Class Structure

  1. This class is going to be really hard, but hopefully really fun and interesting too.
  2. Weekly smaller projects: we'll have assignments each week, sometimes in groups, sometimes individually.
  3. One final project: you'll be expected to make something "bigger" that is aligned with your practice/interests using 3d that you can share when the class is done in mid March.
  4. This blog will be used by the instructors (Zach and Kyle) for posting assignments and info; everyone else will be posting links and project updates.
  5. The syllabus isn't yet finished, but expect that your overall grade will be based a third on participation, a third on the smaller projects, and a third on your final project.

Week 1: Intro to 3d

intro to input techniques
radiohead "house of cards" video
data and source code for visualizing it, in processing
see the "making of" for more info
things people made with the data: pin art, lego visualizations...
it got engadget play, but yesterday so did a video made with kinect
we have the tools, but what should we do with them?

time of flight infrared
wikipedia has a good overview
example video is 320x240, poor fps. compare to kinect at 640x480 and 30 fps. this video comes from the baumer tzg01

lidar
flying over netherlands in a plane and rendering the massive point cloud
newer (last few years) lidar can run at high speeds, e.g., SICK scanners or the velodyne scanners
velodyne was used for radiohead video (outdoor scenes and party scene)
doing a lidar scan looks something like this
can you make your own lidar scanner? maybe starting with a laser rangefinder?
lidar is similar to sonar + radar (light detection and ranging, sound navigation and ranging, radio detection and ranging). lidar uses pulses of light, radar uses pulses of radio-frequency light, sonar uses pulses of sound. maybe you can make a diy sonar with an ultrasonic rangefinder and a rotating sound-reflective plate?

structured light
major component of the radiohead video. 'no cameras or lights' is a lie. one camera, one projector, used for all the close up shots.
some of my work: started out with gray code subdivision, moved on to three phase scanning. started getting realtime results from the scanner. the process of scanning looks like this. a year ago it started running around 60 fps with some crazy hacks. i shot and visualized 3d data for the broken social scene video. recently, i collaborated with three other artists on the janus machine
the highest resolution structured light comes from debevec, regularly used in hollywood. see 'the digital emily project'
maybe the lowest resolution structured light comes from this iphone app which is actually more similar to 'shape from shading' assuming it calculates surface normals and propagates them.
you can detect edges with multiple flashes (there is a quick explanation in this generally mindblowing lecture from ramesh raskar).
if you don't put any light in the scene, you might still be able to use the known lighting somehow (shape from shading).

kinect
really, a kind of 'structured light', but using an infrared pattern
the video with 2 million views that explained to everyone what it means to have a 3d camera
primesense has developed some great skeletonization software that works with depth images.
super oldschool skeletonization: muybridge stop motion, markers, long exposure
normal approach to skeletonization: motion capture systems, which no one ever knows how to get working
newschool is kinect. passive interaction (no markers needed) can be used for controlling robots!

coordinate measurement machines
lego 3d scanner using touch
this is how pixar did (and still does) it, just with more expensive machines

optical triangulation
david laserscanner is not open source, but basic version is free. fairly long and involved process
lasers go well with turntables. here are some projects that use turntables with varying levels of diy.
if you need a line, you can use a wine glass stem as a lens.
or you can use the shadow of a stick instead. check out the byo3d course notes.
you don't have to automate the movement though, you can be the movement.
for a sort of 'orthographic triangulation', use milk or ink instead of a laser.
carefully watch friedrich's video from the eyebeam mixer.
what is implied by the process of making a 3d scan?
is it theatrical (pool of ink) or is it passive (kinect, airport 3d scanners)?
do you have to pose, and what does that pose mean? (see 'hands up' essay)

stereo vision
point grey bumblebee2 camera, a couple thousand dollars.
used by golan levin for double-taker/snout (check out the debug screen)
used by joel gethin lewis for "to the music" music video for colder, and for "echo" dance performance at uva.
lots of other stereo vision software out there, not just point grey. amazing collection of different algorithms and their accuracy at middlebury. opencv has great stereo matching routines, but can be slow if not tuned well.
state of the art for face scanning might be mesoscopic augmentation with stereo matching.
the main advantage of stereo over kinect is it can do daylight. the main disadvantage is that sparse scenes are hard to reconstruct, and the data is much noisier.

structure from motion
a bit like stereo, but the camera moves over time and the scene stays still.
some are even online (realtime) like ptam.
there is a list of a bunch of sfm software at the end of the wikipedia page.
voodoo is free, boujou is $10k.

multiview stereo
photosynth can be used for large scenes (like all of barcelona) or small scenes (like a single face).
the photosynth data can be intercepted from the server and used in other software, like processing. there are probably better ways to do the export now than manually with wireshark.
photosynth is based on a tool called bundler.
the results of bundler, a sparse point cloud, can be fed into cmvs to get a dense point cloud. cmvs used to be called pmvs2. it's pretty high resolution.

mri + ct scan
most hardcore use of 3d scanning: print yourself a new heart, save your life.
mri is expensive (requires massive, expensive magnets) and ct is dangerous (x-rays). not really diy. but they do give you voxels: a full 3d space that sees 'inside' things, not just the shape of their surface from one perspective
like an mri, sharks sense em fields. maybe something like this (kinect + em sensor)?

other techniques
cameras have all kinds of properties that vary with respect to depth. one is defocus, and you can use this to estimate depth.

intro to output techniques

3d on 2d screens
if you're displaying 3d on a 2d screen you might get the wiggle. the wiggle can creep into the camera movements of any 3d video, because our brain doesn't really realize it's 3d otherwise.
sometimes it can be useful to have a 3d controller for a camera in an interactive context.
or maybe other cues like defocus (depth of field) can be used to show depth? i've done a little work on this inspired by discussions with open ended group.

laser cutting
olafur eliasson's housebook
jared tarbell's work, making slices and making height maps

3d printing and volumetric laser etching
sophie barret-kahn has explored a bunch of techniques including crystal etching.
services like shapeways and ponoko offer 3d printing for relatively cheap on small scales.

cnc milling
foam can be milled to create reliefs, and the reliefs can be used to make plaster molds
and high end 5-axis cnc machines can do pretty much anything you can imagine

OF workshop

here's the pirate pad from the workshop on the 30th. it's got some links in there, notes from what we talked about, and some code snippets we used to mashup ofxXmlSettingsExample with ofxKinect.