Thursday, March 31, 2011

Landman performance visualizer



For my final project, I created a performance visualization tool for 8-bit artist Nullsleep. The idea for the piece was to use Kinect depth information to create a changing landscape, and then to have a 3D model of Nullsleep himself get bounced around as the landscape shifts. To achieve this effect, I used the Bullet Physics C++ library, which is frequently used in games and procedural computer animation to create physical environments that move realistically. I navigated the incredibly complex Bullet Physics API to create a ground plane whose geometry is determined dynamically from the incoming depth information from the Kinect.
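For reference, here is a minimal sketch (not the actual Landman code) of how a Kinect depth buffer can drive a Bullet heightfield ground plane with btHeightfieldTerrainShape. The 640x480 resolution, millimeter depth units, and scaling constants are assumptions made for illustration.

// Sketch: build a Bullet heightfield from a Kinect depth buffer.
// Assumes depthMM is a 640x480 array of depth values in millimeters.
#include <algorithm>
#include <vector>
#include <btBulletDynamicsCommon.h>
#include <BulletCollision/CollisionShapes/btHeightfieldTerrainShape.h>

static const int W = 640, H = 480;
std::vector<float> heights(W * H); // must stay alive as long as the shape does

btRigidBody* makeGroundFromDepth(const unsigned short* depthMM) {
    // Convert raw depth (mm) into heights (meters), inverted so that
    // closer objects push the terrain up.
    float minH = 1e9f, maxH = -1e9f;
    for (int i = 0; i < W * H; i++) {
        float h = (4000 - (int)depthMM[i]) * 0.001f; // crude mapping, tune to taste
        heights[i] = h;
        minH = std::min(minH, h);
        maxH = std::max(maxH, h);
    }

    // Bullet reads the height data in place, so refilling `heights` each
    // frame lets the terrain follow the incoming depth image.
    btHeightfieldTerrainShape* terrain = new btHeightfieldTerrainShape(
        W, H, &heights[0], 1.0f, minH, maxH,
        1 /* up axis = Y */, PHY_FLOAT, false);
    terrain->setLocalScaling(btVector3(0.01f, 1.0f, 0.01f)); // shrink the grid

    btDefaultMotionState* motion = new btDefaultMotionState();
    btRigidBody::btRigidBodyConstructionInfo info(0.0f, motion, terrain); // static body
    return new btRigidBody(info);
}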

This project will continue to evolve as I work with Nullsleep to improve the aesthetics toward a premiere at Blipfest this summer.

Landman code on Github

Differential Invariants on the Depth Image (documentation in progress / unfinished)

One of the most appealing things to me about the Kinect is that we can use the full arsenal of differential geometry to analyze and extract features from the detected surface.

For example, the gradient of the depth function gives us the normal of the surface at each point (x, y, z).

An example of finding the surface normals on the image of Lena can be seen below. Note that the intensity values of this 2D image are treated as elevation values.
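As a rough illustration of the idea (interior pixels only, no bounds checking), the normal at each pixel can be estimated from the gradient by central differences:

// Sketch: estimate the surface normal of an elevation image z = f(x, y)
// using central differences. elev is a w*h array of heights (depth or intensity).
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 normalAt(const float* elev, int w, int x, int y) {
    // Gradient of the height function.
    float dzdx = (elev[y * w + (x + 1)] - elev[y * w + (x - 1)]) * 0.5f;
    float dzdy = (elev[(y + 1) * w + x] - elev[(y - 1) * w + x]) * 0.5f;

    // The (unnormalized) normal of the surface (x, y, f(x, y)) is (-df/dx, -df/dy, 1).
    Vec3 n = { -dzdx, -dzdy, 1.0f };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    n.x /= len; n.y /= len; n.z /= len;
    return n;
}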
Code at: GitHub

Final Project

For the final project we worked with Eszter.
Our code can be downloaded from github here.

We thought it might look nice to create some branching object that traces / follows the human body and has an anthropomorphic look.

We worked a lot on it, but the outcome was not at all what we had hoped for. Although the graphics look nice and interesting on their own, the result is not what we would have liked when applied to data from the Kinect.

We used the limb "begin / end" locations from the skeleton to create a set of 3D paths for our branching algorithm to follow. For the path following, we used Daniel Shiffman's path-following Processing example, which we ported to openFrameworks and extended from 2D to 3D.
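A sketch of the core steering step in 3D (illustrative names, not our exact port), with the segment endpoints coming from a limb's begin/end joints:

// Sketch: seek a point ahead on a 3D path segment, loosely following
// Shiffman's path-following example. a and b are the limb begin/end positions.
#include "ofMain.h"

ofVec3f steerAlongSegment(const ofVec3f& pos, const ofVec3f& vel,
                          const ofVec3f& a, const ofVec3f& b,
                          float maxSpeed, float maxForce) {
    // Predict where we will be a short time from now.
    ofVec3f predict = pos + vel.getNormalized() * 25.0f;

    // Project the predicted position onto the segment a-b.
    ofVec3f ab = b - a;
    float t = ofClamp((predict - a).dot(ab) / ab.lengthSquared(), 0.0f, 1.0f);
    ofVec3f normalPoint = a + ab * t;

    // Aim a little further along the path than the projection.
    ofVec3f target = normalPoint + ab.getNormalized() * 10.0f;

    // Standard seek: desired velocity minus current velocity, clamped.
    ofVec3f desired = (target - pos).getNormalized() * maxSpeed;
    ofVec3f steer = desired - vel;
    steer.limit(maxForce);
    return steer;
}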
Some of the results follow:


On its own, our algorithm produces a visually rich outcome, as can be seen in the following image.

Preliminary Music Visualizer

For my final project in 3dSav, I wanted to make a music visualizer from the Kinect point cloud data. To analyze the music being played, I used the oF FFT visualizer example. I then mapped the FFT output data onto the Kinect 3D depth image, adding to or subtracting from the depth based on the FFT data from the music playing. To make the visualizer a little more interesting, I added a feature that multiplies the dancer's body in the XY and Z directions.
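The mapping step looks roughly like the sketch below (illustrative only, not the project source; the 640-pixel width and the band-per-column choice are assumptions):

// Sketch: displace Kinect point cloud depths with FFT magnitudes.
// fft holds nBands smoothed magnitudes; points came from the depth image.
#include "ofMain.h"

void displacePointsWithFFT(vector<ofVec3f>& points, const float* fft,
                           int nBands, float strength) {
    for (int i = 0; i < (int)points.size(); i++) {
        // Pick a band based on where the point falls horizontally (0..640).
        int band = ofClamp((int)ofMap(points[i].x, 0, 640, 0, nBands - 1), 0, nBands - 1);
        // Push the point toward/away from the camera with the band's energy.
        points[i].z += fft[band] * strength;
    }
}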



Next Steps:

1) Add Color
2) Work more on camera positioning and rotation
3) Automatic triggering of visual effects and rotation
4) Gesture control to trigger specific transformation sequences
5) Improve the pause/delay function for multiple point clouds
6) Experiment with different ways to map the FFT data to the point cloud.

Source Code:
http://itp.nyu.edu/~mk3321/3dsav/visualizer.zip

Building a sculpture with Kinect IR structured light



Sculpture Work by Molmol
with openFrameworks + MeshLab + SolidWorks




Objectifying Breath

For our final project, Ivana and I wanted to focus on the idea of transferring breath to objects. Using the Kinect, OpenNI, a weighted piezo sensor, and an Arduino, the user breathes a pattern into the piezo, and the rhythm and speed of that breath are captured on screen. 'Breath' is depicted as spherical objects located throughout the digital environment. Upon breathing into the piezo, an object of breath 'attaches' to the user, and the user is able to interact and breathe with other objects of breath around them. As the user approaches an object, the pattern of breath is transferred to it; as they back away, the pattern diminishes.
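For a sense of the plumbing, a minimal sketch of the piezo side might look like this (hypothetical port name and scaling; raw readings are assumed to arrive over serial as newline-terminated integers):

// Sketch: read piezo values from an Arduino over serial in openFrameworks
// and smooth them into a 0..1 "breath" level.
#include "ofMain.h"

class BreathReader {
public:
    ofSerial serial;
    float level;   // smoothed breath intensity
    string buffer;

    void setup() {
        level = 0;
        serial.setup("/dev/tty.usbmodem1411", 9600); // port name is an assumption
    }

    void update() {
        while (serial.available() > 0) {
            int b = serial.readByte();
            if (b == OF_SERIAL_NO_DATA || b == OF_SERIAL_ERROR) break;
            if ((char)b == '\n') {
                int raw = ofToInt(buffer);              // piezo reading, 0..1023
                float target = ofClamp(raw / 1023.0f, 0, 1);
                level = ofLerp(level, target, 0.1f);    // smooth the rhythm
                buffer = "";
            } else {
                buffer += (char)b;
            }
        }
    }
};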

Below is a video capturing the experience:




Untitled from Diana Huang on Vimeo.

Final project and all documentation

Greetings class, I just finished posting all the code and videos for all assignments, including the final with Matt, on my blog. Here's the link: http://nismazaman.com/itp/?cat=16.

Wednesday, March 30, 2011

Molly Recap

My first step into 3dsav was the destructive marzipan scan with Shahar.  I did some simple projects, starting from the depth visualizer code: creating a cube that changed colors,


then adding dot particles from a center point when a person moved past a z-space threshold.



In the course of things I broke a lot of projects, especially moving back and forth between 062 and 007, but that's been really useful for getting my sea legs in oF. But I spent most of the semester working on tests for a larger project called The Hidden Kingdom. My final for 3dsav includes getting spheres set up in a 3D space aligned with the Kinect space, defining boxes of space that determine interactions with people, a first pass at the lighting, and the reaction of the spheres. When a person comes in contact with a cube of space, all of the spheres in that space turn red and start to wobble upward... Code is here!
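The interaction cubes boil down to an axis-aligned box test against the Kinect points; a stripped-down sketch (illustrative names, not the project code):

// Sketch: a "cube of space" in Kinect coordinates that reports whether any
// point from the person's point cloud falls inside it.
#include "ofMain.h"

struct InteractionCube {
    ofVec3f minCorner, maxCorner; // same units as the Kinect points
    bool active;

    InteractionCube() : active(false) {}

    bool contains(const ofVec3f& p) const {
        return p.x >= minCorner.x && p.x <= maxCorner.x &&
               p.y >= minCorner.y && p.y <= maxCorner.y &&
               p.z >= minCorner.z && p.z <= maxCorner.z;
    }

    void update(const vector<ofVec3f>& cloud) {
        active = false;
        for (int i = 0; i < (int)cloud.size(); i++) {
            if (contains(cloud[i])) { active = true; break; }
        }
        // Spheres assigned to this cube can check `active` and switch to
        // their red / wobbling state.
    }
};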

In action



Testing with Interaction Cubes Outlined





hiddenkingdomtestt from Molly Schwartz on Vimeo.

Holographic Warpaint



Last spring I worked on a play. It was an adaptation of Samuel Delany's epic science fiction novel, Dhalgren. This book is crazy, and the play was crazy - I did the sound design and worked on the video as well. This is the kind of book that sticks with you (the wild dense prose, the imagery, the....extremely detailed pornographic sex), and there were many things from the book that weren't realized in the play and I'm holding onto them. The book is set here - well, in a city somewhere in America - after an unnamed disaster has taken place. The city is a wasteland, but people are still living there. They live for free in parks, or squat in apartments where nothing works.

Gangs, known as the Scorpions, run the streets. This is the element I'm thinking about. Members of the Scorpions wear projector necklaces. When they press a button on the projector a holographic animal surrounds their bodies. Like holographic warpaint. One of the characters is known as Dragon Lady, because her projection is a dragon. One of them is a baby dinosaur - which I love. One of them doesn't work correctly and appears as an amorphous blob. I think it's weird that I can't find an image of this somewhere. I feel like it's one of the most memorable images from the book - gangs of fierce, oversized, holographic animals walking through the streets.

So, I made a failed attempt at this last semester in ICM using color tracking with lame colored LEDs strapped to my body. When the Kinect came out, I knew it was a solution, which is why I'm in this class.

I had previously envisioned a solid, neon colored animal shape for these shields, and thought of using skeleton tracking with OpenNI to animate a 3D character. I was nervous about the animated character, though, and pretty sure it would look dumb.

A simple, and I think effective, solution occurred to me late in the game. I reimagined the design of the holograms - they could be skinned as the creatures rather than shaped like them. I modified an example from class to remove the background information and then map pixels from existing images onto the depth image from the Kinect. I projected this onto two layers of mesh that I stood behind, producing a faux 3D projection effect. I tried a couple of images - two dinosaurs and a lizard.
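The depth-keyed skinning step is roughly the following (a sketch with assumed pixel formats and thresholds, not the exact class example): keep only pixels within a near/far depth band and color them from the creature image at the same location.

// Sketch: drop the background using the depth image, then "skin" the
// remaining pixels with a source image stretched to the same size.
#include "ofMain.h"

void skinWithImage(const ofShortPixels& depth, ofPixels& skin, ofPixels& out,
                   unsigned short nearClip, unsigned short farClip) {
    int w = depth.getWidth(), h = depth.getHeight();
    out.allocate(w, h, OF_IMAGE_COLOR_ALPHA);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            unsigned short d = depth[y * w + x];
            bool body = (d > nearClip && d < farClip);  // inside the depth band?
            // Sample the creature image, stretched to the depth image size.
            int sx = x * skin.getWidth() / w;
            int sy = y * skin.getHeight() / h;
            ofColor c = body ? skin.getColor(sx, sy) : ofColor(0, 0, 0, 0);
            out.setColor(x, y, c);
        }
    }
}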

Here's a diagram of my setup:

This is a study for an effect to be used in a live performance.






Find the code here

source

Zach Recap

Above: our 3D visualization of the federal budget; more info here.

Above: a triangle rendering, experimenting with the Kinect and OpenGL; I used triangles of random sizes instead of quads to draw a mesh.
Above: Pong Head, a game of Pong that uses the Kinect to control the paddles; the loser's face becomes the ball. More 3DSAV projects of mine here.




How to Disappear Completely + Dissolution



Fred Truman: Recap




Milk Scanner



Hit Test



OpenGL Lines



Triangle Fan Glitch Art



Budget Climb

Frankie: Recap


I came into the class with no prior knowledge of 2D computer vision or 3D graphics, so it's been a great learning experience. The class was a fast-paced overview of 2D computer vision techniques as well as an exploration of 3D sensing and visualization. It was exciting, and I feel motivated to delve deeper into many of the topics we covered when I have more time in the future.


During the first week, I worked with Kevin and Nisma on hacking a camera into an IR scanner.

The bounding box (cube) and centroid assignment was a good intro to translating 2D computer vision exercises into a 3D context.

Creating an object in 3D space and making it into a "switch" of sorts gave me a taste of the types of interactions that are possible.


I was also exposed to OpenGL and shaders for the first time, and had some fun experimenting with the various types of meshes. My favorite was the triangle fan mesh.

For the final project, I worked with Zach and Fred to create Budget Climb, a project that brought together our interests in data visualization and interaction in 3D space:

Created using openFrameworks, the Microsoft Kinect, and OpenNI, Budget Climb is a physically interactive data environment where we can explore 26 years of federal spending, giving us a unique perspective on how our government spends our money. To explore the data, we must exert physical effort, revealing how the budget is distributed in a novel and tangible way.

budgetclimb.com

budgetclimb github repo


Tuesday, March 29, 2011

Kinect Abnormal Motion Assessment System

This February, at the Health 2.0 hackathon in Boston, I worked with a team of volunteers, including a psychiatric resident and a number of masters of public health, to build a prototype of a system to track hyperkinetic motion disorders. These are a class of neuromuscular disorders, frequently caused as a side effect of psychiatric drugs, in which patients' bodies move involuntarily. They include tremors as well as more violent actions, and can range from uncomfortable to debilitating.

Here's a video of a patient with Sydenham's chorea, an example of one of these debilitating disorders:



We used the skeleton data from the Kinect, accessed via OSCeleton, to automate an existing test associated with these disorders, the Abnormal Involuntary Movement Scale (AIMS). In this test, patients are instructed to sit still in a fixed position with their hands between their knees, and then the doctor evaluates how much they move on a subjective scale. Our application measured the position of the hands and knees in three dimensions and then added up the amount of motion those points underwent over a ten-second testing period. Here's an example of what the application looks like:



Our team won the hackday and was invited to travel to San Diego to compete in the national Health 2.0 hackday. We presented our application again there and won that competition as well.

We are currently working on plans for a scientific study to validate this measurement approach as well as exploring commercial options for developing the application. More information about our application and motion disorders in general is available here: motionassessment.com
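For reference, the core of the measurement step can be sketched like this (hypothetical names; assumes joint positions arrive from OSCeleton as 3D points once per frame):

// Sketch: accumulate how far a set of tracked joints move, frame to frame,
// over a ten-second testing window.
#include "ofMain.h"

class MotionScore {
public:
    map<string, ofVec3f> lastPos; // previous position per tracked joint
    float totalMotion;            // summed displacement over the test
    float testLength;             // seconds
    float startTime;

    MotionScore() : totalMotion(0), testLength(10.0f), startTime(0) {}

    void start() {
        totalMotion = 0;
        lastPos.clear();
        startTime = ofGetElapsedTimef();
    }

    bool running() const {
        return ofGetElapsedTimef() - startTime < testLength;
    }

    // Call once per frame per joint (e.g. "l_hand", "r_knee") with its position.
    void addSample(const string& joint, const ofVec3f& pos) {
        if (!running()) return;
        if (lastPos.count(joint)) {
            totalMotion += pos.distance(lastPos[joint]);
        }
        lastPos[joint] = pos;
    }
};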

Monday, March 28, 2011

Homunculus



Homunculus is a video self-portrait that explores facial expressions and physical performance. In it, I use the position of my body to puppet a 3D model of my own head. Each limb is mapped to a particular part of the face that plays a role in determining the emotional expressiveness of a facial expression: my hands control my brows, my knees control the corners of my mouth, etc.

The result is that small facial movements that distinguish different emotional expressions — a raised eyebrow, a curled lip, a brow furrow — get amplified into the large scale movements of my whole body. To achieve particular expressions such as surprise, contentment, anguish, I'm forced to contort my body into absurd positions that bear little expressive relationship to the emotion being expressed by the puppet.

The process of designing the interface, of configuring the precise mapping between skeleton joints and areas of the 3D model, also required intensive attention to which parts of my face move when making each facial expression. Likewise, the process of hand-building the 3D model of my face required diligent attention to the construction of my face.

Technically, the application accesses the skeleton data via OSCeleton and loads the 3D model (created in Cinema 4D) as an OBJ file. The code is available on GitHub: Head-Puppet. Here is a good tutorial for getting up and running with OSCeleton on OS X.
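A minimal sketch of that pipeline (not the Head-Puppet code itself; the joint name, port, and brow mapping are assumptions based on OSCeleton's /joint messages):

// Sketch: receive OSCeleton joint positions and use one joint (the left hand)
// to displace a hand-picked group of vertices (the brow) on the head mesh.
#include "ofMain.h"
#include "ofxOsc.h"

class FacePuppet {
public:
    ofxOscReceiver osc;
    ofMesh head;                   // loaded from the .obj
    vector<int> browVertices;      // indices of the brow region, picked by hand
    vector<ofVec3f> restPositions; // original positions of those vertices

    void setup() {
        osc.setup(7110); // port OSCeleton sends to by default (check your setup)
    }

    void update() {
        while (osc.hasWaitingMessages()) {
            ofxOscMessage m;
            osc.getNextMessage(&m);
            // /joint messages carry: name, user id, x, y, z
            if (m.getAddress() == "/joint" && m.getArgAsString(0) == "l_hand") {
                // Map the hand height to a brow offset and move the brow vertices.
                float lift = ofMap(m.getArgAsFloat(3), 0.0f, 1.0f, 0.0f, 20.0f);
                for (int i = 0; i < (int)browVertices.size(); i++) {
                    ofVec3f p = restPositions[i];
                    p.y += lift;
                    head.setVertex(browVertices[i], p);
                }
            }
        }
    }
};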

http://www.vimeo.com/21576570


This is a first-person view of the point cloud. The view is controlled through OpenNI.

Everything looks quicker because of framerate glitches (the video was captured directly from oF with ofxQtSaver).

Code will go to GitHub later.

Saturday, March 26, 2011

Time Travellers



Time Travellers is a real-time video mirror currently installed at NYU's Interactive Telecommunications Program. The Microsoft Kinect is used to take a "depth image" of the viewer and map it to time in a source video. The closer the viewer is to the camera, the later the moment shown in the video.
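The core idea can be sketched roughly as follows (illustrative only; assumes the source video has been pre-extracted into a vector of frames and that depth arrives in millimeters):

// Sketch: each pixel's depth chooses a frame from the source video,
// so nearer pixels show later moments in time.
#include "ofMain.h"

void drawTimeMirror(const ofShortPixels& depth, vector<ofImage>& frames,
                    ofPixels& out, unsigned short nearMM, unsigned short farMM) {
    int w = depth.getWidth(), h = depth.getHeight();
    out.allocate(w, h, OF_IMAGE_COLOR);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            unsigned short d = depth[y * w + x];
            // Closer to the camera -> later frame in the video.
            float t = 1.0f - ofClamp(ofMap(d, nearMM, farMM, 0.0f, 1.0f), 0.0f, 1.0f);
            int frame = (int)(t * (frames.size() - 1));
            ofImage& src = frames[frame];
            int sx = (int)(x * src.getWidth() / w);
            int sy = (int)(y * src.getHeight() / h);
            out.setColor(x, y, src.getColor(sx, sy));
        }
    }
}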



Created in openFrameworks. Source code available here.

Kinect VJ and Visualization Tool - FINAL

Hey All!

Here is a link to my GitHub repo where I have my code as it was during my presentation of the final. I am doing some serious updating of the code today (comments, getting rid of extraneous code, etc.), so if you download the stuff today, make sure you come back soon to get the updated code, which will be a billion times better.

Also, stay tuned for full scale documentation of the project. I highly recommend Syphon for screen capture (follow Toby's email about setting it up). I used it yesterday and it worked great.

gity up: https://github.com/dmak78/kinectVJ

some videos: http://vimeo.com/user4751444

So again, this is not my final documentation, but I wanted to make sure the code was up on github and that anyone that wants to peep it out can.

Kevin

Thursday, March 24, 2011

Final summary - Yang Liu

In this class, I mainly focused on the basic techniques from the assignments. For the final, I integrated these techniques with one of my video games made in Processing.

This is the LINK to my post.

How to Disappear Completely (teaser)

Tuesday, March 22, 2011

Aligning ofxOpenNI Skeleton and Point Cloud

Currently, the ofxOpenNI addon puts the skeleton in projective space instead of leaving it in real space. To change this, remove the following line from ofxTrackedUser.cpp inside ofxTrackedUser::updateLimb():


depth_generator->getXnDepthGenerator().ConvertRealWorldToProjective(2, pos, pos);


If you are computing the point cloud with a flipped y axis, you also need to flip the skeleton at this point:


pos[0].Y *= -1;
pos[1].Y *= -1;


From here, the data is ready to be used. If you want to see it, you need to change one more thing. Inside ofxTrackedUser.h, in ofxLimb::debugDraw():




glVertex2f(begin.x, begin.y);
glVertex2f(end.x, end.y);


Needs to be changed to:


glVertex3f(begin.x, begin.y, begin.z);
glVertex3f(end.x, end.y, end.z);

Monday, March 14, 2011

Reconstructing a Mesh from a Point Cloud

I posted a video describing one way to reconstruct a mesh from a point cloud in MeshLab, based on some info at the MeshLab blog.



Poisson Reconstruction in Meshlab from Kyle McDonald on Vimeo.



And I got a bunch of great tips from Sophie Barret-Kahn: here's an academic paper reporting on the different software that's available.



Rhino has a lot of tools for meshing, re-meshing, and surfacing (making parametrized functions that describe the mesh). Here's one for working with a point cloud:





There's more info on the Rhino tools here.



If you're more of a nerd, Matlab has some good low-level tools for handling this kind of data.



Finally, Blender has its own tools for dealing with mesh reconstruction. Taylor Goodman, who developed a structured light scanner for MakerBot, has a tutorial describing how to reconstruct a mesh for 3D printing from a point cloud:





I think there is a script for this on blenderartists but the site is broken at the moment.

Friday, March 11, 2011

Noise in the Kinect Depth Image

I've been looking into the noise that you get in the depth images that come from the Kinect. I've found two good references so far: Kinect Z Buffer Noise and Audio Beam Steering Precision and Experiment to remove noise in Kinect depth maps. The general consensus seems to be that the error in the Kinect images cannot simply be averaged out over time, and that it has to do with some kind of quantization noise in the stereo matching algorithm. Also, most of the noise is in the center of the depth range, a few meters away. There might be some way to remove the quantization noise if it's constant with respect to the 2D image -- if it's constant with respect to the 3D space, it would be way too intensive to sample.

Thursday, March 10, 2011

Kinect + CUDA

These are the first captures of a prototype combining NVIDIA's CUDA SDK smokeParticles example with the OpenNI NiViewer. The sphere of particles follows the right hand. The first and longest video shows movement captured while listening to Way Out West's One Bright Night (Instrumental), which plays as the soundtrack.

KinectCudaTest from Voxels on Vimeo.



Progression from Hello World

Sunday, March 6, 2011

3D Fractals from GLSL

3D fractals written in GLSL running through WebGL on Chrome.
http://www.subblue.com/blog/2011/3/5/fractal_lab


Thursday, March 3, 2011

3D SelfPortrait by Eric Testroete

Zach mentioned this in the class...I don't know if he showed this particular example.....

so obvious....yet...disturbing......and beautiful...




whole process

A Good Source for OpenGL Examples...

ENSIMAG

Flocking as a series of matrix operations

This week, I've been getting a grasp on the Eigen linear algebra library for C++ in order to convert Robert Hodgin's Cinder flocking tutorial into a set of linear algebra operations. This is intended to be an intermediary step as I move toward flocking as a GPGPU calculation. My guess is that if I can nail down the order of operations as matrices, it will lend itself to multithreaded and highly parallel processing.

So far, I have rewritten the separation algorithm as well as the gravitational pull towards the origin. There’s an unexpected interaction between boids at close range which I cannot explain, even after comparing the matrix operations and the traditional code in calculations by hand, but they do seem to right themselves after a bit of a tango.
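To give a flavor of what "separation as a matrix operation" can look like, here is a sketch in Eigen (illustrative, not the code described above): P is an N x 3 matrix of boid positions and the result is an N x 3 matrix of separation forces.

// Sketch: the separation rule as dense matrix operations with Eigen.
#include <Eigen/Dense>
using Eigen::MatrixXf;
using Eigen::VectorXf;

MatrixXf separation(const MatrixXf& P, float radius) {
    const int n = P.rows();

    // Pairwise squared distances: d2(i,j) = |pi|^2 + |pj|^2 - 2 pi.pj
    VectorXf sq = P.rowwise().squaredNorm();
    MatrixXf d2 = sq.replicate(1, n) + sq.transpose().replicate(n, 1)
                - 2.0f * P * P.transpose();

    // Weights: 1/d for neighbors inside the radius, 0 elsewhere (and on the diagonal).
    MatrixXf d = d2.cwiseMax(1e-6f).cwiseSqrt();
    MatrixXf W = ((d.array() < radius).cast<float>() / d.array()).matrix();
    W.diagonal().setZero();

    // F_i = sum_j w_ij * (p_i - p_j)  ==  diag(rowsum(W)) * P - W * P
    VectorXf rowSum = W.rowwise().sum();
    return rowSum.asDiagonal() * P - W * P;
}

Written this way, the whole force computation is a handful of dense matrix products, which is the form that maps most directly onto a GPGPU implementation.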

In addition to rewriting the flocking algorithm, I have attempted to fold in the OpenNI skeleton interaction and an OpenGL shader pipeline, with limited success. The OpenGL shaders compile, but I haven't gotten anything interesting to work (not even basic lighting), mostly because I've spent several days squashing mathematical bugs in the flocking code. I did manage to hack in the OpenNI skeleton and use it as a repelling force on particles that are influenced by the separation code. This will probably look a lot more interesting when the rest of the flocking code is implemented and I have some point lights attached to the skeleton joints.

To conjoin the behavior of the boids with the skeleton, I expanded the position matrix by 15 additional columns, which hold the positions of the joints. Before user tracking begins, these points are randomly distributed, but once the user is acquired, the positions are overridden and become controllable. There are all kinds of problems with the render: scaling is the most obvious, but there is also some tearing in the frames. I'm also concerned that by scaling down to a world of about 10 units, I'm running into floating point nonsense. And I'm trying to work through another problem: understanding the aperture and focal length in my stereoization example code.

I’ll continue to work on it this weekend by first finishing the flocking code and then trying to render with some materials and lighting. Here are some notes on the matrices:

http://yfrog.com/h251863569j





Wednesday, March 2, 2011

VJing with the Kinect

This last Friday, I was invited to tag along with Ryan Uzilevsky, who I intern for at his company Light Harvest, to a VJ gig he had at a big ole' techno party near Columbus Circle. It featured DJs Wolf+Lamb. They rocked the house.

I was invited to hook up the Kinect, point it at the DJ, and control the visuals with a MIDI controller, routed over OSC into openFrameworks.
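The control path is simple; a rough sketch of the receiving side (made-up OSC addresses and parameters) would be:

// Sketch: knob values arrive as OSC messages and get mapped onto
// parameters of the Kinect visuals.
#include "ofMain.h"
#include "ofxOsc.h"

class VJControls {
public:
    ofxOscReceiver osc;
    float pointSize, zoom, hueShift;

    VJControls() : pointSize(2), zoom(1), hueShift(0) {}

    void setup() {
        osc.setup(8000); // listening port is an assumption
    }

    void update() {
        while (osc.hasWaitingMessages()) {
            ofxOscMessage m;
            osc.getNextMessage(&m);
            float v = m.getArgAsFloat(0); // knob value, assumed 0..1
            if (m.getAddress() == "/knob/1") pointSize = ofMap(v, 0, 1, 1, 10);
            if (m.getAddress() == "/knob/2") zoom      = ofMap(v, 0, 1, 0.5, 3);
            if (m.getAddress() == "/knob/3") hueShift  = v * 255;
        }
    }
};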

VJing in 3D with Kinect from Kevin Bleich on Vimeo.

Time Travellers


The Kinect depth image is mapped to time in a time-lapse video of NYC. The closer you are to the camera, the later the moment shown in the video.

Original video by Erik Paulsen.