Thursday, March 31, 2011
Landman performance visualizer
For my final project, I created a performance visualization tool for 8-bit artist Nullsleep. The idea for the piece was to use Kinect depth information to create a changing landscape and then to have a 3D model of Nullsleep himself bounce around as the landscape shifts. To achieve this effect, I used the Bullet Physics C++ library, which is frequently used in games and procedural computer animation to create physical environments that move realistically. I navigated the incredibly complex Bullet Physics API to create a ground plane whose geometry is determined dynamically by the incoming depth information from the Kinect.
This project will continue to evolve as I work with Nullsleep to refine the aesthetics ahead of its premiere at Blipfest this summer.
Landman code on Github
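As a rough illustration of the approach (this is a minimal sketch, not the Landman code itself), a Kinect depth buffer can drive a Bullet btHeightfieldTerrainShape; the shape keeps a pointer to the height data, so writing fresh depth values into the buffer each frame reshapes the ground:

#include <vector>
#include <btBulletDynamicsCommon.h>
#include <BulletCollision/CollisionShapes/btHeightfieldTerrainShape.h>

const int W = 640, H = 480;                 // Kinect depth resolution
std::vector<float> heights(W * H, 0.0f);    // refilled from the depth image each frame

btRigidBody* makeKinectGround(float minHeight, float maxHeight) {
    // upAxis = 1 (y-up); PHY_FLOAT because the buffer holds floats.
    btHeightfieldTerrainShape* shape = new btHeightfieldTerrainShape(
        W, H, &heights[0], 1.0f, minHeight, maxHeight, 1, PHY_FLOAT, false);
    btDefaultMotionState* motion = new btDefaultMotionState(
        btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 0, 0)));
    // Mass 0 makes the ground static; Bullet reads 'heights' by pointer,
    // so updating it with new Kinect depths changes the terrain.
    btRigidBody::btRigidBodyConstructionInfo info(0.0f, motion, shape, btVector3(0, 0, 0));
    return new btRigidBody(info);
}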
Differential Invariants on the Depth Image (documentation in progress/unfinished)
For example, the gradient gives us the normal of the surface at a point (x, y, z): for the height field z = f(x, y), the unnormalized normal is (-df/dx, -df/dy, 1).
An example of finding the surface normals on the image of Lena can be seen below. Note that the intensity values of this 2D image are treated as elevation values.
Code at: github
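For reference, here is a minimal sketch (not the posted code) of computing per-pixel surface normals from an intensity image using central differences:

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// 'height' is a row-major w*h buffer whose intensities are treated as elevation.
std::vector<Vec3> computeNormals(const std::vector<float>& height, int w, int h) {
    Vec3 up = { 0, 0, 1 };
    std::vector<Vec3> normals(w * h, up);   // border pixels keep a default up normal
    for (int y = 1; y < h - 1; y++) {
        for (int x = 1; x < w - 1; x++) {
            // Central differences approximate the gradient (df/dx, df/dy).
            float fx = (height[y * w + x + 1] - height[y * w + x - 1]) * 0.5f;
            float fy = (height[(y + 1) * w + x] - height[(y - 1) * w + x]) * 0.5f;
            // For the height field z = f(x, y), the unnormalized normal is (-fx, -fy, 1).
            float len = std::sqrt(fx * fx + fy * fy + 1.0f);
            Vec3 n = { -fx / len, -fy / len, 1.0f / len };
            normals[y * w + x] = n;
        }
    }
    return normals;
}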
Final Project
Our code can be downloaded from github here.
We thought it might look nice to create a branching object that traces/follows the human body and has an anthropomorphic look.
We worked a lot on it, but the outcome was not at all what we wanted. Although the graphics look nice/interesting on their own, the result is not what we would have liked when applied to data from the Kinect.
We used the limb "begin/end" locations from the skeleton to create a set of 3D paths for our branching algorithm to follow. For the path following, we used Daniel Shiffman's path-following Processing example, which we ported to openFrameworks in 3D instead of 2D.
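The core of the 3D port can be sketched roughly like this (illustrative names and constants, assuming openFrameworks' ofVec3f; see the repo for the real thing): predict a future position, project it onto the current path segment, and steer back only when outside the path radius.

#include "ofMain.h"

ofVec3f followPath(const ofVec3f& pos, const ofVec3f& vel,
                   const ofVec3f& a, const ofVec3f& b,      // current path segment
                   float pathRadius, float maxSpeed, float maxForce) {
    ofVec3f predict = pos + vel.getNormalized() * 25.0f;    // predicted future position
    ofVec3f ab = (b - a).getNormalized();
    ofVec3f normalPoint = a + ab * (predict - a).dot(ab);   // projection onto the segment
    if (predict.distance(normalPoint) > pathRadius) {       // off the path: steer back
        ofVec3f target = normalPoint + ab * 10.0f;          // aim a little ahead
        ofVec3f steer = (target - pos).getNormalized() * maxSpeed - vel;
        steer.limit(maxForce);
        return steer;
    }
    return ofVec3f(0, 0, 0);                                // on the path: no correction
}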
Some of the results follow:
On its own, our algorithm produces a visually rich result, as can be seen in the following image:
Preliminary Music Visualizer
Next Steps:
1) Add Color
2) Work more on camera positioning and rotation
3) Automatic triggering of visual effects and rotation
4) Gesture control to trigger specific transformation sequences
5) Improve the pause/delay function for multiple point clouds
6) Experiment with different ways to map the FFT data to the point cloud (one possible mapping is sketched below).
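For (6), one possible mapping, sketched here under the assumption that the app uses openFrameworks' ofSoundGetSpectrum() and stores the cloud as a std::vector of ofVec3f (the names and scale factor are illustrative, not from the project code):

#include "ofMain.h"
#include <vector>

// Push each point away from the cloud's centroid by the magnitude of an FFT band.
void applyFFT(std::vector<ofVec3f>& cloud, const ofVec3f& centroid, int nBands) {
    float* spectrum = ofSoundGetSpectrum(nBands);     // smoothed FFT bins, roughly 0..1
    for (size_t i = 0; i < cloud.size(); i++) {
        int band = i % nBands;                        // spread the bands across the cloud
        ofVec3f dir = (cloud[i] - centroid).getNormalized();
        cloud[i] += dir * spectrum[band] * 50.0f;     // 50 = arbitrary displacement scale
    }
}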
Source Code:
http://itp.nyu.edu/~mk3321/3dsav/visualizer.zip
Building a sculpture with Kinect IR structured light
Objectifying Breath
Untitled from Diana Huang on Vimeo.
Final project and all documentation
Wednesday, March 30, 2011
Molly Recap
then adding dot particles from a center point when a person moved past a z-space threshold.
In the course of things I broke a lot of projects, especially moving back and forth between openFrameworks 062 and 007, but that's been really useful for getting my sea legs in OF. But I spent most of the semester working on tests for a larger project called The Hidden Kingdom. My final for 3dsav includes setting spheres up in a 3D space aligned with the Kinect space, defining boxes of space that determine interactions with people, starting to treat the lighting, and making the spheres react. When a person comes in contact with a cube of space, all of the spheres in that space turn red and start to wobble upward... Code is here!
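The cube test itself is simple; a minimal sketch (illustrative names, not the actual Hidden Kingdom code) looks something like this:

#include "ofMain.h"
#include <vector>

struct InteractionCube {
    ofVec3f minCorner, maxCorner;    // opposite corners, in the Kinect-aligned space
    bool triggered;
};

bool contains(const InteractionCube& c, const ofVec3f& p) {
    return p.x >= c.minCorner.x && p.x <= c.maxCorner.x &&
           p.y >= c.minCorner.y && p.y <= c.maxCorner.y &&
           p.z >= c.minCorner.z && p.z <= c.maxCorner.z;
}

void updateCubes(std::vector<InteractionCube>& cubes,
                 const std::vector<ofVec3f>& kinectPoints) {
    for (size_t i = 0; i < cubes.size(); i++) {
        cubes[i].triggered = false;
        for (size_t j = 0; j < kinectPoints.size(); j++) {
            if (contains(cubes[i], kinectPoints[j])) {
                cubes[i].triggered = true;    // spheres in this cube turn red and wobble
                break;
            }
        }
    }
}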
In action
Testing with Interaction Cubes Outlined
hiddenkingdomtestt from Molly Schwartz on Vimeo.
Holographic Warpaint
Last spring I worked on a play. It was an adaptation of Samuel Delany's epic science fiction novel, Dhalgren. This book is crazy, and the play was crazy - I did the sound design and worked on the video as well. This is the kind of book that sticks with you (the wild dense prose, the imagery, the....extremely detailed pornographic sex), and there were many things from the book that weren't realized in the play and I'm holding onto them. The book is set here - well, in a city somewhere in America - after an unnamed disaster has taken place. The city is a wasteland, but people are still living there. They live for free in parks, or squat in apartments where nothing works.
Gangs, known as the Scorpions, run the streets. This is the element I'm thinking about. Members of the Scorpions wear projector necklaces. When they press a button on the projector a holographic animal surrounds their bodies. Like holographic warpaint. One of the characters is known as Dragon Lady, because her projection is a dragon. One of them is a baby dinosaur - which I love. One of them doesn't work correctly and appears as an amorphous blob. I think it's weird that I can't find an image of this somewhere. I feel like it's one of the most memorable images from the book - gangs of fierce, oversized, holographic animals walking through the streets.
So, I made a failed attempt at this last semester in ICM using color tracking with lame colored LEDs strapped to my body. When the Kinect came out, I knew it was a solution, which is why I'm in this class.
I had previously envisioned a solid, neon colored animal shape for these shields, and thought of using skeleton tracking with OpenNI to animate a 3D character. I was nervous about the animated character, though, and pretty sure it would look dumb.
A simple, and I think effective, solution occurred to me late in the game. I reimagined the design of the holograms - they could be skinned as the creatures rather than shaped like them. I modified an example from class to remove background information and then map pixels from existing images onto the depth image from the Kinect. I projected this onto two layers of mesh that I stood behind, producing a faux 3D projection effect. I tried a couple of images - two dinosaurs and a lizard.
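The masking step is roughly this (a sketch with assumed names, not the code used in the show): threshold the Kinect depth image into a near/far band and use it as the alpha channel for the creature image.

#include "ofMain.h"
#include <vector>

// 'depth' is the 8-bit Kinect depth image; 'creature' is an RGB image already
// resized to the same w x h; 'out' receives the masked RGBA result.
void maskCreature(const unsigned char* depth, int w, int h,
                  ofImage& creature, ofImage& out, int nearClip, int farClip) {
    unsigned char* src = creature.getPixels();            // RGB, 3 bytes per pixel
    std::vector<unsigned char> rgba(w * h * 4);
    for (int i = 0; i < w * h; i++) {
        bool onBody = depth[i] > nearClip && depth[i] < farClip;
        rgba[i * 4 + 0] = src[i * 3 + 0];
        rgba[i * 4 + 1] = src[i * 3 + 1];
        rgba[i * 4 + 2] = src[i * 3 + 2];
        rgba[i * 4 + 3] = onBody ? 255 : 0;                // transparent off the body
    }
    out.setFromPixels(&rgba[0], w, h, OF_IMAGE_COLOR_ALPHA);
}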
Here's a diagram of my setup:
This is a study for an effect to be used in a live performance.
Find the code here
Zach Recap
Frankie: Recap
Created using openFrameworks, the Microsoft Kinect, and OpenNI, Budget Climb is a physically interactive data environment where we can explore 26 years of federal spending, giving us a unique perspective on how our government spends our money. In order to explore the data, we must exert physical effort, which reveals how the budget is distributed in a novel and tangible way.
Tuesday, March 29, 2011
Kinect Abnormal Motion Assessment System
Here's a video of a patient with Sydenham's chorea, an example of one of these debilitating disorders:
We used the skeleton data from the Kinect, accessed via OSCeleton, to automate an existing test associated with these disorders, the Abnormal Involuntary Movement Scale (AIMS). In this test, patients are instructed to sit still in a fixed position with their hands between their knees, and the doctor then evaluates how much they move on a subjective scale. Our application measured the position of the hands and knees in three dimensions and summed the amount of motion those points underwent over a ten-second testing period. Here's an example of what the application looks like:
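The measurement itself reduces to summing frame-to-frame joint displacement over the test window; here is an illustrative sketch (not the hackday code) using openFrameworks and OSCeleton-style joint updates:

#include "ofMain.h"
#include <map>
#include <string>

std::map<std::string, ofVec3f> lastPos;   // previous position of each tracked joint
float totalMotion = 0;                    // summed displacement over the test
float testStart = 0;                      // set when the test begins
const float TEST_LENGTH = 10.0f;          // seconds

// Called for each incoming OSCeleton joint update (joint name plus x, y, z).
void onJoint(const std::string& joint, float x, float y, float z) {
    if (ofGetElapsedTimef() - testStart > TEST_LENGTH) return;   // test window over
    ofVec3f pos(x, y, z);
    if (lastPos.count(joint)) {
        totalMotion += pos.distance(lastPos[joint]);   // add this frame's movement
    }
    lastPos[joint] = pos;
}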
Our team won the hackday and was invited to travel to San Diego to compete in the national Health 2.0 hackday. We presented our application again there and won that competition as well.
We are currently working on plans for a scientific study to validate this measurement approach as well as exploring commercial options for developing the application. More information about our application and motion disorders in general is available here: motionassessment.com
Monday, March 28, 2011
Homunculus
Homunculus is a video self-portrait that explores facial expressions and physical performance. In it, I use the position of my body to puppet a 3D model of my own head. Each limb is mapped to a particular part of the face that plays a role in determining emotional expression: my hands control my brows, my knees control the corners of my mouth, etc.
The result is that small facial movements that distinguish different emotional expressions — a raised eyebrow, a curled lip, a brow furrow — get amplified into the large scale movements of my whole body. To achieve particular expressions such as surprise, contentment, anguish, I'm forced to contort my body into absurd positions that bear little expressive relationship to the emotion being expressed by the puppet.
The process of designing the interface, of configuring the precise mapping between skeleton joints and areas of the 3D model, also required intensive attention to which parts of my face move when making each facial expression. Likewise, the process of hand-building the 3D model of my face required diligent attention to the construction of my face.
Technically, the application accesses the skeleton data via OSCeleton and loads the 3D model (created in Cinema 4D) as an .obj file. The code is available on GitHub: Head-Puppet. Here is a good tutorial for getting up and running with OSCeleton on OS X.
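As a sketch of the mapping idea (the names, OSC scaling, and ranges here are assumptions rather than the repo code), an OSCeleton joint position can drive the offset of a vertex group on the head mesh:

#include "ofMain.h"
#include "ofxOsc.h"
#include <vector>

// Assumes the receiver was set up on OSCeleton's port elsewhere (e.g. 7110),
// the head loaded into 'headMesh', a pristine copy kept in 'restMesh', and the
// brow-region vertex indices collected in 'browVertices'.
ofxOscReceiver receiver;
ofMesh headMesh, restMesh;
std::vector<int> browVertices;

void update() {
    while (receiver.hasWaitingMessages()) {
        ofxOscMessage m;
        receiver.getNextMessage(&m);
        // OSCeleton sends /joint with: name, user id, x, y, z.
        if (m.getAddress() == "/joint" && m.getArgAsString(0) == "l_hand") {
            // Map the left hand's height to a brow raise (ranges are assumptions).
            float raise = ofMap(m.getArgAsFloat(3), 0.0f, 1.0f, -20.0f, 20.0f, true);
            for (size_t i = 0; i < browVertices.size(); i++) {
                ofVec3f v = restMesh.getVertex(browVertices[i]);
                headMesh.setVertex(browVertices[i], ofVec3f(v.x, v.y + raise, v.z));
            }
        }
    }
}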
http://www.vimeo.com/21576570
Saturday, March 26, 2011
Time Travellers
Time Travellers is a real-time video mirror currently installed at NYU’s Interactive Telecommunications Program. The Microsoft Kinect is used to take a “depth image” of the viewer and map it to time in a source video: the closer the viewer is to the camera, the later in time the video is sampled.
Created in openFrameworks. Source code available here.
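The mapping can be sketched like this (assumed structure and names, not the installed code): frames of the source video are preloaded, and each output pixel samples the frame chosen by the Kinect depth at that pixel, so nearer means later.

#include "ofMain.h"
#include <vector>

std::vector<ofPixels> sourceFrames;   // frames of the source video, preloaded in order

// 'depth' is the 8-bit depth image; 'out' is assumed allocated as w x h RGB.
void buildOutput(const unsigned char* depth, int w, int h, ofPixels& out) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int frame = ofMap(depth[y * w + x], 0, 255,
                              0, sourceFrames.size() - 1, true);
            out.setColor(x, y, sourceFrames[frame].getColor(x, y));
        }
    }
}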
Kinect VJ and Visualization Tool - FINAL
Here is a link to my github repo where I have my code as it was during my presentation of the final. I am doing some serious updating of the code today (comments, getting rid of extraneous code, etc.), so if you download the stuff today, make sure you come back soon to get the updated code, which will be a billion times better.
Also, stay tuned for full scale documentation of the project. I highly recommend Syphon for screen capture (follow Toby's email about setting it up). I used it yesterday and it worked great.
gity up: https://github.com/dmak78/kinectVJ
some videos: http://vimeo.com/user4751444
So again, this is not my final documentation, but I wanted to make sure the code was up on github and that anyone who wants to peep it out can.
Kevin
Thursday, March 24, 2011
Final summary - Yang Liu
This is the LINK to my post.
Tuesday, March 22, 2011
Aligning ofxOpenNI Skeleton and Point Cloud
depth_generator->getXnDepthGenerator().ConvertRealWorldToProjective(2, pos, pos);
If you are computing the point cloud with a flipped y axis, you also need to flip the skeleton at this point:
pos[0].Y *= -1;
pos[1].Y *= -1;
From here, the data is ready to be used. If you want to see it, you need to change one more thing. Inside ofxTrackedUser.h, in ofxLimb::debugDraw():
glVertex2f(begin.x, begin.y);
glVertex2f(end.x, end.y);
Needs to be changed to:
glVertex3f(begin.x, begin.y, begin.z);
glVertex3f(end.x, end.y, end.z);
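Once that change is in, the skeleton can be drawn inside the same transform as the point cloud so the two line up; roughly like this (the names are illustrative, assuming an ofMesh point cloud and an ofxTrackedUser filled elsewhere):

void draw() {
    // ofMesh pointCloudMesh; ofxTrackedUser user;  (populated elsewhere)
    ofPushMatrix();
    ofTranslate(ofGetWidth() / 2, ofGetHeight() / 2);
    ofScale(1, -1, -1);              // the same flip applied to both point cloud and skeleton
    pointCloudMesh.drawVertices();   // the Kinect point cloud
    user.debugDraw();                // now emits glVertex3f, so limbs land in the same space
    ofPopMatrix();
}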
Monday, March 14, 2011
Reconstructing a Mesh from a Point Cloud
I posted a video describing one way to reconstruct a mesh from a point cloud in Meshlab, based on some info at the Meshlab blog.
Poisson Reconstruction in Meshlab from Kyle McDonald on Vimeo.
And I got a bunch of great tips from Sophie Barret-Kahn: here's an academic paper reporting on the different software that's available.
Rhino has a lot of tools for meshing, re-meshing, and surfacing (making parametrized functions that describe the mesh). Here's one for working with a point cloud:
There's more info on the Rhino tools here.
If you're more of a nerd, Matlab has some good low-level tools for handling this kind of data.
Finally, Blender has its own tools for dealing with mesh reconstruction. Taylor Goodman, who developed a structured light scanner for Makerbot, has a tutorial describing how to reconstruct a mesh for 3d printing from a point cloud:
I think there is a script for this on blenderartists but the site is broken at the moment.
Friday, March 11, 2011
Noise in the Kinect Depth Image
Thursday, March 10, 2011
Kinect + CUDA
KinectCudaTest from Voxels on Vimeo.
Progression from Hello World
Monday, March 7, 2011
Sunday, March 6, 2011
3D Fractals from GLSL
Thursday, March 3, 2011
3D SelfPortrait by Eric Testroete
so obvious....yet...disturbing......and beautiful...
whole process
Flocking as a series of matrix operations
This week, I’ve been getting a grasp on the Eigen linear algebra library for C++ in order to convert Robert Hodgin’s Cinder flocking tutorial into a series of matrix operations. This is intended to be an intermediary step as I move towards flocking as a GPGPU calculation. My guess is that if I can nail down the order of operations as matrices, it will lend itself to multithreaded and highly parallel processing.
So far, I have rewritten the separation algorithm as well as the gravitational pull towards the origin. There’s an unexpected interaction between boids at close range that I cannot explain, even after working through the matrix operations and the traditional code by hand, but they do seem to right themselves after a bit of a tango.
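For reference, here is roughly how the separation step reads as matrix operations; this is a simplified formulation in Eigen (one boid position per row, unit neighbor weights), not the exact code in the repo:

#include <Eigen/Dense>
using Eigen::MatrixXf;
using Eigen::VectorXf;

// P is N x 3 (one boid position per row); returns an N x 3 matrix of separation forces.
MatrixXf separation(const MatrixXf& P, float radius) {
    const int n = P.rows();
    // Pairwise squared distances: D(i,j) = |p_i|^2 + |p_j|^2 - 2 p_i . p_j
    VectorXf sq = P.rowwise().squaredNorm();
    MatrixXf D = sq.replicate(1, n) + sq.transpose().replicate(n, 1)
               - 2.0f * P * P.transpose();
    // Neighbor mask: 1 for distinct boids inside the radius, 0 otherwise.
    MatrixXf W = (D.array() > 1e-6f && D.array() < radius * radius)
                     .cast<float>().matrix();
    // Force on boid i: sum_j W(i,j) * (p_i - p_j) = rowSum(W) * p_i - row i of (W * P)
    VectorXf rowSums = W.rowwise().sum();
    return rowSums.asDiagonal() * P - W * P;
}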
In addition to rewriting the flocking algorithm, I have attempted to fold in the OpenNI skeleton interaction and an OpenGL shader pipeline, with limited success. The OpenGL shaders compile, but I haven’t gotten anything interesting working yet (not even basic lighting), mostly because I’ve spent several days squashing mathematical bugs in the flocking code. I did manage to hack in the OpenNI skeleton and use it as a repelling force on the particles influenced by the separation code. This will probably look a lot more interesting once the rest of the flocking code is implemented and I have some point lights attached to the skeleton joints.
To conjoin the behavior of the boids with the skeleton, I expanded the position matrix by 15 additional columns, which hold the positions of the joints. Before user tracking begins, these points are randomly distributed, but once a user is tracked, the positions are overridden and become controllable. There are all kinds of problems with the render: scaling is the most obvious, but there is also some tearing in the frames. I’m also concerned that by scaling down to a world of about 10 units, I’m running into floating-point nonsense. And I’m still trying to work through another problem: understanding the aperture and focal length in my stereoization example code.
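Continuing the Eigen sketch above, the joint override is just a per-frame write into those extra columns; illustratively, assuming the positions are stored column-wise as a 3 x (numBoids + 15) matrix and the joints arrive as ofVec3f:

// Each frame after user tracking starts, replace the last 15 columns with the
// (scaled) OpenNI joint positions so the flock reacts to the skeleton.
void overrideJointColumns(Eigen::MatrixXf& P, const std::vector<ofVec3f>& joints,
                          int numBoids, float worldScale) {
    for (int j = 0; j < (int)joints.size() && j < 15; j++) {
        P.col(numBoids + j) << joints[j].x * worldScale,
                               joints[j].y * worldScale,
                               joints[j].z * worldScale;
    }
}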
I’ll continue to work on it this weekend by first finishing the flocking code and then trying to render with some materials and lighting. Here are some notes on the matrices:
Wednesday, March 2, 2011
VJing with the Kinect
I was invited to hook up the Kinect, point it at the DJ, and control the visuals in openFrameworks with a MIDI controller, routed in over OSC.
VJing in 3D with Kinect from Kevin Bleich on Vimeo.
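The control path is straightforward; a minimal sketch (the port, OSC address, and parameter here are assumptions, not what was actually used):

#include "ofMain.h"
#include "ofxOsc.h"

ofxOscReceiver osc;
float pointSize = 2.0f;            // example parameter driven by a knob

void setup() {
    osc.setup(12345);              // MIDI knob values forwarded here as OSC
}

void update() {
    while (osc.hasWaitingMessages()) {
        ofxOscMessage m;
        osc.getNextMessage(&m);
        if (m.getAddress() == "/midi/cc/1") {   // controller 1 mapped to point size
            pointSize = ofMap(m.getArgAsFloat(0), 0, 127, 1, 10, true);
        }
    }
}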