First we followed up on the homework and talked about some of the pitfalls of basic tracking and how to mitigate them (averaging over regions of depth data, applying temporal smoothing).
Then we discussed gestures, a general term for higher-level analysis of depth data. We focused on:
- ofxOpenNI, including basic initialization, skeleton tracking, and hand tracking
- ofxFaceShift and FaceShiftOSC
We spent the last half hour looking at some work from Toshi, a something-in-residence at ITP and former ITP student who is sitting in on the class. Then we discussed ofxCv, which we'll definitely get into more in the future.
The assignment this week is to pick an interesting gesture, track it using a Kinect (with any available library/toolkit: ofxKinect, ofxOpenNI, ofxFaceShift), and use the gesture to drive some output that is not on a screen.