This assignment will go down in infamy. I am pleased with my results, but there is still so much more to explore in all this data.
So far, I have been able to calculate and draw the bounding box and centroids. Next I experimented with some blob detection and contour finding. Once I had something cogent together, I realized an easy way to track small blobs in 3D: vary the threshold amount on the contour search in step with my movement back and forth along the z-axis. Right now I am only tracking the brightest bits, so if, for example, I were tracking fingers on both hands, the dual track is lost as soon as one hand is farther back than the other. Must work on this.
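The depth-threshold idea above can be sketched roughly like this. This is a minimal illustration, not the actual project code: it assumes the Kinect frame arrives as a 2-D array of brightness/depth values, and it uses plain NumPy in place of the OpenCV threshold/moments calls so it stays self-contained. The function name and the toy frame are made up for the example.

```python
import numpy as np

def brightest_blob_centroid(frame, thresh):
    """Return the (x, y) centroid of all pixels at or above `thresh`,
    or None if nothing clears the threshold.

    Stands in for a cv2.threshold + cv2.findContours + cv2.moments
    pipeline; `frame` is assumed to be a 2-D depth/brightness array.
    """
    mask = frame >= thresh
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)          # pixel coordinates of the blob
    return (float(xs.mean()), float(ys.mean()))

# Raising the threshold isolates progressively brighter (nearer) blobs,
# which is why moving back and forth along z changes what gets tracked.
frame = np.zeros((8, 8))
frame[2, 3] = 200   # a "near", bright point
frame[6, 6] = 120   # a "farther", dimmer point

print(brightest_blob_centroid(frame, 150))  # only the brightest point survives
print(brightest_blob_centroid(frame, 100))  # both points get averaged together
```

This also shows the failure mode described above: with a single brightness threshold, two blobs at different depths collapse into one centroid (or the farther one drops out entirely), which is exactly why dual-hand tracking breaks.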
For some quick interaction, I decided to use the blobs to control the pan around the 3D point cloud, which is what you are seeing here. Apologies for the jumpy graphics; I think I was getting some noise from the Kinect, because it was running a lot more smoothly than that in person.
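A blob-to-pan mapping like the one driving the video can be sketched in a few lines. Everything here is hypothetical (the function names, the ±60° pan range, the smoothing factor): it just shows one plausible way to map a centroid's x position to a pan angle, plus an exponential moving average to tame the kind of jumpy, noisy tracking mentioned above.

```python
def centroid_to_pan(cx, width, max_pan_deg=60.0):
    """Map a blob centroid's x position (0..width) to a pan angle.

    A centered blob gives 0 degrees; the frame edges give +/- max_pan_deg.
    The range is an arbitrary choice for this sketch.
    """
    return ((cx / width) - 0.5) * 2.0 * max_pan_deg

def smooth(prev, new, alpha=0.2):
    """Exponential moving average: one cheap way to damp tracking noise."""
    return prev + alpha * (new - prev)

print(centroid_to_pan(320, 640))   # centered blob -> 0.0 degrees
print(centroid_to_pan(640, 640))   # right edge   -> 60.0 degrees
print(smooth(0.0, 60.0))           # noisy jump to 60 only moves us to 12.0
```

Feeding each frame's raw pan value through `smooth` instead of applying it directly would likely have hidden most of the jitter in the recording, at the cost of a little lag.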
In my research into OpenCV and motion tracking, I realized I knew zilch about coding for gesture recognition. I think I am ready for that challenge, though; a week ago I didn't know how to do what I did in this video, either.
At the top of my list of questions for tomorrow: good resources for learning how to track and utilize gestures, and how I might track multiple discrete objects in 3D.
I have a few ideas for visualization as well, but I am getting way ahead of myself. 3D baby steps.