3D Depth Sculpting using copies of my own body and other objects
I became intrigued by the possibility of sculpting in 3D with the Kinect by periodically recording the object closest to the camera in each part of the image. By taking multiple 3D snapshots at different times and merging them to keep the closest object at each point, I can create a 3D sculpture that I can walk through. Here I merge multiple 3D video streams. http://www.youtube.com/watch?v=LKjzbyBpkM8
Here I use a "depth strobe", grabbing snapshots once or twice a second.
Here I take a single snapshot with some furniture and then remove it, allowing me to wander through the ghost of the furniture. http://www.youtube.com/watch?v=abS7G5ZT17c
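The core trick in these sculpting demos is a per-pixel minimum over depth snapshots. Here is a minimal sketch of that merge, assuming the libfreenect Python bindings and the Kinect's 640x480 11-bit depth output (where 2047 means "no reading"); the demos above are openFrameworks, so this is an illustration of the idea, not the original code:

```python
import numpy as np
import freenect

# Running "sculpture": the closest depth seen so far at each pixel.
# 2047 is the raw 11-bit value libfreenect reports for "no reading".
sculpture = np.full((480, 640), 2047, dtype=np.uint16)

def add_snapshot():
    depth, _ = freenect.sync_get_depth()   # one 480x640 depth frame
    valid = depth < 2047                   # ignore pixels with no reading
    np.minimum(sculpture, depth, out=sculpture, where=valid)
```

Calling add_snapshot() on a timer gives the "depth strobe"; rendering the accumulated array as a point cloud gives a frozen shape you can walk around.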
A second version of the green screen demo. This one does movies too.
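The essence of a depth-keyed green screen is a threshold on the depth image used as a mask over the RGB frame. A sketch with the libfreenect Python bindings (the threshold value is arbitrary, and this assumes the depth and RGB images are roughly aligned; a real version would register the two cameras):

```python
import numpy as np
import freenect

def green_screen(background, threshold=800):
    """Composite the live camera image over `background` (a 480x640x3
    array) wherever the depth reading says something is close."""
    rgb, _ = freenect.sync_get_video()         # 480x640x3 RGB frame
    depth, _ = freenect.sync_get_depth()       # raw 11-bit depth
    mask = (depth > 0) & (depth < threshold)   # close + valid pixels
    out = background.copy()
    out[mask] = rgb[mask]
    return out
```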
A 3D point cloud rendered as scaled boxes. Also superimposes 3D models into the scene (a rifle, of course) and uses OSC messages from an iPad to control the scene. http://www.youtube.com/watch?v=4yp37U-YHv4
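Receiving OSC from an iPad controller app takes only a few lines in any OSC library. A sketch using python-osc (the address /scene/scale is a hypothetical example, not the address the demo actually uses):

```python
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_scale(address, value):
    # Update the box scaling used when rendering the point cloud.
    print(address, value)

dispatcher = Dispatcher()
dispatcher.map("/scene/scale", on_scale)   # hypothetical iPad fader address
BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher).serve_forever()
```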
Extracting rotation data from real objects and mapping it onto virtual ones. Shows how I can extract the rotation of objects seen by the Kinect and use that rotation to change the orientation of virtual objects within the Box2D space, creating a virtual bat out of a real one! Notice that I have mirrored the color video stream so that it acts more like a mirror than a webcam, letting me overlay the 2D graphics onto the camera image for more realism. http://www.youtube.com/watch?v=bO3YwW3WajI
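The demo itself is openFrameworks, but the idea translates directly: threshold the depth image into a silhouette, find the largest contour, and take the angle of its minimum-area bounding rectangle. A sketch using modern OpenCV in Python (the video doesn't state the exact method, so treat this as one plausible way):

```python
import cv2

def blob_angle(mask):
    """Rotation (degrees) of the largest blob in a binary mask, e.g. a
    depth-thresholded silhouette of the hand-held 'bat'."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (_, _), (_, _), angle = cv2.minAreaRect(largest)
    return angle   # feed this into the Box2D body's angle each frame
```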
Driving Quake Live with a Kinect. It uses OpenKinect, the Python bindings, and web.py on a Linux box to expose nearest-point data. The iMac runs Quake and a custom Java program that polls the Linux web server and uses java.awt.Robot to generate mouse and keystroke events. http://www.youtube.com/watch?v=uvP2u2yOcNw Sorry about the resolution, but I'll try to upload a better one later.
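A minimal sketch of the Linux side, assuming the libfreenect Python bindings; the /nearest route and the JSON shape are my own illustration, not the original code:

```python
import json
import numpy as np
import freenect
import web

urls = ('/nearest', 'Nearest')

class Nearest:
    def GET(self):
        depth, _ = freenect.sync_get_depth()   # raw 11-bit depth frame
        # 2047 means "no reading" and is already the maximum raw value,
        # so argmin naturally skips invalid pixels.
        y, x = np.unravel_index(np.argmin(depth), depth.shape)
        return json.dumps({'x': int(x), 'y': int(y),
                           'depth': int(depth[y, x])})

if __name__ == '__main__':
    web.application(urls, globals()).run()
```

On the other machine, the Java program polls this URL and turns changes in the nearest point into java.awt.Robot mouse moves and key presses.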
openFrameworks, Box2D, OpenCV and ofxKinect. This uses the depth map to find the point closest to the Kinect and uses that point to draw a line that is part of the Box2D world. The line can then be moved around with your hand or a magic wand (in my case a roll of string!) so that other objects within the 2D world can be manipulated. Works well. http://www.youtube.com/watch?v=pR46sXjEtzE
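The same idea in Python terms, using pybox2d: carry the line on a kinematic body and reposition it to the tracked closest point each frame (names and dimensions are illustrative):

```python
from Box2D import b2EdgeShape, b2World

world = b2World(gravity=(0, -10))
# Kinematic body carrying the "wand" line; dynamic bodies collide with it.
wand = world.CreateKinematicBody(position=(0, 0))
wand.CreateFixture(shape=b2EdgeShape(vertices=[(-0.5, 0), (0.5, 0)]))

def update(closest_point_world):
    wand.position = closest_point_world   # follow the hand / magic wand
    world.Step(1.0 / 60, 8, 3)            # advance the physics simulation
```

Setting the position directly teleports the body; driving it via linearVelocity toward the target gives smoother pushes against other bodies.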
openFrameworks, Box2D and OpenCV. Uses the blobs produced by OpenCV contour detection to generate a Box2D object that manipulates other Box2D objects. Works OK, but filtering the blobs is quite error-prone. http://www.youtube.com/watch?v=NlrKcpUPtwM
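One common guard against those noisy blobs is to reject contours whose area is implausible before handing them to the physics world. A Python/OpenCV sketch (the thresholds are arbitrary placeholders):

```python
import cv2

def stable_blobs(mask, min_area=500, max_area=50000):
    """Keep only contours whose area is plausible for a hand or prop."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if min_area < cv2.contourArea(c) < max_area]
```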
Actually... this isn't using OpenKinect. I'm using CL NUI at the moment, which is in C#, but I will switch over to the libfreenect C++ version on Mac soon...
Part 1: Example code showing how to grab Kinect observations, use multi-threading, convert the range data to a 3D point cloud, and render it in real time. More info here.
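The depth-to-point-cloud step is a per-pixel back-projection through the depth camera's intrinsics. A numpy sketch, using commonly cited approximate Kinect v1 calibration values (assumed here; a real pipeline would calibrate its own):

```python
import numpy as np

# Approximate Kinect v1 depth-camera intrinsics (assumed, not calibrated).
FX, FY, CX, CY = 594.21, 591.04, 339.5, 242.7

def depth_to_points(depth_m):
    """Back-project a 480x640 metric depth image into an Nx3 point cloud."""
    v, u = np.indices(depth_m.shape)    # pixel coordinates
    z = depth_m
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]     # drop pixels with no depth
```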
Part 2: A very simple demonstration of how to grab Kinect observations, perform visual feature tracking, and build a 3D map (SLAM). More info here.
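The feature-tracking half of that pipeline is classic sparse optical flow. A Python/OpenCV sketch (parameters are illustrative); pairing each tracked corner with its depth reading yields the 3D correspondences the mapping step needs:

```python
import cv2

def track_features(prev_gray, gray):
    """Detect corners in the previous frame and follow them into the
    current one with pyramidal Lucas-Kanade optical flow."""
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return None, None
    pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    ok = status.ravel() == 1
    return prev_pts[ok], pts[ok]   # matched point pairs across the frames
```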
TuioKinect tracks simple hand gestures using the Kinect controller and sends control data based on the TUIO protocol. This is a preliminary proof-of-concept implementation that still needs several improvements to become fully usable. Nevertheless, it should work out of the box with most TUIO client applications. You can download the source code and a Mac binary from its page.
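TUIO rides on OSC, so sending a cursor update is straightforward. A sketch of the TUIO 1.1 2D-cursor profile using python-osc (real TUIO implementations pack these three messages into one OSC bundle; this unbundled version is a simplification):

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 3333)   # 3333 is the usual TUIO port
frame = 0

def send_cursor(session_id, x, y):
    """One TUIO 1.1 /tuio/2Dcur update with normalized coordinates."""
    global frame
    frame += 1
    client.send_message("/tuio/2Dcur", ["alive", session_id])
    # set: session id, x, y, x-velocity, y-velocity, motion acceleration
    client.send_message("/tuio/2Dcur", ["set", session_id, float(x), float(y),
                                        0.0, 0.0, 0.0])
    client.send_message("/tuio/2Dcur", ["fseq", frame])
```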
The Therenect is a virtual Theremin. It defines two virtual antenna points that control the pitch and volume of a simple oscillator. The distance to these points can be controlled by freely moving the hand in three dimensions or by reshaping the hand, allowing gestures quite similar to playing an actual Theremin.
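The mapping itself is just two distances. A sketch of one plausible parameterization (the antenna positions, ranges, and exponential pitch curve are all assumptions, not the Therenect's actual values):

```python
import numpy as np

PITCH_ANTENNA = np.array([0.3, 0.0, 1.0])    # assumed positions in meters,
VOLUME_ANTENNA = np.array([-0.3, 0.0, 1.0])  # in Kinect camera space

def theremin_params(hand_xyz):
    """Map hand-to-antenna distances onto oscillator pitch and volume."""
    d_pitch = np.linalg.norm(hand_xyz - PITCH_ANTENNA)
    d_vol = np.linalg.norm(hand_xyz - VOLUME_ANTENNA)
    # Closer to the pitch antenna = higher note, spanning ~3 octaves.
    freq = 220.0 * 2.0 ** (3.0 * (1.0 - np.clip(d_pitch, 0.0, 1.0)))
    # Closer to the volume antenna = quieter, as on a real theremin.
    volume = np.clip(d_vol, 0.0, 1.0)
    return freq, volume
```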
Switch On The Code
Kinect getting started tutorial using libfreenect and the C# wrapper.
Demonstrates how to display the RGB data, and the depth data as a grayscale depth map.
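The tutorial is in C#, but the grayscale conversion amounts to scaling the 11-bit depth values into 8 bits. The same idea in Python:

```python
import numpy as np

def depth_to_gray(depth):
    """Scale a raw 11-bit Kinect depth frame (0-2047) to 8-bit grayscale."""
    return (depth >> 3).astype(np.uint8)   # 2048 levels -> 256 levels
```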
MikuMikuDance is a free dance simulator that uses its own file format for 3D human characters and performs a dance by editing the position of "bones" inside the model, frame by frame.
It is a companion program to a non-free singing text-to-speech program called Vocaloid.
Later versions can match the 3D animated character's pose to Kinect motion data.
Gestural Interaction for Training Simulations
We used the Microsoft Kinect to create a simple free-handed interface for navigating a 3D world and performing triage.
We also developed a walking system for physical exertion.
We actually use the OpenNI software with the driver package provided by avin and the Unity wrapper provided by tinkerer.