Augmented Reality, Smart Glasses and 2 MS Kinects using 3Gear Systems and OpenKinect
Uses two MS Kinects with the 3Gear Systems software on top of the OpenKinect drivers for hand tracking
Maps the hand tracking into the RGB world using the RGB cameras
Coordinates the augmented reality and the hand tracking from the Kinects
Allows 3D hand pinches and 3D multi-hand actions, such as pinch-to-zoom in 3D
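A minimal Python sketch of how the pinch and pinch-to-zoom logic could work, assuming the hand tracker already supplies fingertip positions in millimetres; the threshold value is illustrative, not anything from the 3Gear SDK.

    # Pinch detection from tracked fingertip positions (a sketch, assuming the
    # tracker gives (x, y, z) points in millimetres).
    import math

    PINCH_THRESHOLD_MM = 25.0  # illustrative: fingertips closer than this count as a pinch

    def distance(a, b):
        # Euclidean distance between two (x, y, z) points
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def is_pinching(thumb_tip, index_tip):
        return distance(thumb_tip, index_tip) < PINCH_THRESHOLD_MM

    def pinch_zoom_scale(left_pinch_pos, right_pinch_pos, start_separation):
        # Two-handed pinch-to-zoom in 3D: the zoom factor is the ratio of the
        # current separation between the two pinch points to the separation
        # measured when the gesture started.
        return distance(left_pinch_pos, right_pinch_pos) / start_separation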
3D Facial Performance Capture using Kinect
Uses the Kinect to capture data (green) of a markerless moving face (at a distance of about 1 m, with very limited coverage)
Maps the rigid and non-rigid face motion onto an animatable 3D face model (purple)
The color information is not used yet
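The rigid part of such a mapping is typically recovered by rigidly aligning corresponding 3D points; here is a minimal numpy sketch of that step using the Kabsch algorithm, assuming point correspondences are already known. This is my illustration of the standard technique, not necessarily the method used in the video.

    # Rigid alignment (rotation R, translation t) of corresponding point sets
    # via the Kabsch algorithm; the non-rigid residual would then be handled
    # by the face model's deformation parameters.
    import numpy as np

    def rigid_fit(src, dst):
        # src, dst: (N, 3) arrays of corresponding 3D points
        src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
        H = (src - src_c).T @ (dst - dst_c)      # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = dst_c - R @ src_c
        return R, t                              # dst ~= R @ src + t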
Blink Solution brings Vodafone 3G's Super Zoozoo to life
What do you get when you combine Super Zoozoo with a Microsoft Kinect, an awesome programmer, a digital artist, a creative genius and some kick-ass visuals?
3D Depth Sculpting using copies of my own body and other objects
I became intrigued by the possibility of sculpting in 3D using the Kinect by periodically recording, for each part of the image, the nearest object to the camera. By taking multiple 3D snapshots at different times and then merging them so that the closest object wins, I can create a 3D sculpture that I can walk through. Here I merge multiple 3D video streams. http://www.youtube.com/watch?v=LKjzbyBpkM8
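A minimal numpy sketch of that merge, assuming each snapshot is a Kinect depth frame where 0 means "no reading": per pixel, the smallest valid depth (the closest object) wins.

    # "Closest object wins" merge of several depth snapshots (a sketch).
    import numpy as np

    def merge_snapshots(snapshots):
        merged = np.full(snapshots[0].shape, np.inf)
        for depth in snapshots:
            valid = depth > 0                  # 0 means the sensor saw nothing
            merged[valid] = np.minimum(merged[valid], depth[valid])
        merged[np.isinf(merged)] = 0           # restore the "no reading" marker
        return merged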
Here I use a "depth strobe", grabbing snapshots once or twice a second.
Here I take a single snapshot of some furniture and then remove the furniture, allowing me to wander through its ghost. http://www.youtube.com/watch?v=abS7G5ZT17c
A second version of the green screen demo. This one can use movies as the background too.
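Depth-based green screening can be sketched like this, assuming aligned color and depth frames as numpy arrays; the cutoff distance is illustrative.

    # Depth-keyed compositing: keep anything nearer than the cutoff as
    # foreground and drop it onto the current movie frame (a sketch).
    import numpy as np

    def composite(color, depth, movie_frame, cutoff_mm=1500):
        foreground = (depth > 0) & (depth < cutoff_mm)  # person in front of cutoff
        out = movie_frame.copy()
        out[foreground] = color[foreground]             # paste the person into the movie
        return out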
3D point cloud rendered as boxes with depth-based scaling. Also superimposes 3D models into the scene (a rifle, of course) and uses OSC from an iPad to control the scene. http://www.youtube.com/watch?v=4yp37U-YHv4
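Back-projecting depth pixels into 3D with a pinhole camera model is the usual way to build such a point cloud; a minimal sketch, with assumed (uncalibrated) Kinect intrinsics rather than the values used in the demo.

    # Depth frame -> box-per-sample point cloud (a sketch); box size grows
    # with distance so far-away boxes stay visible.
    import numpy as np

    FX = FY = 594.0          # rough Kinect focal lengths in pixels (assumption)
    CX, CY = 320.0, 240.0    # principal point for a 640x480 frame

    def depth_to_boxes(depth, step=8):
        boxes = []
        for v in range(0, depth.shape[0], step):
            for u in range(0, depth.shape[1], step):
                z = float(depth[v, u])
                if z <= 0:
                    continue                  # no depth reading here
                x = (u - CX) * z / FX         # back-project to camera coordinates
                y = (v - CY) * z / FY
                size = z * 0.01               # scale boxes with distance
                boxes.append(((x, y, z), size))
        return boxes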
Extracting rotation data from real objects and mapping it to new virtual ones. Shows how I can extract the rotation of objects seen by the Kinect and use that rotation to change the orientation of virtual objects within the Box2D space, creating a virtual bat out of a real one! Notice that I have mirrored the color video stream so that it acts more like a mirror than a webcam, letting me overlay the 2D graphics onto the camera images for more realism. http://www.youtube.com/watch?v=bO3YwW3WajI
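One plausible way to get that rotation (not necessarily the one used in the video) is OpenCV's minimum-area bounding rectangle on the object's contour:

    # Object rotation from a binary mask via cv2.minAreaRect (a sketch,
    # written against the OpenCV 4 findContours signature).
    import math
    import cv2

    def object_angle(mask):
        # mask: 8-bit binary image where the tracked object is white
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        biggest = max(contours, key=cv2.contourArea)
        (_, _), (_, _), angle = cv2.minAreaRect(biggest)
        return math.radians(angle)    # Box2D body angles are in radians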
Driving Quake Live with a Kinect. It uses OpenKinect, the Python bindings and web.py on the Linux box to expose nearest-point data. The iMac runs Quake and a custom Java program that calls the Linux web server and uses java.awt.Robot to generate mouse and keystroke events. http://www.youtube.com/watch?v=uvP2u2yOcNw Sorry about the resolution, but I'll try to upload a better one later.
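The Linux half could look roughly like this web.py sketch; the freenect call is from the standard Python bindings, while the /nearest route and the JSON shape are my invention, not what the video used.

    # Expose the nearest depth point over HTTP so the Java client can poll it
    # (a sketch).
    import json
    import freenect
    import numpy as np
    import web

    urls = ('/nearest', 'Nearest')

    class Nearest:
        def GET(self):
            depth, _ = freenect.sync_get_depth()   # one 11-bit depth frame
            d = depth.astype(np.float64)
            d[d >= 2047] = np.inf                  # 2047 marks "no reading"
            y, x = np.unravel_index(np.argmin(d), d.shape)
            return json.dumps({'x': int(x), 'y': int(y),
                               'raw_depth': float(d[y, x])})

    if __name__ == '__main__':
        web.application(urls, globals()).run()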
OpenFrameworks, Box2D, OpenCV and ofxKinect. This uses the depth map to determine the closest point to the Kinect, and uses that point to draw a line that is part of the Box2D world. The line can then be moved around by moving your hand or a magic wand (in my case a roll of string!) so that other objects within the 2D world can be manipulated. Works well. http://www.youtube.com/watch?v=pR46sXjEtzE
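A minimal sketch of the tracking half, assuming depth frames arrive as numpy arrays; the smoothing factor and paddle length are illustrative, and the returned endpoints would be handed to the Box2D edge each frame.

    # Closest-point tracking with simple exponential smoothing, producing the
    # endpoints of a short "paddle" line for the physics world (a sketch).
    import numpy as np

    smoothed = None

    def paddle_endpoints(depth, half_len=40, alpha=0.4):
        global smoothed
        d = depth.astype(np.float64)
        d[d == 0] = np.inf                          # ignore missing readings
        y, x = np.unravel_index(np.argmin(d), d.shape)
        p = np.array([x, y], dtype=np.float64)
        smoothed = p if smoothed is None else alpha * p + (1 - alpha) * smoothed
        # a horizontal line centred on the tracked point
        return ((smoothed[0] - half_len, smoothed[1]),
                (smoothed[0] + half_len, smoothed[1]))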
OpenFrameworks, Box2D and OpenCV. Uses the blobs generated by OpenCV contours to create a Box2D object that manipulates other Box2D objects. Works OK, but filtering the blobs is quite error-prone. http://www.youtube.com/watch?v=NlrKcpUPtwM
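One way the filtering could be made less error-prone (my suggestion, not what the video does) is rejecting contours by area and solidity before they become Box2D objects:

    # Keep only plausible blobs: discard contours that are too small, too
    # large, or wildly non-convex (a sketch, OpenCV 4 signatures).
    import cv2

    def usable_blobs(mask, min_area=500, max_area=50000, min_solidity=0.6):
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        keep = []
        for c in contours:
            area = cv2.contourArea(c)
            if not (min_area < area < max_area):
                continue                             # noise or whole-frame blobs
            hull_area = cv2.contourArea(cv2.convexHull(c))
            if hull_area > 0 and area / hull_area >= min_solidity:
                keep.append(c)
        return keep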