Gallery
Revision as of 22:37, 7 December 2010

This page collects videos of various people doing tests and experiments with OpenKinect software.

3D Depth Sculpting using copies of my own body and other objects

I became intrigued with the possibility of sculpting in 3D using the Kinect by periodically recording the nearest object to the camera in each part of the image. By taking multiple 3D snapshots at different times and then merging them to show the closest object, I can create a 3D sculpture that I can walk through.
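The nearest-object merge described above can be sketched in NumPy (the Python wrapper delivers depth frames as arrays). `merge_snapshots` is a name invented here for illustration, not the author's code, and it assumes 0 marks "no reading":

```python
import numpy as np

def merge_snapshots(snapshots):
    """Merge 3D snapshots by keeping, per pixel, the nearest (smallest)
    depth value seen in any snapshot. A value of 0 means 'no reading'."""
    merged = None
    for depth in snapshots:
        d = depth.astype(np.float64)
        d[d == 0] = np.inf           # ignore pixels with no return
        merged = d if merged is None else np.minimum(merged, d)
    merged[np.isinf(merged)] = 0     # restore the 'no reading' marker
    return merged

# Two synthetic 2x2 depth frames (millimetres, say)
a = np.array([[1000, 0], [800, 1200]])
b = np.array([[900, 700], [850, 0]])
print(merge_snapshots([a, b]))
```

Walking through a scene while feeding frames into this merge is what produces the walkable "sculpture": every surface ever seen closest to the camera stays in the composite.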

Here I use a "depth strobe", grabbing snapshots once or twice per second:

Here I take a single snapshot with some furniture and then remove it, allowing me to wander through the ghost of the furniture:

Here I update the 3D background continuously, creating a slur of objects:

Created in Python using the Python wrapper.

Damien

People detection

Real-time people detection using two Kinects

Florian Echtler (floemuc)

"Multi-touch" interactions

Created with the libTISCH multitouch library:

Pete & Matt

Fun with a "Kinect piano"!

A vastly improved piano:

(Original version for posterity:)


3D depth video capture for measuring objects in 3D

koabi's work

Diarmuid Wrenne

A second version of the green-screen demo. This one does movies too.

3D point cloud rendered as boxes, with scaling

Also superimposing 3D models into the scene (a rifle, of course)

Also using OSC from an iPad to control the scene

3D Point cloud viewer

Extracting rotation data from real objects and mapping that to new virtual ones.

Shows how I can extract the rotation of objects seen by the Kinect and use that rotation to change the orientation of virtual objects within the Box2D space, creating a virtual bat out of a real one!

Notice that I have mirrored the color video stream so that it acts more like a mirror than a webcam, letting me overlay the 2D graphics onto the camera images for more realism.
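One common way to estimate a blob's rotation is from its second-order image moments; this NumPy sketch illustrates the idea (`blob_angle` is a hypothetical helper, not code from the project, which may extract rotation differently):

```python
import numpy as np

def blob_angle(mask):
    """Estimate a blob's orientation (radians) from the second-order
    central moments of a binary mask -- a standard trick for mapping
    a real object's rotation onto a virtual one."""
    ys, xs = np.nonzero(mask)
    x, y = xs - xs.mean(), ys - ys.mean()
    mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
    return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

# A thin diagonal bar should come out near 45 degrees
mask = np.eye(20, dtype=bool)
print(np.degrees(blob_angle(mask)))
```

Feeding this angle into the transform of a Box2D body each frame is enough to make the virtual object follow the real one's orientation.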

Driving Quake Live with a Kinect. It uses OpenKinect and the Python bindings on the Linux box to expose nearest-point data over a web server. The iMac runs Quake and a custom Java program that calls the Linux web server and uses java.awt.Robot to generate mouse and keystroke events.
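The nearest-point extraction behind this could look roughly like the following Python sketch. The function names and the frame-to-screen mapping are assumptions for illustration, not the project's actual code; it assumes raw Kinect depth frames where 0 or 2047 means "no reading":

```python
import numpy as np

def nearest_point(depth):
    """Return (row, col, depth) of the closest valid pixel. Raw 11-bit
    Kinect depth frames use 0 / 2047 for 'no reading', so mask those."""
    d = depth.astype(np.float64)
    d[(d == 0) | (d == 2047)] = np.inf
    r, c = np.unravel_index(np.argmin(d), d.shape)
    return r, c, d[r, c]

def to_screen(r, c, frame_shape=(480, 640), screen=(1080, 1920)):
    """Scale a depth-frame coordinate to screen pixels -- the kind of
    value a java.awt.Robot-style client would turn into mouse moves."""
    return (int(r * screen[0] / frame_shape[0]),
            int(c * screen[1] / frame_shape[1]))

depth = np.full((480, 640), 2047)
depth[120, 320] = 900            # a hand at roughly 0.9 m, say
r, c, d = nearest_point(depth)
print(to_screen(r, c))
```

Serving `(r, c, d)` as text over a tiny HTTP endpoint is all the Java side needs to poll.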

Sorry about the resolution but I'll try to upload a better one later.

openFrameworks, Box2D, OpenCV and ofxKinect

This uses the depth map to determine the closest point to the Kinect, and uses that point to draw a line that is part of the Box2D world. The line can then be moved around by moving your hand or a magic wand (in my case a roll of string!) so that other objects within the 2D world can be manipulated. Works well.

openFrameworks, Box2D and OpenCV

Uses the blobs generated by OpenCV contours to create a Box2D object that manipulates other Box2D objects. Works OK, but filtering the blobs is quite error-prone.
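Area filtering is one simple way to make those blobs less error-prone before handing them to a physics world. The real project uses OpenCV contours in openFrameworks; this is a self-contained Python sketch of the same idea, with an invented `blobs` helper:

```python
import numpy as np
from collections import deque

def blobs(mask, min_area=5):
    """Label 4-connected blobs in a binary mask and drop any blob
    smaller than min_area pixels -- the kind of filtering that tames
    noisy contours before they drive Box2D objects."""
    seen = np.zeros_like(mask, dtype=bool)
    kept = []
    for r0, c0 in zip(*np.nonzero(mask)):
        if seen[r0, c0]:
            continue
        q, blob = deque([(r0, c0)]), []
        seen[r0, c0] = True
        while q:                      # breadth-first flood fill
            r, c = q.popleft()
            blob.append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not seen[nr, nc]):
                    seen[nr, nc] = True
                    q.append((nr, nc))
        if len(blob) >= min_area:
            kept.append(blob)
    return kept

mask = np.zeros((10, 10), dtype=bool)
mask[1:4, 1:4] = True    # 9-pixel blob: kept
mask[8, 8] = True        # 1-pixel speckle: dropped
print(len(blobs(mask)))
```

Tuning `min_area` (or an equivalent contour-area threshold in OpenCV) trades missed small objects against spurious speckle blobs.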

Web-enabling a Kinect on Linux.

Kyle point cloud viewer

Initial render in Processing:

Viewer substituting glview.c:

Advanced viewer with DOF using ofxKinect:


Point cloud plus color


Awesome object recognition!

Lightsaber tracking and rendering

Memo Akten

Drawing in 3D


Moving the motor with a GUI while showing video streams:

Plotting accelerometer data to a graph, plus calibration (uses CL NUI; will switch later)

Using the Kinect as a mouse (uses CL NUI; will switch later)

Kinect used to control mouse (v2) with source:

Turning a TV into a touch-screen TV using Kinect:

Another KinectTouch demo, with improved, fully commented source:

Single object detection and improved framerate:

Get my latest source code from:


Full image of the IR dot field:


Head tracking for 3D vision using OpenKinect software:

James Patten

Multitouch Tracking on arbitrary planes with Kinect


Box cloud

Daniel Reetz

Removing the IR cut filter from your camera and examining the Kinect IR field, including high res image:

Emily Gobeille - Theo Watson

Digital puppetry with Kinect


Two Kinects facing the same direction, experiments: [1] [2] [3]

Rolling shutter experiments: Setup:

The top image is a plain Kinect; the other one has a rolling shutter in front of the laser projector. Both Kinects operate at the same time.

Holding the kinect automatically on level: [4]


Fist detection

Zephod / Stijn Kuipers

Windows RGB to Z Projection

netpro2k / Dominick D'Aniello

Quick background removal and parallax demo

3D object manipulation

Kinect Kart


Controlling Windows 7 applications with Kinect


The video that started it all
Laser projection on a moving surface in perspective, using the Kinect for tracking


"AS3 first communication"

"AS3 second phase, motor & accelerometers"

"AS3 Very simple hand tracking"

"AS3 Multiple hand tracking"

Eric Gradman (egradman)

"Standard gravity" art project using libfreenect and python

Scott Byrns

Point Cloud in Java

Poor Man's Object Tracking in Java

Ben X Tan

Actually... this isn't using OpenKinect. I'm using CL NUI at the moment, which is in C#, but I will switch over to the libfreenect C++ version on Mac soon...

Jean-Marc Pelletier - Nenad Popov - Andrew Roth

jit.freenect.grab: A Max/Jitter external to grab data from the Kinect.

Phillipp Robbel

Early experiments with a Microsoft Kinect depth camera on a mobile robot base

Jasper Brekelmans

Kinect 3D PointCloud Scanner

Thomas Hansen

Segmenting point cloud pixels based on depth ranges
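Depth-range segmentation amounts to bucketing each pixel by its depth value. A minimal NumPy sketch of the idea (not Thomas Hansen's actual code; the band edges here are arbitrary):

```python
import numpy as np

def depth_bands(depth, edges):
    """Assign each depth pixel to a band: band i covers
    edges[i] <= d < edges[i+1]; pixels outside all bands get -1."""
    band = np.full(depth.shape, -1)
    for i, (lo, hi) in enumerate(zip(edges, edges[1:])):
        band[(depth >= lo) & (depth < hi)] = i
    return band

# Synthetic 2x2 depth frame in millimetres
depth = np.array([[500, 1500], [2500, 100]])
print(depth_bands(depth, [400, 1000, 2000, 3000]))
```

Coloring each band differently is then a simple per-pixel lookup, which is what makes the segmented point-cloud renders pop visually.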

Preston Holmes (ptone)

Control of DMX based lighting via Kinect

Controlling GarageBand:

Peter Nash (pedronash)

Playing around with Processing and Java - seeing what's fun. My experience so far is mostly in smart calibration.

Willow Garage

Several projects, including multiple-Kinect integration and teleoperation of robots. Read more at the video:

Henry Chu

Reflection, an art exhibit: interacting with a three-second-delayed shadow of yourself

Parag K Mital

Sound modulating point clouds:


Achim Kern

Quick test using Kinect to control a digital pin-board in TouchDesigner.

John Stowers

Dongchul Kim (T9T9)

Integration in MRPT: Very simple visual SLAM

Part 1: Example code for how to grab Kinect observations, do multi-threading, convert range data to a 3D point cloud and render in real-time. More info here.

Part 2: Very simple demonstration of how to grab Kinect observations, perform visual feature tracking and build a 3D map (SLAM). More info here.
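Converting range data to a 3D point cloud, as in Part 1, is pinhole back-projection: X = (u - cx)·Z/fx, Y = (v - cy)·Z/fy. A NumPy sketch under that model — the intrinsics below are rough placeholders, not MRPT's calibration; calibrate your own device for real use:

```python
import numpy as np

# Rough Kinect depth-camera intrinsics (placeholder values)
FX = FY = 580.0
CX, CY = 320.0, 240.0

def depth_to_points(depth):
    """Back-project a depth image (metres) to an Nx3 point cloud
    using the pinhole model: X=(u-cx)Z/fx, Y=(v-cy)Z/fy, Z=depth."""
    v, u = np.indices(depth.shape)
    z = depth.astype(np.float64)
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    pts = np.dstack((x, y, z)).reshape(-1, 3)
    return pts[pts[:, 2] > 0]        # drop invalid (zero-depth) pixels

depth = np.zeros((480, 640))
depth[240, 320] = 2.0                # one point on the optical axis
print(depth_to_points(depth))
```

Rendering the resulting Nx3 array each frame (OpenGL vertex arrays, or MRPT's own viewers) gives the real-time display described above.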

Kinect running a website

First Kinect compatible website