
FAQ

From OpenKinect


General

What is the current development status of the OpenKinect project?

  • The project, which started in early November 2010, is in a prerelease, early development phase (as of February 2011). Even so, libfreenect already works on Linux, OSX and Windows, and you have many options for installing!

How should I ask for help in the irc channel or on the mailing list?

In order to get an appropriate response to your support questions, please include the following information, if relevant:

  • What platform are you using, what version, and is it a 32 or 64 bit OS? ex. Windows (XP, Win7 64 bit), OSX 10.6.5, Linux (Ubuntu 10.10)
  • What hardware are you using? ex. a desktop or a laptop, which CPU, graphics card, connected USB devices, etc.
  • If compiling from source, where did you download it from, which branch is it, and are you using the latest revision? Is it libfreenect you are talking about?
  • What IDE/compiler are you using and which version? Visual Studio 2010/2008/2005, Code::Blocks, MinGW, etc.
  • What is the exact error message? Please give the full output (using pastebin.com, for instance) and include the link with your question
  • Are you following the instructions on the wiki or some tutorial and if so which one?
  • What is the context in which the error appears: is it when you compile the source code, when you run glview, or when you use some wrapper?

Taking an extra minute to document your issue will go a long way in getting a response rather than a further question!

Can I help translate a specific page on the OpenKinect Wiki to my language?

  • Please read the Language Policy
  • First check the English version of the page and take note of the page name ex. Main_Page
  • Use the wiki search function to search for the page name and append /Language_Code ex. Main_Page/fr to see if the French version of the page exists. See the Languages Template for the language codes. If the page doesn't exist, you will be offered the possibility to create it
  • Create the subpage, translate the contents and add the {{Languages|English_Parent_Page_Name}} tag to the top of that page in order to display the language "bar" - so in our example we would use {{Languages|Main_Page}} on our French subpage
  • On the parent (English) page, add the {{Languages}} tag - so we would add this to the Main_Page code in our example
  • No further configuration is required; languages are displayed according to availability and the aforementioned page naming conventions


Technical

Why are there black shadows in the depth image?

The Kinect's IR projector and IR camera sit a few centimetres apart, so a surface the camera can see but the projector cannot illuminate receives no dot pattern and yields no depth reading; those occluded regions show up as black shadows in the depth image.

Shadow explanation diagram

Can optical polarization (or other tricks) be used to combine two Kinect sensors viewing the same scene from different angles?

Unfortunately, polarization is not generally preserved when reflecting off an object. Special surfaces can be used that preserve polarization, but the concept of tagging each Kinect's IR field by polarizing the light does not appear to be feasible. Initial tests using two simultaneous Kinect sensors suggest that they may not severely interfere with one another.

What are the frame sizes/bitrates of the RGB/depth/IR streams contained in the isochronous USB transfers?

  • The depth camera delivers one frame in 242 packets (including the 0x71 and 0x75 packets). All packets are 1760 bytes except the 0x75 packet, which is 1144 bytes. Minus the 12-byte header on each packet, this gives 422400 bytes of data * 30 frames per second = 12672000 bytes/sec
  • The color camera delivers one frame in 162 packets (including the 0x81 and 0x85 packets). All packets are 1920 bytes (the isochronous packets arrive in two 960-byte chunks, sometimes in reverse order, which sum to one 1920-byte packet) except the 0x85 packet, which is 24 bytes. Minus the headers, this gives 307200 bytes of data * 30 frames per second = 9216000 bytes/sec
  • The depth camera returns values with 11-bits of precision
  • The frame output of the RGB camera is a 640x480 Bayer pattern
  • The frame output of the depth camera is 640x480 (the rightmost 8 columns are always "no data", so you get an effective potential image size of 632x480 in a 640x480 buffer)
  • The frame output for the IR stream is 640x488
  • When the Kinect can't see the IR reflection or has no depth data for a pixel, it returns 2047 as the depth value

See the Protocol Documentation, /include/libfreenect.h and other documents of the Knowledge base for more information.

What is the wavelength of the laser illuminator and what about the dot pattern used?

  • The illuminator uses an 830nm laser diode. There is no modulation - the output level is constant. Output power measured at the illuminator output is around 60mW (using a Coherent LaserCheck). The laser is temperature-stabilised with a small Peltier element mounted between the illuminator and the aluminium mounting plate. See the Hardware Info section and this thread for more info and discussion.
  • The IR emitter projects an irregular pattern of IR dots of varying intensities. See the Imaging Information section and these images of the speckle pattern.

Does libfreenect have any skeleton tracking feature?

  • Skeleton tracking is higher-level than drivers, and libfreenect is basically a low-level driver within OpenKinect. The raw data is made available, and a skeleton-tracking solution that consumes data from libfreenect can be built on top of it. The project Roadmap calls for the focus to shift at some point from the low-level driver and API to higher-level abstractions

Is audio available through libfreenect?

  • Yes, you can stream the raw synchronized microphone data from the four microphones. Take a look at the wavrecord and micview examples.
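A minimal capture loop might look like the sketch below (hedged: it assumes a libfreenect build with audio support, a Kinect attached over USB, and the audio firmware already uploaded; the callback signature follows libfreenect.h):

```c
#include <stdint.h>
#include <stdio.h>
#include "libfreenect.h"

/* Called whenever a block of synchronized samples arrives: 32-bit PCM
 * from each of the four microphones, plus an echo-cancelled channel. */
static void audio_in(freenect_device *dev, int num_samples,
                     int32_t *mic1, int32_t *mic2,
                     int32_t *mic3, int32_t *mic4,
                     int16_t *cancelled, void *unknown)
{
    printf("got %d samples per mic\n", num_samples);
}

int main(void)
{
    freenect_context *ctx;
    freenect_device *dev;

    if (freenect_init(&ctx, NULL) < 0)
        return 1;
    freenect_select_subdevices(ctx, FREENECT_DEVICE_AUDIO);
    if (freenect_open_device(ctx, &dev, 0) < 0)
        return 1;

    freenect_set_audio_in_callback(dev, audio_in);
    freenect_start_audio(dev);
    while (freenect_process_events(ctx) >= 0)
        ;   /* stream until an error occurs or the program is killed */

    freenect_stop_audio(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}
```

The wavrecord example shows the same flow with the samples written out to WAV files.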

Is it possible to use the Kinect as a standard webcam?

  • If you're using Linux >= 3.0, there's an in-kernel driver for the RGB camera already.
  • If you're using Linux < 3.0, you could compile and use the gspca_kinect driver
  • No equivalent driver seems to be available for Windows users at the moment

Is it possible to capture the RGB and the IR data at the same time?

No, this is not possible with the current firmware. The RGB and IR data are only available as different settings of the same isochronous stream, so you can have one or the other at a given time, but not both. You can, however, stream the RGB and depth data at the same time - the depth data has its own isochronous stream.
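A sketch of streaming depth and RGB simultaneously with the libfreenect C API (hedged: this mirrors the structure of the glview example rather than reproducing it, and assumes a Kinect is attached):

```c
#include <stdint.h>
#include <stdio.h>
#include "libfreenect.h"

static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
{
    /* depth points at 640x480 11-bit values; 2047 means "no data" */
    printf("depth frame at %u\n", timestamp);
}

static void video_cb(freenect_device *dev, void *rgb, uint32_t timestamp)
{
    printf("rgb frame at %u\n", timestamp);
}

int main(void)
{
    freenect_context *ctx;
    freenect_device *dev;

    if (freenect_init(&ctx, NULL) < 0 || freenect_open_device(ctx, &dev, 0) < 0) {
        fprintf(stderr, "no Kinect found\n");
        return 1;
    }

    freenect_set_depth_callback(dev, depth_cb);
    freenect_set_video_callback(dev, video_cb);
    freenect_set_depth_format(dev, FREENECT_DEPTH_11BIT);
    /* RGB and IR share one stream: pick FREENECT_VIDEO_RGB *or*
     * FREENECT_VIDEO_IR_8BIT here, never both at once. */
    freenect_set_video_format(dev, FREENECT_VIDEO_RGB);

    /* depth rides on its own isochronous stream, so both can run */
    freenect_start_depth(dev);
    freenect_start_video(dev);
    while (freenect_process_events(ctx) >= 0)
        ;
    freenect_stop_video(dev);
    freenect_stop_depth(dev);
    freenect_close_device(dev);
    freenect_shutdown(ctx);
    return 0;
}
```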

Issues

I'm trying to troubleshoot a stability or performance issue, do you have any advice?

Depending on the platform and hardware, many factors may come into play. But in general, one should look into these:

  • Is libfreenect properly compiled and installed from the latest source?
  • USB bandwidth issues: are there too many devices sharing the USB port? Has the Kinect been tried on a different port? Is it connected directly or through a USB hub? Is there data loss on the USB cable? Is the full USB 2.0 bandwidth enabled on the computer? On a laptop, adding an external USB mouse to the setup can reveal whether bandwidth contention is the problem
  • Graphics card issues: some graphics drivers exhibit quirky behavior; trying a different driver (or card) may help

Tip: in glview.c or any other plain C source file, try freenect_set_log_level(f_ctx, FREENECT_LOG_SPEW); (as opposed to FREENECT_LOG_DEBUG) to enable more verbose output for debugging purposes.

It's worth noting that if you see error messages about packets being dropped from time to time, but the application performs acceptably, this is (probably) fine. Isochronous transfers were designed to provide realtime data, rather than be completely immune to noise or packet loss, so a little dropping is par for the course.

Under Mac OSX I get an "Isochronous transfer error: 1" warning with packet loss, and the stream eventually stops?

First make sure you've looked at the general troubleshooting advice. This may also be related to an isochronous transfer frame scheduling problem; a partial solution for restarting isochronous transfers that die may be documented here (see in particular post #6).

I have an ATI card and I'm getting a white screen instead of the depth camera image in glview with Linux or OSX?

This was reported with some ATI Radeon drivers (the x1400 and x1800 cards/chips for instance) that do not support non-power-of-two textures. See this thread for more information.

I have an ATI card and I'm getting a white screen instead of the RGB image and only 2 or 3 colors for the depth image with Linux or OSX?

This was reported with some ATI Radeon drivers (R600 cards, for instance) and relates to the glColor4f parameters in glview.c. Try this fix.

I get a 'Failed to submit isochronous transfer 0: -1' error?

This may happen with old(er) computers using USB 1.1, which has nowhere near enough bandwidth to allocate for and process the 20 MB/sec+ combined stream from the depth and RGB cameras. See also Performance issues and this thread.

I get an error that the module freenect doesn't have a 'sync_get_rgb' attribute?

The 'RGB' functions have been obsoleted in favor of 'video', so you may, for instance, have an out-of-date freenect_sync.so installed globally that takes precedence over the up-to-date one in your git build directory. In any case, the code should call 'sync_get_video' to reflect the changes to the API from December 2010 onward.

With Ubuntu 64bit I get the error "No rule to make target `/usr/lib/libGL.so', needed by `bin/glpclview" yet libGL.so.x.x.x is already installed?

Have you recently upgraded to a newer Ubuntu release? There have been reports that the symlink for /usr/lib/libGL.so might have been lost in the process. See this simple solution.
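If the symlink is indeed missing, recreating it by hand usually fixes the build. A hedged sketch - the versioned library name and path below are assumptions, so check what is actually installed on your system first:

```shell
# Check whether the development symlink exists
ls -l /usr/lib/libGL.so

# List the versioned libraries that are actually installed
ls /usr/lib/libGL.so.*

# Recreate the symlink, adjusting the target to match the listing above
sudo ln -s /usr/lib/libGL.so.1 /usr/lib/libGL.so
```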

When I compile libfreenect in Windows with Visual Studio I get a 'LNK1104: cannot open file '..\lib\Debug\freenect_sync.lib' ' error?

  • This error doesn't necessarily mean that the solution didn't compile. Take a look at your /bin and /lib folders (in the output folder you specified in cmake) to see if anything was built (usually this error only relates to tiltdemo and glpclview), and take note of the Visual Studio output indicating how many items compiled and how many failed. Right-click the solution or ALL_BUILD and select "Build" again to see if the items that didn't compile end up compiling.
  • Advanced users may look into bypassing the cmake configuration altogether and linking objects through the "Additional Library Dependencies" within the examples' projects instead of using "References" in Visual Studio.

I get the following error accessing microphones: upload_firmware: failed to find firmware file.

The Kinect audio subsystem requires firmware to be uploaded at runtime. The firmware file should be found in your installation (in share/libfreenect/audios.bin) but may be absent. Run share/libfreenect/fwfetcher.py to download the latest audio firmware and move the resulting audios.bin into your installation.
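Assuming an install prefix of /usr/local (adjust to your own), the recovery steps look roughly like this - the exact output location of fwfetcher.py may differ:

```shell
# fetch the audio firmware; fwfetcher.py downloads it and writes audios.bin
cd /tmp
python /usr/local/share/libfreenect/fwfetcher.py

# put the file where libfreenect expects to find it
sudo mv audios.bin /usr/local/share/libfreenect/audios.bin
```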