FAQ
General
What is the current development status of the OpenKinect project?
- The project, which started in early November 2010, is still in an early, pre-release phase of sensor development (as of February 2011). Nevertheless, libfreenect already works on Linux, OS X and Windows, and you have many options for installing it!
How should I ask for help in the IRC channel or on the mailing list?
In order to get an appropriate response to your support questions, please include the following information, if relevant:
- What platform are you using, what version, and is it a 32 or 64 bit OS? ex. Windows (XP, Win7 64 bit), OSX 10.6.5, Linux (Ubuntu 10.10)
- What hardware are you using? ex. a desktop, a laptop, what cpu, graphics card, connected usb devices etc.
- If compiling from source, where did you download it from, which branch is it, and are you using the latest version? Are you talking about libfreenect?
- What IDE/compiler are you using and which version? Visual Studio 2010/2008/2005, Code::Blocks, MinGW, etc.
- What is the exact error message? Please give the full output, using pastebin.com for instance and include the link with your question
- Are you following the instructions on the wiki or some tutorial and if so which one?
- What is the context in which the error appears: is it when you compile the source code, when you run glview, or when you use some wrapper?
Taking an extra minute to document your issue will go a long way in getting a response rather than a further question!
Can I help translate a specific page on the OpenKinect Wiki to my language?
- Please read the Language Policy
- First check the English version of the page and take note of the page name ex. Main_Page
- Use the wiki search function to search for the page name and append /Language_Code ex. Main_Page/fr to see if the French version of the page exists. See the Languages Template for the language codes. If the page doesn't exist, you will be offered the possibility to create it
- Create the subpage, translate the contents and add the {{Languages|English_Parent_Page_Name}} tag to the top of that page in order to display the language "bar" - so in our example we would use {{Languages|Main_Page}} on our French subpage
- On the parent (English) page, add the {{Languages}} tag - so we would add this to the Main_Page code in our example
- No further configuration is required; languages are displayed according to availability and the aforementioned page naming conventions
Technical
Why are there black shadows in the depth image?
Can optical polarization (or other tricks) be used to combine two Kinect sensors viewing the same scene from different angles?
Unfortunately, polarization is not generally preserved when reflecting off an object. Special surfaces can be used that preserve polarization, but the concept of tagging each Kinect's IR field by polarizing the light does not appear to be feasible. Initial tests using two simultaneous Kinect sensors suggest that they may not severely interfere with one another.
What is the frame size/bitrate of the RGB/depth/IR stream contained in the isochronous USB transfers?
- There are 242 packets per frame for the depth camera (including the 0x71 and 0x75 packets). All packets are 1760 bytes except the 0x75 packet, which is 1144 bytes. After removing headers this gives 422400 bytes of data * 30 frames per second = 12672000 bytes/sec
- There are 162 packets per frame for the color camera (including the 0x81 and 0x85 packets). All packets are 1920 bytes (the isochronous packets arrive in two 960-byte chunks, sometimes in reverse order, which add up to one 1920-byte packet) except the 0x85 packet, which is 24 bytes. After removing headers this gives 307200 bytes of data * 30 frames per second = 9216000 bytes/sec
- The depth camera returns values with 11-bits of precision
- The frame output of the RGB camera is a 640x480 Bayer pattern
- The frame output of the depth camera is 640x480 (the rightmost 8 columns are always "no data", so you get an effective potential image size of 632x480 in a 640x480 buffer)
- The frame output for the IR stream is 640x488
- When the Kinect can't see the IR reflection or has no depth data for a pixel, it returns 2047 for the depth value (see the sketch after this answer)
See the Protocol Documentation, /include/libfreenect.h and other documents of the Knowledge base for more information.
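To make the numbers above concrete, here is a minimal sketch (not from the libfreenect sources) of walking one depth frame after libfreenect has unpacked the 11-bit stream into one uint16_t per pixel, as the FREENECT_DEPTH_11BIT format does: 640*480*11/8 = 422400 bytes is exactly one packed frame, the rightmost 8 columns are skipped, and the reserved value 2047 is treated as "no data". The helper name is hypothetical; check /include/libfreenect.h for the format your version actually delivers.

    /* Hypothetical helper (not part of libfreenect): walk one depth frame.
     * Assumes the depth callback delivered 640x480 uint16_t values in the
     * FREENECT_DEPTH_11BIT format, i.e. one unpacked value per pixel in the
     * range 0..2047. */
    #include <stdint.h>
    #include <stdio.h>

    #define DEPTH_W       640
    #define DEPTH_H       480
    #define DEPTH_VALID_W 632   /* the rightmost 8 columns carry no data   */
    #define DEPTH_NO_DATA 2047  /* returned when no IR reflection is seen  */

    static void summarize_depth_frame(const uint16_t *depth)
    {
        unsigned long valid = 0, missing = 0, sum = 0;

        for (int y = 0; y < DEPTH_H; y++) {
            for (int x = 0; x < DEPTH_VALID_W; x++) {
                uint16_t d = depth[y * DEPTH_W + x];
                if (d == DEPTH_NO_DATA) {
                    missing++;      /* shadowed or out-of-range pixel */
                } else {
                    valid++;
                    sum += d;
                }
            }
        }

        printf("valid: %lu  missing: %lu  mean raw depth: %lu\n",
               valid, missing, valid ? sum / valid : 0UL);
    }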
What is the wavelength of the laser illuminator and what kind of dot pattern is used?
- The illuminator uses an 830nm laser diode. There is no modulation - the output level is constant. Output power measured at the illuminator output is around 60mW (using a Coherent LaserCheck). The laser is temperature stabilised with a small Peltier element mounted between the illuminator and the aluminium mounting plate. See the Hardware Info section and this thread for more info and discussion.
- The IR emitter projects an irregular pattern of IR dots of varying intensities. See the Imaging Information section and these images of the speckle pattern.
Does libfreenect have any skeleton tracking feature?
- Skeleton tracking is higher-level than drivers, and libfreenect is basically a low-level driver within OpenKinect. The raw data is made available, so a skeleton-tracking solution that takes its data from libfreenect can be built on top. The project Roadmap calls for further development, as the focus should shift at some point from the low-level driver and API to higher-level abstractions
Is audio available through libfreenect?
- More USB data analysis is required to access the audio interface. See this section and this discussion
- At the moment, no driver provides sound for the Kinect
Is it possible to use the Kinect as a standard webcam?
- If you're using Linux, you can compile and use the gspca_kinect driver (a minimal capture check is sketched below)
- It seems no alternative is available for Windows users at the moment
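For illustration, the following hedged C sketch checks that the gspca_kinect driver really does expose the Kinect as a standard V4L2 webcam. It assumes the RGB camera shows up as a /dev/video* node (the exact node name depends on your system) and only queries the device capabilities; any ordinary V4L2 capture code should work from there.

    /* Sketch: once the gspca_kinect module is loaded, the Kinect's RGB camera
     * should appear as an ordinary V4L2 device.  The node name (/dev/video0
     * here) is an assumption; this only queries the capabilities to confirm
     * the kernel sees it as a standard webcam. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/videodev2.h>

    int main(void)
    {
        const char *node = "/dev/video0";   /* adjust to your device node */
        int fd = open(node, O_RDWR);
        if (fd < 0) {
            perror("open");
            return 1;
        }

        struct v4l2_capability cap;
        memset(&cap, 0, sizeof(cap));
        if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
            perror("VIDIOC_QUERYCAP");
            close(fd);
            return 1;
        }

        printf("driver: %s  card: %s\n",
               (const char *)cap.driver, (const char *)cap.card);
        close(fd);
        return 0;
    }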
Is it possible to capture the RGB and the IR data at the same time?
No, this is not possible with the current firmware. The RGB and IR data are only available as different settings for the same isochronous stream, so you can have one or the other at a given time, but not both. You can, however, stream the RGB and depth data at the same time - the depth data has its own isochronous stream (see the sketch below).
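As an illustration of the depth + RGB case, here is a minimal sketch based on the libfreenect C API as used by glview.c in early 2011; later releases renamed the format setters (e.g. freenect_set_video_mode), so adjust to the header you actually have.

    /* Minimal sketch of streaming depth and RGB at the same time. */
    #include "libfreenect.h"
    #include <stdint.h>
    #include <stdio.h>

    static void depth_cb(freenect_device *dev, void *depth, uint32_t timestamp)
    {
        printf("depth frame at %u\n", timestamp);
    }

    static void video_cb(freenect_device *dev, void *rgb, uint32_t timestamp)
    {
        printf("rgb frame at %u\n", timestamp);
    }

    int main(void)
    {
        freenect_context *ctx;
        freenect_device *dev;

        if (freenect_init(&ctx, NULL) < 0)
            return 1;
        if (freenect_open_device(ctx, &dev, 0) < 0)
            return 1;

        freenect_set_depth_format(dev, FREENECT_DEPTH_11BIT);
        freenect_set_video_format(dev, FREENECT_VIDEO_RGB);  /* RGB *or* IR, never both */
        freenect_set_depth_callback(dev, depth_cb);
        freenect_set_video_callback(dev, video_cb);

        /* Depth and video use separate isochronous streams, so both can run
         * at once; switching the video stream to IR would reuse the same
         * stream, which is why RGB + IR together is not possible. */
        freenect_start_depth(dev);
        freenect_start_video(dev);

        while (freenect_process_events(ctx) >= 0)
            ;   /* pump USB events until an error occurs (Ctrl-C to quit) */

        freenect_stop_depth(dev);
        freenect_stop_video(dev);
        freenect_close_device(dev);
        freenect_shutdown(ctx);
        return 0;
    }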
Issues
I'm trying to troubleshoot a stability or performance issue, do you have any advice?
Depending on the platform and hardware, many factors may come into play. But in general, one should look into these:
- is libfreenect properly compiled and installed from the latest source
- USB bandwidth issues: are too many devices sharing the USB port? Has the Kinect been tried on a different port, and is it connected directly or through a USB hub? Is there data loss on the USB cable? Is the full USB 2.0 bandwidth enabled on the computer? On a laptop, you might add an external USB mouse to the setup to see if that changes anything
- graphics card issues: some graphics card drivers exhibit particular quirks; trying a different card or driver may help
Tip: in glview.c or another plain C project, try freenect_set_log_level(f_ctx, FREENECT_LOG_SPEW); (as opposed to FREENECT_LOG_DEBUG) to enable more verbose output for debugging purposes, as shown in the sketch below.
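A short sketch of that tip, assuming the standard libfreenect context setup; routing the messages through a log callback is optional, since by default they go to stderr.

    /* Sketch: turn on the most verbose libfreenect logging for troubleshooting. */
    #include "libfreenect.h"
    #include <stdio.h>

    static void log_cb(freenect_context *ctx, freenect_loglevel level, const char *msg)
    {
        fprintf(stderr, "[freenect %d] %s", level, msg);
    }

    int main(void)
    {
        freenect_context *f_ctx;

        if (freenect_init(&f_ctx, NULL) < 0)
            return 1;

        freenect_set_log_level(f_ctx, FREENECT_LOG_SPEW);  /* most verbose level */
        freenect_set_log_callback(f_ctx, log_cb);

        /* ... open the device and stream as usual; every USB transfer detail is
         * now reported, which helps pinpoint bandwidth or packet-loss problems. */

        freenect_shutdown(f_ctx);
        return 0;
    }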
Under Mac OS X I get an "Isochronous transfer error: 1" warning with packet loss and the stream eventually stops?
First make sure you've looked at the general troubleshooting advice. This may also be related to an isochronous transfer frame scheduling problem, and there may be a partial solution documented here (see in particular post #6) to restart isochronous transfers that die...
I have an ATI card and I'm getting a white screen instead of the depth camera image in glview under Linux or OS X?
This was reported with some ATI Radeon drivers (the x1400 and x1800 cards/chips for instance) that do not support non-power-of-two textures. See this thread for more information.
I have an ATI card and I'm getting a white screen instead of the RGB image and only 2 or 3 colors for the depth image under Linux or OS X?
This was reported with some ATI Radeon drivers (R600 cards for instance) and relates to the glColor4f call in glview.c. Try this fix.
I get a 'Failed to submit isochronous transfer 0: -1' error?
This may happen with old(er) computers using USB 1.1, which has nowhere near enough bandwidth (12 Mbit/s, roughly 1.5 MB/sec) for the 20 MB/sec+ combined stream from the depth and RGB cameras. See also Performance issues and this thread.
I get an error that the module freenect doesn't have a 'sync_get_rgb' attribute?
The 'RGB' functions have been obsoleted in favor of 'video', so you may for instance have an out-of-date freenect_sync.so installed globally that takes precedence over the up-to-date one in your git build directory. In any case, the code should refer to 'sync_get_video' to reflect the changes made to the API from December 2010 onward (see the sketch below).
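Whether you hit this from the Python wrapper or from the C sync library, the fix is the same rename. Here is a minimal C sketch using the post-rename sync API; the format constants follow the early-2011 headers and may differ in later releases.

    /* Sketch: grab one video and one depth frame through the sync wrapper. */
    #include "libfreenect_sync.h"
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        void *video = NULL, *depth = NULL;
        uint32_t ts_video = 0, ts_depth = 0;

        if (freenect_sync_get_video(&video, &ts_video, 0, FREENECT_VIDEO_RGB) < 0) {
            fprintf(stderr, "sync_get_video failed - is a Kinect attached?\n");
            return 1;
        }
        if (freenect_sync_get_depth(&depth, &ts_depth, 0, FREENECT_DEPTH_11BIT) < 0) {
            fprintf(stderr, "sync_get_depth failed\n");
            return 1;
        }

        printf("video frame at %u, depth frame at %u\n", ts_video, ts_depth);
        freenect_sync_stop();   /* shut down the background capture thread */
        return 0;
    }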
With Ubuntu 64-bit I get the error "No rule to make target `/usr/lib/libGL.so', needed by `bin/glpclview" yet libGL.so.x.x.x is already installed?
Have you recently upgraded to a newer Ubuntu release? There have been reports that the symlink for /usr/lib/libGL.so might have been lost in the process. See this simple solution.