Imaging Information

RGB Camera

The RGB camera has a slightly larger angle of view than the depth camera. For computer vision applications, it can be calibrated using standard techniques, e.g. from OpenCV.

Depth Camera

Lots of information on calibrating the depth camera is available on the ROS kinect_node page.

From their data, a basic first-order approximation for converting the raw 11-bit disparity value to a depth value in centimeters is: 100 / (-0.00307 * rawDisparity + 3.33). This approximation is off by roughly 10 cm at 4 m away, and by less than 2 cm within 2.5 m. A denser data set and a second- or third-order fit could perhaps improve the accuracy by an order of magnitude.
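
As an illustration, here is a minimal C sketch of this conversion; the function name and sample values are ours, not part of any Kinect API:

    #include <stdio.h>

    /* First-order approximation from above: depth in cm from an 11-bit
     * raw disparity. The divisor reaches zero near rawDisparity ~ 1084,
     * so values near that (and the 2047 "no reading" code) should be
     * rejected before calling this. */
    static double raw_disparity_to_cm(int raw_disparity)
    {
        return 100.0 / (-0.00307 * raw_disparity + 3.33);
    }

    int main(void)
    {
        /* Sample disparities purely for illustration. */
        int samples[] = { 400, 600, 800, 1000 };
        for (int i = 0; i < 4; i++)
            printf("raw %4d -> %.1f cm\n", samples[i],
                   raw_disparity_to_cm(samples[i]));
        return 0;
    }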

Once you have the depth from the approximation above, a good approximation for converting a depth pixel (i, j) with depth z to world coordinates (x, y, z) is:

    x = (i - w / 2) * (z + minDistance) * scaleFactor
    y = (j - h / 2) * (z + minDistance) * scaleFactor
    z = z

where w and h are the width and height of the depth image, minDistance = -10, and scaleFactor = 0.0021. These values were found by hand.
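
A small C sketch of this projection (the helper name is hypothetical; 640x480 is the Kinect depth image size):

    #include <stdio.h>

    /* Hypothetical helper (our naming): converts a depth pixel (i, j)
     * plus its depth z in cm into world-space x and y in cm, using the
     * hand-tuned constants from the text. w and h are the depth image
     * dimensions. */
    static void depth_pixel_to_world(int i, int j, double z, int w, int h,
                                     double *x, double *y)
    {
        const double min_distance = -10.0;   /* found by hand (see text) */
        const double scale_factor = 0.0021;  /* found by hand (see text) */
        *x = (i - w / 2.0) * (z + min_distance) * scale_factor;
        *y = (j - h / 2.0) * (z + min_distance) * scale_factor;
        /* z is passed through unchanged */
    }

    int main(void)
    {
        double x, y;
        depth_pixel_to_world(400, 300, 150.0, 640, 480, &x, &y);
        printf("pixel (400,300) at 150 cm -> x=%.2f cm, y=%.2f cm\n", x, y);
        return 0;
    }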

To convert the 11-bit disparity value to an 8-bit grayscale value that is fairly linear with respect to distance: (2048 * 256) / (2048 - rawDisparity). Also, background noise can be effectively eliminated by ignoring rawDisparity values above 1023.
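
A C sketch of this conversion follows. Note that the raw expression evaluates to roughly 256..511 for disparities 0..1023, so we assume the low 8 bits (equivalently, subtracting 256) give the intended grayscale value; the function name and that interpretation are ours:

    #include <stdio.h>

    static unsigned char disparity_to_gray(int raw_disparity)
    {
        /* Treat values above 1023 as background and map them to 0 (see text). */
        if (raw_disparity > 1023)
            return 0;
        int v = (2048 * 256) / (2048 - raw_disparity);
        /* v spans roughly [256, 511] for raw values 0..1023; we assume
         * v - 256 is the intended grayscale value, since that maps the
         * range neatly onto 0..255. */
        return (unsigned char)(v - 256);
    }

    int main(void)
    {
        for (int raw = 0; raw <= 1100; raw += 100)
            printf("raw %4d -> gray %3d\n", raw, disparity_to_gray(raw));
        return 0;
    }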

Color/Depth Mapping

To enable accurate mapping between depth pixels (voxels, http://en.wikipedia.org/wiki/Voxel) and color pixels, and thus obtain colored point clouds, the intrinsic parameters of both the depth and color cameras are required (focal distances, distortion coefficients, and image center), and their relative position and orientation in the world coordinate frame also need to be estimated. A preliminary attempt to extract all these parameters in a semi-automatic way is described here [1].
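
As a sketch of how those parameters would be used once estimated, the following C code back-projects a depth pixel to a 3D point, transforms it with the relative pose (R, t), and projects it into the color image. All names and numeric values here are placeholders, not calibration results from [1]; lens distortion is omitted for brevity:

    #include <stdio.h>

    typedef struct { double fx, fy, cx, cy; } Intrinsics;

    /* Maps a depth pixel (i, j) with depth z_m (meters) to color-image
     * coordinates (u, v). R (row-major 3x3) and t take depth-camera
     * coordinates into color-camera coordinates. */
    static void depth_to_color_pixel(int i, int j, double z_m,
                                     const Intrinsics *depth_cam,
                                     const Intrinsics *color_cam,
                                     const double R[9], const double t[3],
                                     double *u, double *v)
    {
        /* Back-project the depth pixel to a 3D point in the depth frame. */
        double X = (i - depth_cam->cx) * z_m / depth_cam->fx;
        double Y = (j - depth_cam->cy) * z_m / depth_cam->fy;
        double Z = z_m;

        /* Rigid transform into the color camera's frame. */
        double Xc = R[0]*X + R[1]*Y + R[2]*Z + t[0];
        double Yc = R[3]*X + R[4]*Y + R[5]*Z + t[1];
        double Zc = R[6]*X + R[7]*Y + R[8]*Z + t[2];

        /* Pinhole projection into the color image. */
        *u = color_cam->fx * Xc / Zc + color_cam->cx;
        *v = color_cam->fy * Yc / Zc + color_cam->cy;
    }

    int main(void)
    {
        /* Placeholder parameters purely for illustration; real values
         * must come from a calibration such as [1]. */
        Intrinsics depth_cam = { 580, 580, 320, 240 };
        Intrinsics color_cam = { 525, 525, 320, 240 };
        double R[9] = { 1,0,0, 0,1,0, 0,0,1 };  /* identity rotation */
        double t[3] = { 0.025, 0, 0 };          /* ~2.5 cm baseline  */
        double u, v;
        depth_to_color_pixel(400, 300, 1.5, &depth_cam, &color_cam,
                             R, t, &u, &v);
        printf("depth (400,300) at 1.5 m -> color (%.1f, %.1f)\n", u, v);
        return 0;
    }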