Wow, it looks like quite a few people are working on Kinect nodes; we should probably combine efforts!  We’ve also been working on a Kinect node for ROS, based on Hector Martin’s driver, available at:
git://github.com/atrevor/kinect_node.git

Short youtube rviz screencap:
http://www.youtube.com/watch?v=IxRIL1izvDs

We assume that you have libfreenect from Hector Martin’s repo in the same parent directory (../libfreenect w.r.t. kinect_node):
git://git.marcansoft.com/libfreenect.git

So far, it publishes the camera image, as well as a PointCloud2 of PointXYZRGBs.  The RGB camera’s image is projected onto the range data, resulting in a color point cloud.  We calibrated the RGB camera, but haven’t yet calibrated the range camera, since we can’t just use our normal checkerboard calibration target for this :).  As Ivan and Stefan noted, the ranges we get are a little odd -- they definitely don't seem to be linear.  We calibrated our sensor so the range is approximately correct at 2m, but I agree that we really need to know how these values work to make much progress.  We'll shortly do some testing with targets at various ranges to try to address this.  Any input would be greatly appreciated!
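For the target tests mentioned above, here is a minimal sketch of how measurements at known distances could be fit. It assumes (to be verified experimentally) that the raw value behaves like a disparity, so 1/depth is roughly linear in the raw reading; the function names and sample values are hypothetical, not the node's actual code:

```python
# Hypothetical sketch: fit a mapping from raw Kinect depth readings to
# metric range, using a target measured at several known distances.
# Working assumption: the raw value acts like a disparity, so
# 1/depth = a*raw + b for some constants a, b.

def fit_depth_model(raw_values, measured_m):
    """Least-squares fit of 1/depth = a*raw + b; returns (a, b)."""
    xs = [float(r) for r in raw_values]
    ys = [1.0 / float(d) for d in measured_m]  # inverse depths
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

def raw_to_meters(raw, a, b):
    """Convert a raw reading to meters under the fitted model."""
    return 1.0 / (a * raw + b)
```

If the inverse-linear assumption is wrong, the same fitting skeleton works with a different model (e.g. fitting depth directly, or a higher-order term); the point is just to turn the tape-measure samples into a usable conversion.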

On Thu, Nov 11, 2010 at 5:41 PM, Stefan Kohlbrecher <stefan.kohlbrecher@googlemail.com> wrote:
> 1. Is the depth image really 640x480, or is that oversampled? The
> wikipedia page states the depth sensor has a resolution of 320×240
> pixels. If it's oversampled, where does that take place -- in the
> driver, or the device itself? I'd prefer not to inflate the point
> cloud with oversampled data.
I think the device itself reports the data at this size. If you look
at the picture I posted in the second post, you can also see that
there are, for example, one-pixel-sized holes in the 640x480 depth
image, which would not exist if a simple interpolation scheme were
used to blow a 320x240 image up to 640x480.
From what I read beforehand, the original Project Natal was supposed
to be 640x480, then Microsoft reportedly "downgraded" it to 320x240
for cost reasons (see
http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=kinect+downgraded+to+320x240).
Now the sensor appears to deliver 640x480 again, which might or might
not just be 320x240 blown up on the onboard ASIC.
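The hole argument above can be turned into a direct check: count invalid readings whose four neighbors are all valid. A simple nearest-neighbor 2x upsampling of a 320x240 frame would turn every hole into at least a 2x2 block, so isolated one-pixel holes suggest native 640x480 data. The frame layout (list of rows, 0 = invalid) is an assumption for illustration:

```python
# Count one-pixel holes in a depth frame: invalid pixels (value 0 here)
# whose 4-neighbors are all valid. Naive 2x nearest-neighbor upsampling
# cannot produce these, so a nonzero count argues for native resolution.

def count_isolated_holes(depth, invalid=0):
    h, w = len(depth), len(depth[0])
    holes = 0
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if depth[r][c] == invalid and all(
                depth[nr][nc] != invalid
                for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
            ):
                holes += 1
    return holes
```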

> 2. What is the relationship between the values in the depth_frame and
> the distance in meters? It doesn't appear to be linear.
That's really the interesting question, along with others like how to
calibrate the visual and depth images against each other to get real
RGB-D data. With the current state of affairs one can generate some
impressive-looking images, but to leverage the full potential of the
sensor these calibration questions really have to be solved.
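On the visual/depth registration side, the core step is just a pinhole projection: take a 3D point from the depth camera, transform it into the RGB camera frame, and project it to look up its color. A hedged sketch, where the intrinsics (fx, fy, cx, cy) and the depth-to-RGB rigid transform (R, t) are placeholders that would have to come from a real calibration:

```python
# Project a point from the depth camera frame into the RGB image.
# All parameter values are hypothetical; a real calibration must supply
# the RGB intrinsics and the depth-to-RGB extrinsic transform.

def project_to_rgb(p, fx, fy, cx, cy, R, t):
    """Project depth-frame point p=(x,y,z) to RGB pixel coords (u,v)."""
    # Rigid transform into the RGB camera frame: p' = R*p + t
    x = sum(R[0][i] * p[i] for i in range(3)) + t[0]
    y = sum(R[1][i] * p[i] for i in range(3)) + t[1]
    z = sum(R[2][i] * p[i] for i in range(3)) + t[2]
    if z <= 0:
        return None  # point is behind the RGB camera
    # Pinhole projection
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v
```

With this, building a color point cloud is a loop over depth pixels: back-project each one to 3D, call project_to_rgb, and sample the color image at (u, v) if it lands inside the frame.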

> 3. I read somewhere the device's range can be set dynamically. I'm
> guessing one of the inits in inits.c could be responsible for the
> range.
That's more stuff that will probably be discovered in the coming
days/weeks. It's still very impressive how well the sensor works right
out of the box.

_______________________________________________
ros-users mailing list
ros-users@code.ros.org
https://code.ros.org/mailman/listinfo/ros-users