[ros-users] Stereo image to 3D

Matei Ciocarlie matei at willowgarage.com
Mon Feb 7 16:44:02 UTC 2011


Hi Dejan,

Not sure if this will help, but I can point you to some code that
computes the 3D point cloud starting from a DisparityImage and an
associated CameraInfo. The DisparityImage should have the pixels in
the same order as the left monocular image from the stereo pair.

The code snippets are below; let me know if they don't work for you or
if I forgot to include some function.

Matei
...
  raw_points_.clear();
  raw_points_.reserve( current_image_.width * current_image_.height);
  for (unsigned int i=0; i<current_image_.height; i++)
  {
    for (unsigned int j=0; j<current_image_.width; j++)
    {
      unsigned int adr = i * current_image_.width + j;
      float r, g, b, x, y, z;
      r = g = b = (float) current_image_.data[adr] / 255.0;

      float disparity;
      if (hasDisparityValue(current_disparity_image_, i, j))
      {
        getPoint(current_disparity_image_.image, i, j, disparity);
        ROS_ASSERT(current_disparity_image_.max_disparity != 0.0);
      }
      else
      {
        //this is for points in the image that we have no depth for;
        //you can also skip these points altogether if that's what you prefer

        //min disparity is zero these days so can not be used
        //using max disparity causes depth data to be in front of the gripper
        //disparity = current_disparity_image_.max_disparity;
        //disparity 1.0 should be usable, and project to far far away
        disparity = 1.0;
      }
      ROS_ASSERT(disparity != 0.0);
      //note that we pass shift_correction = true
      projectTo3d(j, i, disparity, current_disparity_image_,
                  current_camera_info_, x, y, z, true);
      ogre_tools::PointCloud::Point point;
      point.x = x; point.y = y; point.z = z;
      point.setColor(r,g,b);
      raw_points_.push_back(point);
    }
  }
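
By the way, the loop above reads class members (current_image_,
current_disparity_image_, current_camera_info_) that I haven't shown
being filled in. Here is a rough sketch of one way to do it,
synchronizing the three topics with message_filters; the node name,
topic names and the StereoProcessor class are just placeholders:

#include <ros/ros.h>
#include <boost/bind.hpp>
#include <message_filters/subscriber.h>
#include <message_filters/time_synchronizer.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/CameraInfo.h>
#include <stereo_msgs/DisparityImage.h>

//placeholder class holding the members used in the loop above
class StereoProcessor
{
public:
  sensor_msgs::Image current_image_;
  stereo_msgs::DisparityImage current_disparity_image_;
  sensor_msgs::CameraInfo current_camera_info_;

  void callback(const sensor_msgs::ImageConstPtr &image,
                const stereo_msgs::DisparityImageConstPtr &disparity,
                const sensor_msgs::CameraInfoConstPtr &info)
  {
    current_image_ = *image;
    current_disparity_image_ = *disparity;
    current_camera_info_ = *info;
    //... run the point cloud loop above ...
  }
};

int main(int argc, char **argv)
{
  ros::init(argc, argv, "stereo_to_cloud");
  ros::NodeHandle nh;
  StereoProcessor proc;

  //topic names are placeholders; adjust to your stereo pipeline
  message_filters::Subscriber<sensor_msgs::Image>
      image_sub(nh, "narrow_stereo/left/image_rect", 1);
  message_filters::Subscriber<stereo_msgs::DisparityImage>
      disparity_sub(nh, "narrow_stereo/disparity", 1);
  message_filters::Subscriber<sensor_msgs::CameraInfo>
      info_sub(nh, "narrow_stereo/left/camera_info", 1);

  //exact-time sync should work here, since all three messages
  //come from the same capture and share timestamps
  message_filters::TimeSynchronizer<sensor_msgs::Image,
      stereo_msgs::DisparityImage, sensor_msgs::CameraInfo>
      sync(image_sub, disparity_sub, info_sub, 10);
  sync.registerCallback(boost::bind(&StereoProcessor::callback,
                                    &proc, _1, _2, _3));

  ros::spin();
  return 0;
}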

//! Get the 3d coordinates of a point, given its image coordinates and its
//! disparity value, as well as the images themselves, which contain
//! additional calibration information
inline void projectTo3d(float u, float v, float disparity,
                        const stereo_msgs::DisparityImage &disparity_image,
                        const sensor_msgs::CameraInfo &camera_info,
                        float &x, float &y, float &z,
                        bool shift_correction = false)
{
  if (shift_correction)
  {
      //account for the fact that the center point of the image is not the
      //principal point. Not sure if this is really correct or more of a
      //hack, but without this the image ends up shifted
      u = u - (disparity_image.image.width/2  - camera_info.P[0*4+2]);
      v = v - (disparity_image.image.height/2 - camera_info.P[1*4+2]);
  }

  float fx = camera_info.P[0*4+0];
  float fy = camera_info.P[1*4+1];
  float cx = camera_info.P[0*4+2];
  float cy = camera_info.P[1*4+2];
  float Tx = camera_info.P[0*4+3];
  float Ty = camera_info.P[1*4+3];

  x = ( (u - cx - Tx) / fx );
  y = ( (v - cy - Ty) / fy );
  z = ( 1.0 );
  float norm = sqrt(x*x + y*y + 1);
  float depth = disparity_image.f * disparity_image.T / disparity;
  x = ( depth * x / norm );
  y = ( depth * y / norm );
  z = ( depth * z / norm );
}
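
One note on the math above: depth = f * T / disparity is the standard
disparity-to-depth relation (f is the focal length and T the baseline,
both from the DisparityImage message), but since the code first
normalizes the (x, y, 1) ray, that value ends up as the distance along
the viewing ray rather than as the z coordinate. If you would rather
get z = f*T/d directly, here is a minimal sketch of the un-normalized
variant; the function name is mine, and it assumes the CameraInfo is
for the left camera (so the Tx/Ty terms vanish) and skips the shift
correction:

inline void projectTo3dStraightZ(float u, float v, float disparity,
                                 const stereo_msgs::DisparityImage &disparity_image,
                                 const sensor_msgs::CameraInfo &camera_info,
                                 float &x, float &y, float &z)
{
  float fx = camera_info.P[0*4+0];
  float fy = camera_info.P[1*4+1];
  float cx = camera_info.P[0*4+2];
  float cy = camera_info.P[1*4+2];

  //standard pinhole stereo: z directly from disparity, then back-project
  z = disparity_image.f * disparity_image.T / disparity;
  x = (u - cx) * z / fx;
  y = (v - cy) * z / fy;
}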

//! Check if a disparity image has a valid value at the given image coordinates
inline bool hasDisparityValue(const stereo_msgs::DisparityImage &disparity_image,
                              unsigned int h, unsigned int w)
{
  ROS_ASSERT(h<disparity_image.image.height && w<disparity_image.image.width);
  float val;
  memcpy(&val, &(disparity_image.image.data.at( h*disparity_image.image.step +
                                                sizeof(float)*w )), sizeof(float));
  if (std::isnan(val) || std::isinf(val)) return false;
  if (val < disparity_image.min_disparity ||
      val > disparity_image.max_disparity) return false;
  return true;
}
//! Get the value from an image at given image coordinates as a float
inline void getPoint(const sensor_msgs::Image &image, unsigned int h,
                     unsigned int w, float &val)
{
  ROS_ASSERT(h<image.height && w<image.width);
  memcpy(&val, &(image.data.at( h*image.step + sizeof(float)*w )),
         sizeof(float));
}
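
And since your original question was about individual feature points
rather than a full cloud: for that case you could also let
image_geometry do the same math for you. A minimal sketch (featureTo3d
is just a name I made up; it assumes (u, v) is the feature location in
the rectified left image and that you already have its disparity from
matching):

#include <image_geometry/stereo_camera_model.h>
#include <sensor_msgs/CameraInfo.h>
#include <opencv2/core/core.hpp>

cv::Point3d featureTo3d(const sensor_msgs::CameraInfo &left_info,
                        const sensor_msgs::CameraInfo &right_info,
                        double u, double v, float disparity)
{
  //build the stereo model from the calibration of both cameras
  image_geometry::StereoCameraModel model;
  model.fromCameraInfo(left_info, right_info);

  //project the rectified left-image point and its disparity to 3D
  cv::Point3d xyz;
  model.projectDisparityTo3d(cv::Point2d(u, v), disparity, xyz);
  return xyz;
}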



On Sat, Feb 5, 2011 at 11:47 AM, Dejan Pangercic
<dejan.pangercic at gmail.com> wrote:
> Hi,
>> On 02/05/2011 11:09 AM, Dejan Pangercic wrote:
>>> Hi there, trivial question I guess, but does anyone have an example ROS code that projects a set of 2D points from a
>>> stereo camera (e.g. PR2's narrow stereo) to a 3D point cloud? Cheers, d.
>>
>> The question is a bit ill-posed or I just didn't understand it. :) Do you want to obtain the color of a pixel (2D) and
>> attach it to a 3D point? What is the "2D point from a stereo camera"? Is the target 3D cloud generated by the same
>> sensor, or a different sensor? Is calibration available?
> Yeah, I agree, I wrote it in too much of a hurry.
> I basically detect a set of features in both the left and right cameras of
> PR2's narrow stereo rig (so yes, calibration is available) and would
> now like to get a 3D position for each of these features.
>
> cheers, D.
>
>
>>
>>
>> Cheers,
>> Radu.
>> --
>> http://pointclouds.org
>
>
>
> --
> MSc. Dejan Pangercic
> PhD Student/Researcher
> Intelligent Autonomous Systems Group
> Technische Universität München
> Telephone: +49 (89) 289-26908
> E-Mail: dejan.pangercic at cs.tum.edu
> WWW: http://ias.cs.tum.edu/people/pangercic



-- 
Matei Ciocarlie
Research Scientist
Willow Garage Inc.
650-475-9780


