[ros-users] point cloud questions
michael at styer.net
Fri Jul 30 00:55:30 UTC 2010
Thanks a lot for the detailed response -- it was extraordinarily
helpful. I have a couple of minor follow-up questions/comments, but
you've basically answered all the questions I had (for today :) ).
On Thu, 2010-07-29 at 16:26 -0700, Radu Bogdan Rusu wrote:
> > * a pcl::PointCloud object can be obtained from a PointCloud2 message
> > using the point_cloud::fromMsg() function;
> this got changed in 0.2.x (small rename), but yes, the principle is the same. sensor_msgs::PointCloud2 ->
> pcl::fromROSMsg () -> pcl::PointCloud<T>
Got it. I see the new function now; I found the old one in the
point_cloud namespace and didn't look in pcl for another. I guess the
point_cloud::fromMsg() function will be removed at some point?
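For anyone else following along: what the conversion does, conceptually, is reinterpret the message's flat byte buffer according to the per-field offsets. A minimal self-contained sketch of that idea for an XYZ layout (CloudMsg and fromMsg are hypothetical stand-ins here, not the real sensor_msgs::PointCloud2 or pcl::fromROSMsg):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical stand-in for sensor_msgs::PointCloud2: a packed byte
// buffer plus the stride (point_step) telling where each point starts.
struct CloudMsg {
    uint32_t width = 0, height = 0;
    uint32_t point_step = 0;       // bytes per point in `data`
    std::vector<uint8_t> data;     // packed x,y,z floats (plus any padding)
};

struct PointXYZ { float x, y, z; };

// Sketch of the deserialization step: copy each point's fields
// out of the flat byte buffer into a typed vector.
std::vector<PointXYZ> fromMsg(const CloudMsg& msg) {
    std::vector<PointXYZ> out(msg.width * msg.height);
    for (std::size_t i = 0; i < out.size(); ++i)
        std::memcpy(&out[i], &msg.data[i * msg.point_step], sizeof(PointXYZ));
    return out;
}
```

The real pcl::fromROSMsg additionally consults the fields array for offsets and datatypes rather than assuming a fixed layout.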
> > 2) What's the function of the height and width fields in the
> > pcl::PointCloud<T> class? In particular, in my node I want to create an
> > output point cloud which is a subset of the points in the input point
> > cloud; given the subset, what should the height and width of the output
> > point cloud be? Do the height and width of the input point cloud refer
> > to the geometric area enclosing the points, or to the size of the point
> > cloud vector/matrix?
> Good question. We should document this a little bit better (though it is documented in the format message itself). The
> new sensor_msgs::PointCloud2 can represent organized datasets, while the older format (sensor_msgs::PointCloud)
> couldn't. This means that now we can represent stereo 3D point cloud data, (or TOF, or even laser, etc), by specifying a
> width and a height which corresponds to an image. What does this mean?
> - Say you have an RGB image organized as width x height, given by some sensor (i.e., stereo camera)
> - Say you have a point cloud acquired using the same stereo camera, organized as width x height
> - Now you have a perfect 1-1 correspondence between the RGB texture at each pixel and each point in the cloud! Hooray!
OK -- I see the rationale for it now. So if I filter a point cloud,
removing some subset of the points, I've broken the organizational
structure and I should probably set height=1 and width=points.size(),
regardless of what the original width and height were.
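For the archives, here's that bookkeeping as a minimal sketch, using plain C++ with stand-in types (Cloud and filterByZ are illustrative, not the actual PCL classes):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

struct PointXYZ { float x, y, z; };

// Stand-in for pcl::PointCloud<PointXYZ>: the flat points vector
// plus the width/height bookkeeping discussed above.
struct Cloud {
    uint32_t width = 0, height = 0;
    std::vector<PointXYZ> points;
};

// Keep only points with z below a cutoff. The output is no longer
// organized, so width becomes the point count and height becomes 1.
Cloud filterByZ(const Cloud& in, float max_z) {
    Cloud out;
    for (const PointXYZ& p : in.points)
        if (p.z < max_z)
            out.points.push_back(p);
    out.width  = static_cast<uint32_t>(out.points.size());
    out.height = 1;  // unorganized
    return out;
}
```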
It doesn't look like there's a structured accessor for organized point
clouds, i.e. a way to index into the points array as a 2-d matrix rather
than a vector -- is that right, or have I just missed it?
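(In case it helps anyone else: even without a dedicated accessor, the 2-D lookup is just row-major arithmetic over the flat points vector. A sketch, not the PCL API:)

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Row-major lookup into an organized cloud's flat points array:
// the point at image column u, row v lives at index v * width + u.
template <typename PointT>
const PointT& at(const std::vector<PointT>& points,
                 std::size_t width, std::size_t u, std::size_t v) {
    return points[v * width + u];
}
```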
> (between you and me, we're working on a 2D to 3D feature estimation architecture as we speak - one that would allow you
> to estimate any 2D feature and attach it to a 3D point in a PointCloud2).
Sounds cool, look forward to seeing it.
> > 3) For a given node that publishes a sensor_msgs::PointCloud2 topic, is
> > there any way for another node to determine the type of the enclosed
> > points? Or is that information only visible in the source?
> It is. That's the reason why we have "fields" inside the PointCloud2 topic. Take a look at sensor_msgs::PointField.
Aha. I see what you mean. So that at least tells you what the datatype
and name of each field is. What I was really wondering, though, was
whether there might be a way to figure out, for a given PointCloud2
topic, what the class (such as PointXYZ) of the contained points is,
e.g. through rostopic info or something similar. That would be
convenient if someone wanted to write a node to subscribe to a topic
published by a node from some other package. It doesn't seem like that
exists now; maybe something for the roadmap.
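Until then, the fields array is enough to make a reasonable guess by matching the advertised field names against known point layouts. A self-contained sketch of that idea (Field and guessPointType are hypothetical; only the PointXYZ/PointXYZRGB names come from PCL):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical stand-in for sensor_msgs::PointField; only the
// name matters for this sketch (the real message also carries
// offset, datatype, and count).
struct Field { std::string name; };

// Guess a point type from the field names a PointCloud2 topic
// advertises -- the kind of check a rostopic-like tool could do.
std::string guessPointType(const std::vector<Field>& fields) {
    bool x = false, y = false, z = false, rgb = false;
    for (const Field& f : fields) {
        if (f.name == "x") x = true;
        else if (f.name == "y") y = true;
        else if (f.name == "z") z = true;
        else if (f.name == "rgb") rgb = true;
    }
    if (x && y && z && rgb) return "PointXYZRGB";
    if (x && y && z) return "PointXYZ";
    return "unknown";
}
```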
In any case, thanks very much again for your help. PCL is significantly
clearer to me now than it was this morning.