[ros-users] Using Bumblebee Xb3 with image_pipeline

Kurt Konolige konolige at willowgarage.com
Wed Sep 22 02:10:27 UTC 2010


Unfortunately, the distortion model for the Bumblebee is proprietary, 
and uses a very large data structure (so it is presumably doing a 
per-pixel, local lookup rather than applying a parametric model).  You 
would need to write a custom image pipeline node that performs 
rectification; the BB distortion data structure can't be put into a 
CameraInfo message.
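The core of such a custom node is the remap step itself. As a minimal sketch (not Point Grey's actual algorithm), here is the per-pixel lookup that cv::remap performs, written in pure Python; a real node would instead call cv2.remap with the vendor-generated maps on each incoming image and publish the result:

```python
# Hypothetical illustration of a custom rectification node's core
# operation: cv::remap with precomputed maps. A real ROS node would
# use cv2.remap for speed; this shows the contract of the operation.

def remap_bilinear(image, map_x, map_y):
    """For each output pixel (r, c), sample the input image at the
    floating-point location (map_y[r][c], map_x[r][c]) with bilinear
    interpolation -- the contract of cv::remap."""
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            x, y = map_x[r][c], map_y[r][c]
            x0, y0 = int(x), int(y)
            # clamp the far corners at the image border
            x1, y1 = min(x0 + 1, cols - 1), min(y0 + 1, rows - 1)
            fx, fy = x - x0, y - y0
            top = image[y0][x0] * (1 - fx) + image[y0][x1] * fx
            bot = image[y1][x0] * (1 - fx) + image[y1][x1] * fx
            out[r][c] = top * (1 - fy) + bot * fy
    return out
```

With identity maps (map_x[r][c] = c, map_y[r][c] = r) the output equals the input, which is a convenient sanity check for custom maps.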

Cheers --Kurt

On 9/21/2010 4:37 PM, Patrick Mihelich wrote:
> Hi Paul,
>
> I'm opening this topic to the list as I know there are some other
> Bumblebee users out there. My comments are inline below.
>
> By the way, there is another community-contributed Bumblebee driver that
> may be useful to you, see the bumblebee2
> <http://www.ros.org/wiki/bumblebee2> package.
>
> On Mon, Sep 20, 2010 at 5:57 PM, Paul Furgale <paul.furgale at gmail.com
> <mailto:paul.furgale at gmail.com>> wrote:
>
>     I'm currently writing a ROS interface for my Point Grey Research
>     Bumblebee Xb3 camera. The cameras come pre-calibrated and in my
>     experience the factory calibration works well. Unfortunately, they
>     don't use the 5 parameter distortion model currently supported in
>     the image_pipeline. I can use the Point Grey libraries to read their
>     calibration file and generate maps suitable for the OpenCV function
>     cv::remap(). Skimming the code of image_pipeline suggests that you
>     have something very similar implemented.
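For reference, the 5-parameter ("plumb bob") model that image_pipeline does support, with CameraInfo.D = [k1, k2, p1, p2, k3], can be sketched as below; generating cv::remap-style maps amounts to evaluating this (together with the camera and projection matrices) once per output pixel. This is the standard OpenCV/ROS formulation, not Point Grey code:

```python
# The 5-parameter plumb-bob distortion model used by image_pipeline:
# radial terms k1, k2, k3 and tangential terms p1, p2. Maps an ideal
# normalized image point (x, y) to its distorted location.

def distort_plumb_bob(x, y, k1, k2, p1, p2, k3):
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    y_d = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return x_d, y_d
```

With all five coefficients zero the mapping is the identity, which is why a parametric fit of this form can often closely approximate a vendor's lookup-table calibration.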
>
>
> Out of curiosity, do you know what distortion model they use? I don't
> have experience with the Bumblebee cameras, but I've browsed Point
> Grey's software docs and it looks like their SDKs perform exactly the
> same operations as the image_pipeline. It would not surprise me at all
> if their distortion parameters could be encoded into CameraInfo as-is
> (or at least closely approximated), if only their APIs exposed them.
>
>     Rather than go my own way and produce a parallel implementation of
>     PinholeCamera and the stereo processing already available in ROS, I
>     would be interested in helping shoehorn in support for custom
>     rectification/dewarping maps. The simplest solution I can think of
>     is to add the ability to set the undistort maps in the pinhole
>     camera model, and add some fields to the CameraInfo message.
>     Granted, a custom map would necessarily be large, but the CameraInfo
>     message is meant to be transmitted once during initialization (as
>     far as I can tell).
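A rough size estimate (resolution and frame rate assumed here for illustration, not taken from the thread) shows how large such maps get:

```python
# Back-of-the-envelope size of a full remap table for one camera.
# The 1280x960 resolution and 15 fps figures are illustrative
# assumptions, not Xb3 specifications.

width, height = 1280, 960    # assumed sensor resolution
bytes_per_float = 4          # float32 map entries
maps_per_image = 2           # one map for x, one for y coordinates
map_bytes = width * height * bytes_per_float * maps_per_image
print(map_bytes / 1e6)       # ~9.8 MB for a single camera's maps
# If such maps rode along in every per-frame CameraInfo at 15 fps:
print(map_bytes * 15 / 1e6)  # ~147 MB/s of calibration data alone
```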
>
>
> Actually, every Image message is paired with a CameraInfo message having
> the same timestamp. So unfortunately putting custom maps in CameraInfo
> would be very expensive in bandwidth. image_geometry::PinholeCameraModel
> is optimized to only rebuild the undistort maps when the parameters
> change (e.g. in self-calibrating systems).
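The caching behavior described here can be sketched as follows (the class and method names are invented for illustration and are not image_geometry's actual API):

```python
# Sketch of the optimization in image_geometry::PinholeCameraModel:
# rebuild the expensive undistort maps only when the incoming camera
# parameters actually change between frames.

class CachedRectifier:
    def __init__(self, build_maps):
        self._build_maps = build_maps   # expensive map construction
        self._params = None
        self._maps = None
        self.rebuilds = 0               # exposed for the demo

    def maps_for(self, params):
        # `params` stands in for the CameraInfo contents (K, D, R, P).
        if params != self._params:
            self._maps = self._build_maps(params)
            self._params = params
            self.rebuilds += 1
        return self._maps
```

Since CameraInfo normally repeats identical values every frame, the maps are built once at startup; only a self-calibrating system that mutates the parameters pays the rebuild cost again.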
>
> Currently the easiest way to integrate is to just bite the bullet and
> use camera_calibration to get a camera model that the image_pipeline
> understands.
>
> Another way is to replicate the topics produced by stereo_image_proc in
> your driver node. You'd use PtGrey libs for the rectification
> (image_rect and image_rect_color topics). stereo_image_proc has some
> undocumented but fairly stable library methods
> <http://www.ros.org/doc/api/stereo_image_proc/html/classstereo__image__proc_1_1StereoProcessor.html>
> (processDisparity, processPoints2) you can use to produce the stereo
> outputs (disparity image and point cloud). The community-contributed
> videre_stereo_cam <http://www.ros.org/wiki/videre_stereo_cam> package
> has a driver for Videre stereo cameras that I believe follows this model.
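The geometry behind a processDisparity/processPoints2-style pipeline reduces to the standard rectified-stereo reprojection, sketched below; the focal length and baseline in the test values are illustrative, not Xb3 factory calibration:

```python
# Standard rectified-stereo reprojection: recover a 3D point from a
# rectified pixel (u, v) and its disparity d, given focal length fx,
# principal point (cx, cy), and stereo baseline (all pinhole model).

def disparity_to_point(u, v, d, fx, cx, cy, baseline):
    z = fx * baseline / d          # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fx          # assumes square pixels (fy == fx)
    return x, y, z
```

A driver replicating stereo_image_proc's topics would apply this to every valid disparity pixel to fill the point cloud output.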
>
> For Diamondback I'm planning to refactor the image_pipeline into
> nodelets. That might be interesting to your use case, as nodelets will
> give you much more flexibility in mixing and matching the image_pipeline
> operations. You'd just write your own nodelet for rectification and swap
> it in for the default one using PinholeCameraModel.
>
> The last couple options have the drawback that CameraInfo and
> PinholeCameraModel still don't understand the distortion model. For many
> (admittedly not all) vision processing nodes this will be OK, as they
> operate on rectified images and ignore distortion.
>
>     Do you have any interest in this? Is this the right venue to discuss
>     this or should I post these suggestions somewhere else?
>
>
> Yes, and the mailing list is the best place for design discussions like
> this.
>
> Cheers,
> Patrick
>
>     Thanks!
>     --
>     Paul Furgale
>     PhD Candidate
>     University of Toronto
>     Institute for Aerospace Studies
>     Autonomous Space Robotics Lab
>     ph: 647-834-2849
>     skype: paul.furgale
>     Videos of robots doing things: http://asrl.utias.utoronto.ca/~ptf/
>
>
>
>
> _______________________________________________
> ros-users mailing list
> ros-users at code.ros.org
> https://code.ros.org/mailman/listinfo/ros-users


