NEW ACTIVE VOLUMETRIC RECONSTRUCTION CODE RELEASED
We are pleased to announce the release of a new open-source library for
performing active object reconstruction using a camera-based depth
sensor on a mobile robot. The rpg_ig_active_reconstruction framework is
available on GitHub:
github.com/uzh-rpg/rpg_ig_active_reconstruction
The key features are:
- A general reconstruction framework that is object-, sensor-, and
robot-agnostic.
- Real-time reconstruction of unknown objects that are spatially
bounded: the user only needs to know the size of the bounding volume,
and our algorithm will efficiently reconstruct its structure volumetrically.
- A modular library that can be adapted to the kinematics of arbitrary
mobile robots (ground, flying, mobile manipulators, etc.) and can receive
3D input from any dense depth sensor (e.g., stereo cameras, RGB-D sensors,
or monocular dense depth estimation such as REMODE).
- Multiple information gain formulations are implemented to guide the
choice of next best view.
- The volumetric representation allows the information gain of candidate
views to be estimated, accounting for both voxel occupancy and occlusion.
- Includes ROS bindings and a simulated example in Gazebo.
- Video showing the system in action:
https://youtu.be/ZcJcsoGGqbA
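To illustrate the occlusion-aware information-gain idea behind next-best-view
selection, here is a minimal, hypothetical Python sketch. It is NOT the
rpg_ig_active_reconstruction API (the library is C++/ROS, and its actual gain
formulations are described in the paper below); all names, the occupancy
probabilities, and the grid/ray representation are illustrative assumptions.
The core idea: cast rays from each candidate view through a voxel occupancy
grid, accumulate the entropy of the voxels a ray would observe, stop a ray at
the first occupied voxel (occlusion), and pick the view with the highest total
expected gain.

```python
# Hypothetical sketch, not the rpg_ig_active_reconstruction API:
# occlusion-aware information gain over a voxel occupancy grid.
import math

P_UNKNOWN = 0.5  # assumed prior occupancy probability of unobserved voxels

def entropy(p):
    """Shannon entropy (bits) of a voxel's occupancy probability."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1.0 - p) * math.log2(1.0 - p)

def ray_gain(grid, ray):
    """Sum the entropy of voxels along a ray, stopping at the first
    voxel believed occupied (this is the occlusion handling)."""
    gain = 0.0
    for voxel in ray:
        p = grid.get(voxel, P_UNKNOWN)  # unobserved voxels are unknown
        if p > 0.5 + 1e-6:              # likely occupied: ray is blocked
            break
        gain += entropy(p)
    return gain

def best_view(grid, candidate_views):
    """Pick the candidate view (a list of rays, each a list of voxel
    indices) whose rays would observe the most uncertain volume."""
    return max(candidate_views,
               key=lambda rays: sum(ray_gain(grid, r) for r in rays))
```

For example, with `grid = {(0,0,0): 0.05, (1,0,0): 0.95, (2,0,0): 0.5}`, a ray
`[(0,0,0), (1,0,0), (2,0,0)]` accumulates only the entropy of the first, nearly
free voxel before being blocked by the occupied one, so a view whose rays reach
unknown (p = 0.5) voxels scores higher. The actual library implements several
such gain formulations and lets you choose among them.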
Technical details of the algorithm and information gain formulations can
be found in our ICRA paper:
http://rpg.ifi.uzh.ch/docs/ICRA16_Isler.pdf
Stefan Isler, Reza Sabzevari, Jeffrey Delmerico, Davide Scaramuzza
University of Zurich, Robotics and Perception Group
_______________________________________________
ros-users mailing list
ros-users@lists.ros.org
http://lists.ros.org/mailman/listinfo/ros-users