[ros-users] Navigation Stack without a Laser Sensor

David Feil-Seifer dfseifer at usc.edu
Tue Sep 27 13:08:28 UTC 2011


I have a stack, OIT, in the usc-ros-pkgs repository which does
exactly this. By detecting a target with an overhead camera, and
using an intrinsic calibration of the camera, an extrinsic
calibration of the camera to the ground, and a human-generated line
map of the environment, we are able to do exactly what you are
describing.

Documentation is currently rough, however. I am defending my
dissertation on Oct 19th and then I'm moving, so I won't be able to
provide much direct support until after that.

Try checking out the OIT at:


There are some launch files in the oit_launch package, and the code
should all compile against diamondback (electric not yet tested, but
should work).

Good luck, and feel free to email me if you have problems, but please
understand if I don't respond right away.
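As an aside, the idea you mention of publishing a LaserScan from the
static map is workable: ray-cast the robot's pose (from your overhead
localization) against the occupancy grid and publish the resulting
ranges as a sensor_msgs/LaserScan. Here is a minimal, hypothetical
sketch of the ray-casting part in plain Python (the grid, pose, and
parameter values are toy assumptions, not from OIT); in ROS you would
copy the returned ranges into a LaserScan message each cycle:

```python
# Hypothetical sketch: simulate laser ranges by ray-casting a pose
# against a static occupancy grid. In a ROS node you would fill a
# sensor_msgs/LaserScan with these ranges and publish it; here we
# keep it library-free so the geometry is easy to follow.
import math

def simulate_scan(grid, resolution, pose,
                  n_beams=8, fov=2 * math.pi, max_range=5.0):
    """grid: 2D list of 0/1 cells (1 = occupied), row-major (y, x).
    resolution: metres per cell. pose: (x, y, theta) in metres/radians.
    Returns one simulated range per beam."""
    x0, y0, theta = pose
    ranges = []
    for i in range(n_beams):
        angle = theta - fov / 2 + i * fov / n_beams
        r = max_range
        d = 0.0
        step = resolution / 2.0  # march along the beam in half-cell steps
        while d < max_range:
            cx = math.floor((x0 + d * math.cos(angle)) / resolution)
            cy = math.floor((y0 + d * math.sin(angle)) / resolution)
            if cy < 0 or cy >= len(grid) or cx < 0 or cx >= len(grid[0]):
                break  # beam left the map; report max_range
            if grid[cy][cx]:
                r = d  # hit an occupied cell
                break
            d += step
        ranges.append(r)
    return ranges

# Toy 5x5 map (0.5 m x 0.5 m at 0.1 m/cell) with a wall in the
# rightmost column; robot near the lower-left corner facing +x.
grid = [[1 if x == 4 else 0 for x in range(5)] for y in range(5)]
ranges = simulate_scan(grid, resolution=0.1, pose=(0.25, 0.25, 0.0))
```

Whether the local planner behaves well with a scan that never sees
dynamic obstacles is a separate question, but this at least gives the
costmaps something to consume.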


On Mon, Sep 26, 2011 at 11:21 PM, Piyush Khandelwal
<piyushk at cs.utexas.edu> wrote:
> Hello,
> I have the following setup:
>  1) A differential drive robot
>  2) Overhead localization from an external system.
>  3) A static occupancy grid of the world (or some other
> representation for the location of walls and obstacles) - any dynamic
> obstacles are currently ignored.
> I would like to use the ROS Navigation Stack for autonomously
> navigating the robot around my lab, but the lack of a laser scanner
> currently prohibits me from doing so. I can replace amcl with the
> information provided by the overhead localization fairly simply, but
> nav_core is highly dependent on the laser scan. Are there any options?
> I can potentially publish the LaserScan message from the static map,
> but I am not sure if this is the correct approach.
> Thanks!
> Piyush
> _______________________________________________
> ros-users mailing list
> ros-users at code.ros.org
> https://code.ros.org/mailman/listinfo/ros-users
