[ros-users] Single Laser dependency

Ivan Dryanovski ivan.dryanovski at gmail.com
Wed Oct 6 13:16:56 UTC 2010


Hi Prasat,

In addition to what Eric said:

> Please let me know what your experience says...
> I am working on an indoor robot application in which I am using only one
> Hokuyo URG-04LX-UG01 sensor. I would like to know: is it effective to use
> only one sensor? Can I achieve the localization, planning, and navigation
> cycle using only a single sensor? How effective would the ROS algorithms be
> in such a case? Do I need any additional supporting sensors such as an IMU,
> external transceivers, or vision sensors?
>
> I do have sonar sensors, but they are not involved anywhere in the
> navigation process. They are kept only as a precaution, in case an obstacle
> sits at a height above the laser's scan plane.

We have used the URG-04LX-UG01 for indoor localization and mapping
with both ground and aerial robots. It will give you good performance
when mapping rooms or short hallways. However, if your robot needs to
drive down long corridors or map larger rooms, the short range of the
laser (about 5.6 m) will really hurt you, and your maps and
localization will most likely not be accurate.

We saw a big improvement when we replaced the URG-04LX-UG01 with the
UTM-30LX, which has a 30 m range.

I would recommend you equip your robot with a gyro, wheel encoders,
and a laser to start with. Together, the gyro and the wheel encoders
will provide a good odometry estimate of your robot's motion, which
can then be refined further by scan matching on the laser data.
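As a rough illustration (this is just a sketch, not code from any
particular driver), here is how encoder and gyro readings could be turned
into a nav_msgs/Odometry message with rospy. The topic names
('encoder_speed', 'gyro_yaw_rate') and the assumption that both arrive as
std_msgs/Float64 (m/s and rad/s) are made up for the example; adapt them
to whatever your hardware publishes.

#!/usr/bin/env python
# Sketch: dead-reckoned odometry from wheel-encoder speed + gyro yaw rate.
# Topic names and message types are assumptions for the example.
import math
import rospy
import tf
from nav_msgs.msg import Odometry
from std_msgs.msg import Float64

class EncoderGyroOdom(object):
    def __init__(self):
        self.x = self.y = self.yaw = 0.0
        self.v = 0.0         # forward speed from the encoders (m/s)
        self.yaw_rate = 0.0  # yaw rate from the gyro (rad/s)
        self.last = rospy.Time.now()
        self.pub = rospy.Publisher('odom', Odometry)
        rospy.Subscriber('encoder_speed', Float64, self.on_speed)
        rospy.Subscriber('gyro_yaw_rate', Float64, self.on_gyro)

    def on_speed(self, msg):
        self.v = msg.data

    def on_gyro(self, msg):
        self.yaw_rate = msg.data

    def update(self, event):
        now = rospy.Time.now()
        dt = (now - self.last).to_sec()
        self.last = now
        # Gyro integrates the heading, encoders integrate distance along it.
        self.yaw += self.yaw_rate * dt
        self.x += self.v * math.cos(self.yaw) * dt
        self.y += self.v * math.sin(self.yaw) * dt

        odom = Odometry()
        odom.header.stamp = now
        odom.header.frame_id = 'odom'
        odom.child_frame_id = 'base_link'
        odom.pose.pose.position.x = self.x
        odom.pose.pose.position.y = self.y
        q = tf.transformations.quaternion_from_euler(0.0, 0.0, self.yaw)
        odom.pose.pose.orientation.x = q[0]
        odom.pose.pose.orientation.y = q[1]
        odom.pose.pose.orientation.z = q[2]
        odom.pose.pose.orientation.w = q[3]
        odom.twist.twist.linear.x = self.v
        odom.twist.twist.angular.z = self.yaw_rate
        self.pub.publish(odom)

if __name__ == '__main__':
    rospy.init_node('encoder_gyro_odom')
    node = EncoderGyroOdom()
    rospy.Timer(rospy.Duration(0.02), node.update)  # publish at 50 Hz
    rospy.spin()

You can then feed this odometry and the laser scans into the navigation
stack, or refine the estimate further with scan matching.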

If you choose not to have encoders and a gyro, you can take a look at
the packages in the scan_tools stack (especially the canonical and
polar scan matchers). We have achieved good mapping results using this
stack on robots that have no odometry information at all.
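If it helps, here is a rough sketch of a node that just listens to the 2D
pose estimate such a scan matcher publishes. The topic name ('pose2D') and
message type (geometry_msgs/Pose2D) are my assumptions for the example;
check the scan_tools documentation for the actual interface of each
matcher.

#!/usr/bin/env python
# Sketch: print the 2D pose estimate coming from a laser scan matcher.
# Topic name and message type are assumptions; see the scan_tools docs.
import rospy
from geometry_msgs.msg import Pose2D

def on_pose(msg):
    rospy.loginfo('scan-matched pose: x=%.2f y=%.2f theta=%.2f',
                  msg.x, msg.y, msg.theta)

if __name__ == '__main__':
    rospy.init_node('scan_match_listener')
    rospy.Subscriber('pose2D', Pose2D, on_pose)
    rospy.spin()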

> 2. Is it possible to change the range of the laser scanner dynamically? Is
> there any parameter in ROS to set that?
> For example: suppose my robot is running in a museum. The robot has a touch
> screen mounted on top (like the Texai monitor), which a visitor can use to
> get information. The laser would have its usual range for normal
> navigation, but if a user steps in front of it, it should reduce its range
> so as not to treat the user as an obstacle.
> Alternatively, I could have a camera detect faces and direct the laser to
> reduce its range.

I would recommend you treat people as regular obstacles, as you don't
want to bump into them. Keep in mind that as soon as the person moves
away and the laser re-observes the free space, the corresponding area
on the map will be cleared.

If I understand your use case correctly, a simple solution would be to
attach a camera with a face detector and simply stop the robot when it
detects people in close proximity. This lets people come up and
interact with the robot without having to chase it while it's trying
to navigate around them. Once it no longer detects any faces nearby,
the robot can resume its usual navigation tasks.
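Here is a rough sketch of that pause-and-resume behaviour (not code from
any existing package). It assumes some face detector publishes a
std_msgs/Bool on a topic called 'face_nearby'; when a face is close, the
node cancels the current move_base goal, and when the person leaves it
resends the last goal. The topic name and the goal handling are
illustrative only.

#!/usr/bin/env python
# Sketch: pause navigation while a face is detected nearby, resume after.
# 'face_nearby' (std_msgs/Bool) is an assumed output of some face detector.
import rospy
import actionlib
from std_msgs.msg import Bool
from move_base_msgs.msg import MoveBaseAction

class FacePauser(object):
    def __init__(self):
        self.client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
        self.client.wait_for_server()
        self.saved_goal = None   # last goal, resent when the person leaves
        self.paused = False
        rospy.Subscriber('face_nearby', Bool, self.on_face)

    def set_goal(self, goal):
        # Task-level code sends goals through here rather than to move_base
        # directly, so the last goal can be restored after a pause.
        self.saved_goal = goal
        if not self.paused:
            self.client.send_goal(goal)

    def on_face(self, msg):
        if msg.data and not self.paused:
            # Person nearby: stop driving and let them interact.
            self.paused = True
            self.client.cancel_all_goals()
        elif not msg.data and self.paused:
            # Person gone: resume the interrupted goal, if any.
            self.paused = False
            if self.saved_goal is not None:
                self.client.send_goal(self.saved_goal)

if __name__ == '__main__':
    rospy.init_node('face_pauser')
    node = FacePauser()
    rospy.spin()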

Hope this helps,

Ivan Dryanovski


