Greetings, I am trying to build a map of a fairly large (tens of meters) indoor area using slam_gmapping. My robot uses the StarGazer localization system from Hagisonic, which provides an absolute pose with small error bounds. The robot is also equipped with a Hokuyo URG-04LX-UG01 ladar, which has a range of only 4 meters and sizeable error on its range readings. This situation is the opposite of the assumption made by gmapping, which expects large odometry error and small laser error. I have tried tweaking various parameters, but every map I've collected looks terrible. The trouble seems to be that there are parts of the room from which no wall is visible at such short range. The laser does pick up many furniture legs, but scan-matching these between iterations introduces a lot of rotational error. I previously mapped the space just fine using a SICK and the StarGazer (with minimal tweaking of parameters).

What I need to know: In principle, which parameters should I be looking at? Some of them are not well documented on the ROS wiki (http://www.ros.org/wiki/gmapping). The site that page points to (http://openslam.org/gmapping.html) is at this point largely useless, as its links are either broken or point back to the same page. So the only decent documentation left is the academic paper, which is a shame, because many people would be unwilling to digest it (nor should they have to become SLAM experts just to build a decent map).

What I have tried: I set the parameters srr, srt, stt, and str to zero or near-zero, to reflect the fact that the small odometry error is not cumulative. I would like the SLAM software to rely strongly on the localization data and only very weakly on the ladar scan-matching results, so I also increased the parameters sigma and lsigma (which I presume to be standard deviations related to scan-matching). These adjustments actually made the map much worse: it rotated a full 180 degrees partway through, so I saw the same features at both ends of the mapped room.

My transforms are as follows:

base_laser->base_link (static)
base_link->odom (provided by a Kalman filter that fuses StarGazer and odometry data)

I performed the following experiment to verify the quality of the StarGazer-fused odometry. If I broadcast a static null transform map->odom, I can watch in rviz as the robot smoothly drives across the screen. Turning on a long Decay Time, I see that the laser readings are as consistent while driving as they are while standing still. If, on the other hand, I run slam_gmapping and let it broadcast the map->odom transform, then driving the robot causes it to jump around quite a bit in rviz. So I'm fairly sure that gmapping is relying too heavily on scan-matching instead of localization.

At a high level, this seems like a silly problem, because what I'm trying to do is much easier than the hard problem slam_gmapping is solving. Yet it has caused me a lot of trouble. Is there a better tool for the job?

Any pointers and tips would be appreciated!

-ross
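
P.S. For concreteness, here is a minimal sketch of the gmapping launch configuration described under "What I have tried". The parameter names are real gmapping parameters, but the specific values are illustrative guesses, not tuned recommendations (I've also shown maxUrange, which seems relevant given the 4 m range, though it isn't one of the parameters discussed above):

    <launch>
      <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping">
        <!-- Motion-model noise set near zero: the fused StarGazer
             odometry error is small and non-cumulative -->
        <param name="srr" value="0.0"/>  <!-- translation error per unit of translation -->
        <param name="srt" value="0.0"/>  <!-- translation error per unit of rotation -->
        <param name="str" value="0.0"/>  <!-- rotation error per unit of translation -->
        <param name="stt" value="0.0"/>  <!-- rotation error per unit of rotation -->

        <!-- Raised above their defaults (0.05 and 0.075) to de-weight
             scan matching, on my reading of them as scan-matching
             standard deviations -->
        <param name="sigma" value="0.1"/>
        <param name="lsigma" value="0.15"/>

        <!-- The URG-04LX-UG01 only sees out to about 4 m -->
        <param name="maxUrange" value="3.9"/>
      </node>
    </launch>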
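
And the null map->odom broadcast from my verification experiment, written as a launch-file node (same frame names as in my transform list; the final argument is the publish period in milliseconds):

    <!-- Identity map->odom transform for the rviz consistency check -->
    <node pkg="tf" type="static_transform_publisher" name="map_to_odom"
          args="0 0 0 0 0 0 map odom 100"/>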