Ivan,

> I am a researcher at CCNY, working on implementing a navigation system on a
> quadrotor UAV using ROS. I am interested in using combined measurements from
> an IMU and visual tracking data to estimate the 3D pose of the quadrotor. I
> have looked into the robot_pose_ekf package, and it seems to be designed
> with a similar purpose in mind. However, after briefly playing with it, it
> seems that it has been built specifically for a wheeled robot, and will not
> work unless we explicitly provide wheel odometry information. Is that really
> the case, or am I missing something?

The robot pose ekf is indeed the tool we use to estimate the 3D pose of our
robot. It can process three types of input: (i) a 2D pose (x, y, yaw),
(ii) a 3D rotation (roll, pitch, yaw), and (iii) a 3D pose
(x, y, z, roll, pitch, yaw). We use the first two inputs all the time, so the
robot pose ekf is well tested for them. With the 6D pose we have only done a
few experiments, using visual odometry.

The robot pose ekf should work with any combination of these three inputs,
but as you correctly point out, the filter currently only initializes once it
has received at least a few 2D poses. This is obviously not the desired
behavior, so I opened a ticket. I should be able to get a fix for this
problem into Boxturtle, as well as into the next release. In the meantime, as
a workaround, you could send a number of 2D poses to the robot pose ekf to
force it to initialize. After it is initialized, all inputs should be
enabled.

Wim

--
Wim Meeussen
Willow Garage Inc.
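As a rough sketch of what those dummy 2D poses could look like: the messages the filter consumes carry a 6x6 row-major pose covariance in (x, y, z, rot_x, rot_y, rot_z) order, and the usual trick is to give the unmeasured dimensions a very large variance so the dummy poses barely constrain the estimate. The helper below is my own illustration (the function name and the specific variance values are assumptions, not part of the package):

```python
# Hypothetical helper for the initialization workaround: build the
# 36-element row-major 6x6 covariance for a dummy 2D pose (x, y, yaw).
# Dimensions the 2D pose does not measure (z, roll, pitch) get a huge
# variance, which effectively tells the filter "no information here".

BIG = 1e6  # assumed placeholder for "unmeasured"; pick what suits your setup

def dummy_2d_pose_covariance(xy_var=1.0, yaw_var=1.0, unused=BIG):
    # Diagonal order follows the ROS pose covariance convention:
    # (x, y, z, rotation about X, rotation about Y, rotation about Z)
    variances = [xy_var, xy_var, unused, unused, unused, yaw_var]
    cov = [0.0] * 36
    for i, v in enumerate(variances):
        cov[6 * i + i] = v  # fill the diagonal, leave cross-terms at zero
    return cov

cov = dummy_2d_pose_covariance()
```

You would copy this list into the pose covariance field of each 2D pose message you publish until the filter comes up, after which your real IMU and visual inputs take over.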