[ros-users] Design decisions behind libTF...?

Ivan Dryanovski ivan.dryanovski at gmail.com
Fri Sep 10 18:00:53 UTC 2010


Hi,

I'm sorry that I'm jumping from topic to topic. This post is about a
related issue we've had with the tf paradigm. From what I can tell, an
important feature of the tf tree structure is that it does not allow
over-constrained situations. This is enforced by not allowing two
parents to publish a transform to the same child
(http://www.ros.org/wiki/tf/FAQ#Frames_are_missing_in_my_tf_tree._Why.3F).
In my mind this restriction is too strong. For example, if we have
frames A, B, C, and D:

a) A -> B -> C -> D

and

b) A -> B <- C -> D

are two tf chains, neither of which expresses an over-constrained
situation. However, only the a) chain is allowed in the current tf
model. We have encountered two usage scenarios where model b) would
fit our application better, and I will outline them here.
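To make the restriction concrete, here is a minimal sketch (in Python, and not the actual tf source, just an illustration of the bookkeeping) of why chain b) is rejected: tf records a single parent per child frame, so B cannot be attached to both A and C.

```python
def build_tree(edges):
    """edges: list of (parent, child) pairs. Returns a child->parent map,
    raising ValueError if a child would end up with two different parents,
    which is the "single parent" restriction discussed above."""
    parent_of = {}
    for parent, child in edges:
        if child in parent_of and parent_of[child] != parent:
            raise ValueError(
                "frame %r already has parent %r; cannot also attach it to %r"
                % (child, parent_of[child], parent))
        parent_of[child] = parent
    return parent_of

# Chain a) A -> B -> C -> D is accepted:
build_tree([("A", "B"), ("B", "C"), ("C", "D")])

# Chain b) A -> B <- C -> D gives B two parents (A and C) and is rejected:
try:
    build_tree([("A", "B"), ("C", "B"), ("C", "D")])
except ValueError as e:
    print(e)
```

Note that chain b) is still not over-constrained: every frame's pose is uniquely determined, it just isn't a tree rooted the way tf expects.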

SCENARIO 1.

We have a UAV with a GPS sensor. A local pose estimation system tells
us the pose of the UAV (uav_base) in some local, fixed frame, for
example, the building (local_map). We also have a static tf telling us
the pose of the GPS sensor in the robot frame:

local_map -> uav_base -> GPS

Now the GPS sensor receives information about the position of the
robot in some global world frame (global_map):

global_map -> GPS.

How do we chain these together, so we find the pose of the building in
the world? Ideally, like this:

local_map -> uav_base -> GPS <- global_map

However, we cannot do that, so we have to invert the direction of the
"global_map -> GPS" tf:

local_map -> uav_base -> GPS -> global_map

Now suppose we have a ground robot, which is localized directly on the
global map. The resulting chain would then look like this:

local_map -> uav_base -> GPS -> global_map -> ground_robot_base

This looks ugly, since it doesn't follow an intuitive hierarchical
model - ideally we want the global map to be the root, and sensors to
be leaves in the tree. An alternative solution would be to take the
"global_map -> GPS" transform, manually compute the required
"global_map -> local_map" tf in our code, and publish this instead:

global_map -> local_map -> uav_base -> GPS

This looks better, but to achieve it we had to manually work our way
back up the tf chain, massaging our data into a form that tf accepts.
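For what it's worth, the manual computation above boils down to one composition and one inversion. A sketch with 2D homogeneous matrices and made-up numbers (the frame names follow the scenario; none of this is real tf API):

```python
import numpy as np

def make_tf(yaw, x, y):
    """2D homogeneous transform of a child frame expressed in its parent."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Known transforms (illustrative numbers only):
T_local_uav  = make_tf(0.3, 2.0, 1.0)     # local_map  -> uav_base
T_uav_gps    = make_tf(0.0, 0.1, 0.0)     # uav_base   -> GPS (static)
T_global_gps = make_tf(1.2, 50.0, 20.0)   # global_map -> GPS

# Pose of the GPS in the local frame, by chaining down the tree:
T_local_gps = T_local_uav @ T_uav_gps

# The transform we actually want to publish, global_map -> local_map:
T_global_local = T_global_gps @ np.linalg.inv(T_local_gps)
```

This is exactly the "work our way back up the chain" step: every frame between the sensor and the desired root has to be folded into the inversion by hand.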

SCENARIO 2:

We have a system which tracks markers and publishes their pose with
respect to the camera (http://www.ros.org/wiki/ar_pose). One usage is
to have the camera fixed in the room, and locate a moving marker:

local_map -> camera_frame -> marker_frame

Another usage would be to put the marker on a ground robot while the
camera is mounted on a UAV. The pose of the UAV is then determined
with respect to the ground robot:

local_map -> ground_robot_base -> marker_frame -> camera_frame -> uav_frame.

Notice that we had to invert two transforms to make the above tf chain
work: the "camera_frame -> marker_frame" tf, as well as the
"uav_frame -> camera_frame" tf. As in SCENARIO 1, we are forced to
invert transforms and end up with counter-intuitive tf chains. Not
only that, but our program has to be told in advance (via a
"reverse_tf" boolean parameter) which way to publish the camera-marker
transform.
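The inversion that a "reverse_tf" option performs is just the analytic inverse of a rigid transform. A sketch with illustrative numbers (plain numpy, not the tf API):

```python
import numpy as np

def invert_tf(T):
    """Invert a 4x4 homogeneous rigid transform analytically:
    R -> R^T, t -> -R^T t (cheaper and more stable than a generic inverse)."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Hypothetical camera_frame -> marker_frame pose from the tracker:
c, s = np.cos(0.5), np.sin(0.5)
T_camera_marker = np.eye(4)
T_camera_marker[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
T_camera_marker[:3, 3] = [0.0, 0.2, 1.5]

# What the node has to publish instead when reverse_tf is set:
T_marker_camera = invert_tf(T_camera_marker)
```

The geometry is trivially invertible; the objection is only that the publishing node must know the intended tree direction in advance.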

***

So in conclusion, the "single parent" restriction has led us into
situations that aren't easily handled using the current tf model.
While we can make things work by manually inverting transforms or
setting parameters, the solutions are not "clean". If someone has an
idea of how to better fit the scenarios described above into the
"single parent" paradigm, please let us know.

Thanks,

Ivan Dryanovski
