Author: Max Schwarz
Date:  
To: Mike Purvis
CC: User discussions
Subject: Re: [ros-users] nimbro_network: Multi-master ROS network solution
Hi Mike,

thanks for your interest. I'll take a quick shot at answering your questions
here and will try to work the answers into the documentation.

On Tuesday, 22 September 2015 at 10:38:29, Mike Purvis wrote:
> Looks awesome. A few quick questions, just to help everyone understand how
> the capabilities of this system relate to Concert and fkie_multimaster:

Good point. I'll add a comparison section to the README.
The main point is that nimbro_network focusses on handling constrained/lossy
connections.

For example, during the DLR SpaceBot Cup we had 2 s of communication latency in
each direction: with any form of handshaking or discovery you would be stuck
waiting forever.

As far as I'm aware, nimbro_network is also the only ROS network transport
offering forward error correction to handle packet drops without
retransmissions.

The drawback in comparison to concert / multimaster_fkie is that there are no
higher-level capabilities like automatic discovery, process monitoring or job
scheduling.
nimbro_network provides a robust network transport of ROS messages and
services, nothing more.

>    - Do you handle prefixing/deprefixing TF frames for headers?
>       - For fields like Odometry::child_frame_id?


No, tf is copied across as-is. Also, our system has no message
introspection capabilities, so changing fields inside a message would be
difficult. Our policy so far has been to avoid tf frame name clashes across all
systems. This is certainly annoying in multi-robot scenarios, but I haven't
seen a nice, completely robust solution yet.

>    - When you rate-limit topics like tf and joint_state, do you aggregate
>    "full" snapshots, or is it a message-level throttle?


The basic rate limit configured in nimbro_topic_transport is message-level,
i.e. individual messages are dropped to meet the configured rate. For tf, there
is the dedicated tf_throttle package, which snapshots the whole tf tree at a
configurable rate.
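
If it helps to picture the difference: the standard topic_tools throttle node
(not part of nimbro_network, just shown for comparison, with example topic
names) is a purely message-level throttle in the same sense:

    # Standard ROS message-level throttle, for illustration only:
    # pass through at most 5 messages per second on /joint_states.
    rosrun topic_tools throttle messages /joint_states 5.0 /joint_states_throttled

Each forwarded message is an individual sample, so with tf you could end up
with transforms from different points in time; that's why tf_throttle sends
whole-tree snapshots instead.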

>    - Do you handle machine discovery?
>       - If so, is it using avahi or some other mechanism?


No, everything is statically configured. You probably don't want discovery to
fail due to bad WiFi in some competition/demo ;-)

>    - How dependent are you on the network's DNS infrastructure?


If you specify a hostname as the target address in the sender configuration, we
do a DNS lookup. If you give an IP address, we don't.
So DNS is entirely optional.

>    - How much provision has been made for Gazebo simulation of multi-robot
>    scenarios?


So far, we have only simulated one robot at a time (using separate roscores for
the robot and the teleoperation software).

>       - If any, does the user create multiple local roscores on different
>       ports, or do you simulate all on the same roscore as Gazebo?


I'd definitely advise you to use different roscores. Otherwise, the topics
subscribed to and published by the sender/receiver nodes will clash unless you
remap each topic.
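
As a rough sketch (the port numbers and shell setup are just an example,
nothing nimbro_network prescribes), you can run a second roscore on another
port and point each side at its own master:

    # Terminal 1: "robot" side master on the default port
    roscore -p 11311

    # Terminal 2: "operator" side master on a second port
    roscore -p 11312

    # In every shell that should talk to the second master:
    export ROS_MASTER_URI=http://localhost:11312

The sender nodes then run against one master and the receiver nodes against
the other, with the nimbro_network transport providing the link between them.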

>       - Can I simulate lossy connection between my simulated bots?


Yes, it is possible to run the transport between roscores running on the
same machine. There is no built-in functionality for dropping packets or
introducing delay, though.

If you want that, you can use a Linux loopback device and the "tc" utility to
make the connection "lossy" at the OS level. That's probably better than any
built-in solution, since tc/netem implements a lot of loss and failure modes
you can use.

tc netem documentation:
http://www.linuxfoundation.org/collaborate/workgroups/networking/netem
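
For instance (just a sketch of the tc/netem approach; the numbers are
arbitrary), to add delay and random loss on the loopback interface:

    # Add 1 s of delay and 10% random packet loss on loopback (run as root)
    tc qdisc add dev lo root netem delay 1000ms loss 10%

    # Inspect the current setting
    tc qdisc show dev lo

    # Remove it again
    tc qdisc del dev lo root

Keep in mind that this affects all traffic on lo, including the local roscore
communication, so you may want to confine it to a dedicated virtual interface
or a network namespace.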

> It's possible some of this is covered in your docs, but I didn't see it on
> a quick first pass.


I'm happy to answer any questions and extend the docs that way!

Cheers,
Max