[ros-users] [Discourse.ros.org] [ROS Projects] Autonomous rail-guided robot with manipulator

Cees Trouwborst ros.discourse at gmail.com
Tue Dec 13 10:56:52 UTC 2016




Hi ROS community! 

I'd like to share a project I'm currently working on for my master's thesis, for two reasons: (1) I think it is a nice demonstration of how the ROS ecosystem can accelerate the development of autonomous systems, and (2) I am using a large number of the packages and tools that are out there, and would really like to have a spot to brainstorm with fellow ROS enthusiasts about how to integrate everything in an elegant way. Of course, if I have specific questions, I'll ask them on ROS Answers, but this seems a better spot for open discussion.

On the project: I am working on increasing the autonomy of a (prototype) inspection platform. The video below shows an older version of the platform, but the idea stayed pretty much the same for the new version.

https://www.youtube.com/watch?v=ugjsbcns-JA

My focus will be on increasing the autonomy; practically this means (amongst other things) integrating the different subsystems we have and implementing planning.

I'll be posting some of my ideas/struggles here regularly (iff I have the feeling they are being read, of course), and I'd love to hear your input and ideas! If you like the comfort of a digital pub for this kind of discussion, I frequent the #ros and #moveit IRC channels on FreeNode and am always up for a good discussion about your project, or mine. :wink:

----------

## Simulations of a rail-guided robot
_I'd like to add the tags Gazebo, MoveIt!, SDF and URDF, but it seems I cannot add tags yet. This probably has something to do with my trust level; I'll add them later._

To start off with sharing one of my recent struggles in designing the system: I would like to be able to run simulations of my robotic platform, so that I can test the planning capabilities at a later stage. This simulation does not have to give insight into system dynamics like the inertias of the links or the interaction between the drive units and the rail; the main requirement is that I have a virtual environment to drive around in and observe using a virtual camera (just as on the real rail system). This camera will be used to detect Points of Interest that need inspection by the manipulator.

Some important assumptions / requirements:

1. In this phase: assume the environment is static and known. So: we know the shape and location of the rail w.r.t. the different metal parts in the environment.
2. In this phase: assume the position of the robot in the environment (i.e., its location on the rail) is known.
3. It would be nice if the robot manipulator in the simulation is able to collide with (so: cannot go through) certain other parts, like the steel parts in the environment, the rail, and other parts of the robotic platform.
4. Of course, the arm should actively avoid hitting those parts.
5. I should be able to deduce from a single 2D camera image where a Point of Interest is in 3D (raycasting).

What I came up with for a design:

* Use RViz for visualizing the robot's knowledge of the world. Use Gazebo for the simulation.
* The only difference between the info in RViz and in Gazebo is that, initially, RViz/the robot does not know about the Points of Interest.
* Use MoveIt! for arm control.
* Since the dynamics are not of interest: publish the pose of the robot model from ROS to Gazebo. Don't really model the platform driving over the rails; just set its position on top of the rails. This saves me from having to model the wheels, the interaction between the wheels and the rails, etc. (A minimal sketch of what I have in mind follows right after this list.)
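
To make that last point concrete, here is a minimal sketch of how I imagine setting the platform pose from ROS, using the `/gazebo/set_model_state` service provided by gazebo_ros. The model name and the example coordinates are placeholders; the real pose would be computed from the known rail geometry.

```python
#!/usr/bin/env python
# Minimal sketch: "teleport" the platform along the rail by setting its model
# pose in Gazebo directly, instead of simulating the wheel/rail dynamics.
# Assumes gazebo_ros is running; the model name "rail_robot" is a placeholder.
import rospy
from gazebo_msgs.msg import ModelState
from gazebo_msgs.srv import SetModelState
from geometry_msgs.msg import Point, Pose, Quaternion


def set_platform_pose(x, y, z, qx=0.0, qy=0.0, qz=0.0, qw=1.0):
    """Place the whole robot model at the given pose in the Gazebo world frame."""
    rospy.wait_for_service('/gazebo/set_model_state')
    set_state = rospy.ServiceProxy('/gazebo/set_model_state', SetModelState)
    state = ModelState()
    state.model_name = 'rail_robot'  # placeholder model name
    state.pose = Pose(position=Point(x, y, z),
                      orientation=Quaternion(qx, qy, qz, qw))
    state.reference_frame = 'world'
    return set_state(state)


if __name__ == '__main__':
    rospy.init_node('rail_pose_setter')
    # Example: put the platform 2 m along the rail; in the real system this pose
    # would follow from the 1-D rail coordinate and the known rail geometry.
    set_platform_pose(2.0, 0.0, 0.5)
```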

So, now things get interesting.

* In order to allow some collisions (drive unit with rails) in the Gazebo simulation, I need to use the [collide bitmask](http://sdformat.org/spec?ver=1.6&elem=collision#contact_collide_bitmask). This is part of the SDF spec, and not present in URDF.
* To move my arm in simulation, I want to use [ros_control](http://gazebosim.org/tutorials/?tut=ros_control). This means I need to use a URDF besides the SDF. This also means my manipulator model cannot be static.
* In order to use MoveIt! for collision detection and planning, the static parts of the environment need to be in my Planning Scene. Since I also want collision detection with the other parts of my robot (drive units, etc.), they should all be part of one large URDF, with some kind of joints between the different parts of the platform. (The alternative would be to add the other parts as collision objects and update their poses in the Planning Scene manually, but that is not an elegant solution at all.)
* If I want to set the pose of the simulated robot through ROS now, it actually means that I can only set the pose of the full robot model and will have to control the joints between the different robot parts. I do *not* like that, since these are not actually actuated joints. I would rather just publish the pose/tf of the different parts/links.
* Back to the planning scene: in the end, I need to perform a raycast to get a 3D location from my camera image. To be able to do raycasting in the planning scene, I think I need to use an octomap. However, adding the fixed environment (rail and tank) is done by adding Collision Objects, since I can use the exact same STLs as for my world model in Gazebo. (Of course, I could convert some of my STLs to an octomap and use that in simulation. I actually tried that, but ran into some bugs in the implementation: the octomap was always visualized with its origin at the robot base_link frame instead of the world frame, whatever I tried. I need more time to actually verify/fix this.) Rough sketches of both the Collision Object step and the pixel-to-ray step are below.
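
For the Collision Object part, this is roughly what I have in mind: load the exact same STL that the Gazebo world uses into the Planning Scene through `moveit_commander`. The file path, object name and frame id below are placeholders.

```python
#!/usr/bin/env python
# Minimal sketch: reuse the STL from the Gazebo world as a Collision Object
# in the MoveIt! planning scene, so the arm cannot plan through it.
# The file path, object name and frame id are placeholders.
import sys
import rospy
import moveit_commander
from geometry_msgs.msg import PoseStamped

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('static_environment_loader')
scene = moveit_commander.PlanningSceneInterface()
rospy.sleep(1.0)  # give the scene interface a moment to connect to move_group

rail_pose = PoseStamped()
rail_pose.header.frame_id = 'world'  # fixed frame in which the rail pose is known
rail_pose.pose.orientation.w = 1.0
scene.add_mesh('rail', rail_pose, '/path/to/rail.stl')  # same STL as the Gazebo world
```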
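
For the raycast itself, the first half (pixel to ray) could look like the sketch below, using `image_geometry`; intersecting that ray with the environment (the octomap or the known meshes) would then be the remaining step. The camera topic and the pixel coordinates are placeholders.

```python
#!/usr/bin/env python
# Minimal sketch of the "2D pixel -> 3D" first step: turn a detected Point of
# Interest pixel into a ray in the camera's optical frame. Intersecting this
# ray with the environment is a separate step. Topic name and pixel are placeholders.
import rospy
from sensor_msgs.msg import CameraInfo
from image_geometry import PinholeCameraModel

rospy.init_node('poi_ray')
cam_model = PinholeCameraModel()
cam_info = rospy.wait_for_message('/virtual_camera/camera_info', CameraInfo)
cam_model.fromCameraInfo(cam_info)

u, v = 320, 240                               # pixel of the detected Point of Interest
uv_rect = cam_model.rectifyPoint((u, v))      # no-op for an ideal simulated camera
ray = cam_model.projectPixelTo3dRay(uv_rect)  # unit ray in the camera's optical frame
rospy.loginfo("POI ray in frame %s: %s", cam_model.tfFrame(), str(ray))
```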

So, plenty of food for thought for me. I would love it if I could use just one robot description file, and only one representation of my static environment, but I am not sure if that is possible. 

Any input on these matters is definitely appreciated!






---
[Visit Topic](https://discourse.ros.org/t/autonomous-rail-guided-robot-with-manipulator/966/1) or reply to this email to respond.



