[ros-users] Creating a hardware device

Morgan Quigley mquigley at cs.stanford.edu
Mon Aug 2 06:10:39 UTC 2010


Hi Adrian,

I'm sure others will chime in here, but IMHO the greatest thing about
ROS is that you can do whatever you want :)   it's designed from the
ground up to be flexible and extensible, and to not favor one approach
over another equally-valid approach.

That being said, when I've built my own sensors and arms, if the
bandwidth isn't prohibitive, I first wrap up the hardware details in a
ROS node that performs whatever bit-mashing is necessary to publish
and subscribe to abstracted versions of what the hardware deals with,
e.g., translating to/from some packet format, carried over
USB/Ethernet/etc. to an embedded microcontroller, that contains sensor
readings and actuator commands. This node doesn't do anything beyond
munging the embedded protocol into ROS messages, plus perhaps handling
graceful startup/shutdown of the hardware. When using the real
hardware, this program is started once during a session and left up
for a few hours while hacking on the higher layers.
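To make the "bit-mashing" concrete: a sketch of the protocol-munging
part of such a driver wrapper, using a made-up wire format (the header
bytes, field layout, and encoder scale below are all hypothetical; your
firmware defines the real ones). In the actual node, the parsed angles
would be published as ROS messages and incoming commands packed and sent
back down the link.

```python
import struct

# Hypothetical wire format for the embedded link: a status packet
# carrying three signed 16-bit encoder counts, and a command packet
# carrying three signed 16-bit actuator efforts. Little-endian, one
# header byte each. All of this is illustrative, not a real protocol.
STATUS_FMT = "<Bhhh"
CMD_FMT = "<Bhhh"
STATUS_HDR = 0xA5
CMD_HDR = 0x5A
COUNTS_PER_RAD = 1000.0  # made-up encoder scale factor


def parse_status(packet):
    """Unpack raw bytes from the micro into joint angles (radians)."""
    hdr, c0, c1, c2 = struct.unpack(STATUS_FMT, packet)
    if hdr != STATUS_HDR:
        raise ValueError("bad header: 0x%02x" % hdr)
    return [c / COUNTS_PER_RAD for c in (c0, c1, c2)]


def build_command(efforts):
    """Pack abstract actuator commands into the wire format,
    clamping to the signed 16-bit range the firmware expects."""
    clamped = [max(-32768, min(32767, int(e))) for e in efforts]
    return struct.pack(CMD_FMT, CMD_HDR, *clamped)
```

The point is that everything above this function boundary sees radians
and efforts, never raw packets.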

Once the driver wrapper is done, you can write higher layers that
either implement your own stuff or interface to other ROS software
(e.g., rviz, robot state publisher, nav stack, etc. etc.). Since those
higher layers aren't tied to hardware, you can use rosbag to
record/play real data to the rest of the stack, and/or write a
simulation layer to swap with the "real" driver wrapper.
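The simulation-layer swap works because the fake driver exposes the same
abstract interface as the real one. A minimal sketch (class name, joint
count, and motion profile are all invented for illustration):

```python
import math


class FakeArmDriver:
    """Stand-in for the real driver wrapper: it produces joint
    positions in radians, just like the hardware node, but
    synthesizes them instead of talking to a microcontroller."""

    def __init__(self, num_joints=3, amplitude=0.5, period=4.0):
        self.num_joints = num_joints
        self.amplitude = amplitude  # radians
        self.period = period        # seconds

    def read_joint_state(self, t):
        # Smooth sinusoidal motion, phase-shifted per joint, so the
        # higher layers have something plausible to chew on.
        return [self.amplitude *
                math.sin(2.0 * math.pi * t / self.period + i * math.pi / 4)
                for i in range(self.num_joints)]
```

In a real system, both this and the hardware wrapper would publish
identical topics (e.g., a joint-state message), so nothing upstream has
to know which one is running.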

We just did a master-slave "puppeteer" demo of our arm that had a
straightforward stack:

* human motion capture
* inverse kinematics via OROCOS/KDL
* robot state publisher (forward kinematics)
* joint-space controller
* state estimation from encoders, accelerometers, etc.
* device driver wrappers
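To make one of the middle layers concrete, here's a minimal sketch of a
joint-space controller: a per-joint PD loop from desired vs. estimated
joint state to effort commands. The gains and structure are placeholders
for illustration, not what the arm above actually runs.

```python
class JointSpacePD:
    """Minimal per-joint PD controller: maps desired and estimated
    joint positions/velocities to effort commands. Gains are
    placeholders, not tuned for any real arm."""

    def __init__(self, kp, kd):
        self.kp = kp
        self.kd = kd

    def compute(self, q_des, q, v_des, v):
        # effort_i = kp * position error + kd * velocity error
        return [self.kp * (qd_i - q_i) + self.kd * (vd_i - v_i)
                for qd_i, q_i, vd_i, v_i in zip(q_des, q, v_des, v)]
```

In the stack above, q and v would come from the state-estimation layer
and the outputs would go down to the driver wrappers.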

It's all available in an unvarnished state here:
https://stanford-ros-pkg.googlecode.com/svn/trunk/openarms/

Each layer is a separate node. All of them talk to rviz in one way or
another to ease debugging. The robot state publisher (from the WG
robot_model stack) converts joint angles and the robot's kinematic
description (URDF) into tf frames. When doing any of this stuff, rviz
is extremely helpful for converting a new robot's CAD models into
URDF and, in general, for making sense of all the coordinate systems.
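What the robot state publisher does amounts to composing one transform
per joint down the kinematic chain. A toy 2-D version of that idea (the
real thing works in 3-D from the URDF; link lengths here are arbitrary):

```python
import math


def planar_fk(joint_angles, link_lengths):
    """Toy forward kinematics for a planar revolute chain: compose
    one rotation + translation per joint and return the frame
    (x, y, heading) at the end of each link. This is the 2-D analog
    of turning joint angles + kinematic description into tf frames."""
    x = y = theta = 0.0
    frames = []
    for q, length in zip(joint_angles, link_lengths):
        theta += q                    # rotate by the joint angle
        x += length * math.cos(theta)  # translate along the link
        y += length * math.sin(theta)
        frames.append((x, y, theta))
    return frames
```

Visualizing each of these intermediate frames (as rviz does with tf) is
exactly what makes coordinate-system mistakes obvious.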

HTH,
-mq


On Sun, Aug 1, 2010 at 8:34 PM, Adrian <adrian.kaehler at gmail.com> wrote:
>
>   I am interested to know how one goes about creating an interface for
> a device, such as an arm or hand.  In particular, the question is what is
> the lowest level component in the ROS architecture which one would implement
> first?  Presumably, once that low level element was in place, one could then
> go on to create services or actions that make use of that component.
>   Could someone point me to (or write?) a summary of what this stack looks
> like for something like the PR2 or Shadow arms?
>
>
> _______________________________________________
> ros-users mailing list
> ros-users at code.ros.org
> https://code.ros.org/mailman/listinfo/ros-users
>
>
