[ros-users] Creating a hardware device

Stuart Glaser sglaser at willowgarage.com
Mon Aug 2 17:53:19 UTC 2010


Hi Adrian,

An interface that we use here for controlling the arms is the
JointTrajectoryAction.  Here's a tutorial that describes how to send
one:

http://www.ros.org/wiki/pr2/Tutorials/Moving%20the%20arm%20using%20the%20Joint%20Trajectory%20Action
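
If it helps, here's a rough (untested) Python sketch of what sending
one looks like.  The controller namespace and joint names below are the
PR2's right arm, so adjust them for your robot:

#!/usr/bin/env python
# Sketch: send a single-waypoint goal to the joint trajectory action.
import roslib; roslib.load_manifest('pr2_controllers_msgs')
import rospy
import actionlib
from pr2_controllers_msgs.msg import (JointTrajectoryAction,
                                      JointTrajectoryGoal)
from trajectory_msgs.msg import JointTrajectoryPoint

rospy.init_node('trajectory_sender')
client = actionlib.SimpleActionClient(
    'r_arm_controller/joint_trajectory_action', JointTrajectoryAction)
client.wait_for_server()

goal = JointTrajectoryGoal()
goal.trajectory.joint_names = ['r_shoulder_pan_joint',
                               'r_elbow_flex_joint']
p = JointTrajectoryPoint()
p.positions = [0.2, -1.0]                 # target angles, in radians
p.velocities = [0.0, 0.0]                 # come to rest at the waypoint
p.time_from_start = rospy.Duration(2.0)   # reach it 2 s after start
goal.trajectory.points = [p]

client.send_goal(goal)
client.wait_for_result(rospy.Duration(10.0))

If all goes well, the arm moves to the waypoint over two seconds.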

-Stu

On Sun, Aug 1, 2010 at 11:29 PM, Adrian <adrian.kaehler at gmail.com> wrote:
>
>  Morgan,
>
>  Thanks, that is quite helpful.  The point of my question was that I want to
> do this in a way that makes the arm as easy as possible for existing ROS
> users, for example those familiar with the PR2 software, to just "drop in"
> my hand/arm.  So I would not want to create a bunch of new messages and
> services and such if I could help it; instead, I would like to figure out
> what level of abstraction is the correct place for my code to meet up with
> existing code.  Is the "joint space controller" the place to do this?
>   If so, what exactly is the "joint space controller"?  From what I read in
> the wiki, it looks like controllers are not exactly nodes, but some kind of
> module that is loaded into a general real-time layer which handles all the
> controllers on the robot.  Is that correct?  (I am learning about ROS as I
> start this, so I appreciate your patience with what probably look like dumb
> questions.)
>   Really, what I am getting at is: if I walked up to someone and said "hi,
> I made a cool hand, want to use it in your research?", what would be the
> best case for them, as ROS users and possibly PR2 owners, in terms of code
> I could have implemented to make the hand easy to use?
>   I gather that the Shadow Hand folks had this exact situation, what did
> they do?
>
> -=A
>
> ps. Nice to hear from you Morgan!
>
> On Sun, Aug 1, 2010 at 11:10 PM, Morgan Quigley <mquigley at cs.stanford.edu>
> wrote:
>>
>> Hi Adrian,
>>
>> I'm sure others will chime in here, but IMHO the greatest thing about
>> ROS is that you can do whatever you want :)  It's designed from the
>> ground up to be flexible and extensible, and to not favor one approach
>> over another equally valid one.
>>
>> That being said, when I've built my own sensors and arms, if the
>> bandwidth isn't prohibitive, I first wrap up the hardware details in a
>> ROS node that performs whatever bit-mashing is necessary to publish
>> and subscribe to abstracted versions of what the hardware deals with,
>> e.g., translating to/from some packet format, containing sensor
>> readings and actuator commands, that goes over USB/Ethernet/etc. to an
>> embedded microcontroller. This node doesn't do anything beyond munging
>> the embedded protocol into ROS messages, plus maybe handling graceful
>> startup/shutdown of the hardware. When using the real hardware, this
>> program is started once during a session and left up for a few hours
>> while hacking on the higher layers.
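>>
>> As a concrete (if totally made-up) sketch, such a wrapper can be as
>> small as this; the serial port, packet layout, and topic names are all
>> hypothetical stand-ins for whatever your hardware actually speaks:
>>
>> #!/usr/bin/env python
>> # Driver wrapper: munge a made-up serial protocol into ROS messages.
>> import struct
>> import serial                    # pyserial
>> import rospy
>> from sensor_msgs.msg import JointState
>> from std_msgs.msg import Float64MultiArray
>>
>> rospy.init_node('myarm_driver')
>> dev = serial.Serial('/dev/ttyUSB0', 115200, timeout=0.1)
>> pub = rospy.Publisher('joint_states', JointState)
>>
>> def command_cb(msg):
>>     # Pack abstract joint commands into the device's wire format.
>>     dev.write(struct.pack('<4f', *msg.data))
>>
>> rospy.Subscriber('joint_commands', Float64MultiArray, command_cb)
>>
>> while not rospy.is_shutdown():
>>     packet = dev.read(16)        # 4 little-endian floats of encoders
>>     if len(packet) == 16:
>>         js = JointState()
>>         js.header.stamp = rospy.Time.now()
>>         js.name = ['joint%d' % i for i in range(4)]
>>         js.position = struct.unpack('<4f', packet)
>>         pub.publish(js)
>> dev.close()                      # graceful shutdown of the hardware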
>>
>> Once the driver wrapper is done, you can write higher layers that
>> either implement your own stuff or interface to other ROS software
>> (e.g., rviz, robot state publisher, nav stack, etc. etc.). Since those
>> higher layers aren't tied to hardware, you can use rosbag to
>> record/play real data to the rest of the stack, and/or write a
>> simulation layer to swap with the "real" driver wrapper.
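>>
>> For instance, a stand-in "simulated driver" only has to speak the same
>> topics as the real one (again using the hypothetical names from the
>> sketch above); nothing upstream can tell the difference:
>>
>> #!/usr/bin/env python
>> # Fake driver: same topic interface, trivially simple dynamics.
>> import rospy
>> from sensor_msgs.msg import JointState
>> from std_msgs.msg import Float64MultiArray
>>
>> rospy.init_node('myarm_sim')
>> pub = rospy.Publisher('joint_states', JointState)
>> state = [0.0] * 4
>>
>> def command_cb(msg):
>>     state[:] = msg.data          # perfect, instantaneous "actuators"
>>
>> rospy.Subscriber('joint_commands', Float64MultiArray, command_cb)
>> rate = rospy.Rate(100)
>> while not rospy.is_shutdown():
>>     js = JointState()
>>     js.header.stamp = rospy.Time.now()
>>     js.name = ['joint%d' % i for i in range(4)]
>>     js.position = list(state)
>>     pub.publish(js)
>>     rate.sleep()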
>>
>> We just did a master-slave "puppeteer" demo of our arm that had a
>> straightforward stack:
>>
>> * human motion capture
>> * inverse kinematics via OROCOS/KDL
>> * robot state publisher (forward kinematics)
>> * joint-space controller
>> * state estimation from encoders, accelerometers, etc.
>> * device driver wrappers
>>
>> It's all available in an unvarnished state here:
>> https://stanford-ros-pkg.googlecode.com/svn/trunk/openarms/
>>
>> Each layer is a separate node. All of them talk to rviz in one way or
>> another to ease debugging. The robot state publisher (from the WG
>> robot_model stack) converts joint angles and the robot's kinematic
>> description (URDF) into tf frames. When doing any of this stuff, rviz
>> is extremely helpful for converting a new robot's CAD models into
>> URDF, and in general for making sense of all the coordinate systems.
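>>
>> Once the state publisher is up, checking a transform from Python takes
>> only a few lines (the frame names here are placeholders for whatever
>> links your URDF defines):
>>
>> #!/usr/bin/env python
>> # Query tf for the pose of one link relative to another.
>> import rospy
>> import tf
>>
>> rospy.init_node('frame_checker')
>> listener = tf.TransformListener()
>> listener.waitForTransform('/base_link', '/gripper_link',
>>                           rospy.Time(0), rospy.Duration(4.0))
>> (trans, rot) = listener.lookupTransform('/base_link', '/gripper_link',
>>                                         rospy.Time(0))
>> print 'gripper at', trans, 'quaternion', rot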
>>
>> HTH,
>> -mq
>>
>>
>> On Sun, Aug 1, 2010 at 8:34 PM, Adrian <adrian.kaehler at gmail.com> wrote:
>> >
>> >   I am interested to know how one goes about creating an interface
>> > for a device, such as an arm or hand.  In particular, the question
>> > is: what is the lowest-level component in the ROS architecture that
>> > one would implement first?  Presumably, once that low-level element
>> > was in place, one could then go on to create services or actions
>> > that make use of that component.
>> >   Could someone point me to (or write?) a summary of what this stack
>> > looks like for something like the PR2 or Shadow arms?
>> >
>> >



-- 
Stuart Glaser
sglaser -at- willowgarage -dot- com
www.willowgarage.com


