2010/11/16 Herman Bruyninckx <Herman.Bruyninckx@mech.kuleuven.be>
On Tue, 16 Nov 2010, Konrad Banachowicz wrote:

2010/11/16 Herman Bruyninckx <Herman.Bruyninckx@mech.kuleuven.be>
On Tue, 16 Nov 2010, Konrad Banachowicz wrote:


2010/11/16 Herman Bruyninckx <Herman.Bruyninckx@mech.kuleuven.be>


On Tue, 16 Nov 2010, Konrad Banachowicz wrote:

- mixing the creation of a TaskContext with a specific implementation of a
generic interface (trajectory generation in this case) is not a good
practice; these three things should be separated, in order to improve
modular reuse (see the sketch after this list).
- more in particular, new trajectory generation algorithms are preferably
submitted to Orocos/KDL as contributions, instead of "hiding" them inside
a ROSified node, where they are very difficult to reuse in other
frameworks or stand-alone applications.
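
A minimal sketch of that separation (all names here are illustrative, not
taken from any actual Orocos/KDL code):

#include <rtt/TaskContext.hpp>
#include <cstddef>
#include <string>
#include <vector>

// (1) Framework-independent interface: no RTT or ROS types, so the
//     algorithm family could live in KDL and be reused anywhere.
class TrajectoryGeneration {
public:
  virtual ~TrajectoryGeneration() {}
  // Sample the trajectory at time t; returns false past the end.
  virtual bool sample(double t, std::vector<double>& joint_pos) = 0;
};

// (2) One concrete algorithm; others can be added without touching (1) or (3).
class LinearTrajectory : public TrajectoryGeneration {
public:
  LinearTrajectory(const std::vector<double>& start,
                   const std::vector<double>& goal, double duration)
    : start_(start), goal_(goal), duration_(duration) {}
  bool sample(double t, std::vector<double>& joint_pos) {
    if (t > duration_) return false;
    joint_pos.resize(start_.size());
    for (std::size_t i = 0; i < start_.size(); ++i)
      joint_pos[i] = start_[i] + (goal_[i] - start_[i]) * t / duration_;
    return true;
  }
private:
  std::vector<double> start_, goal_;
  double duration_;
};

// (3) The TaskContext only wraps whatever implementation it is handed:
//     communication and scheduling stay out of the algorithm.
class TrajectoryComponent : public RTT::TaskContext {
public:
  TrajectoryComponent(const std::string& name, TrajectoryGeneration* gen)
    : RTT::TaskContext(name), gen_(gen) {}
private:
  TrajectoryGeneration* gen_;
};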
The trajectory generation resides in a ROS stack because it is intended to be as compatible as possible with the trajectory generation used in ROS.

I have no problem with using ROS nodes to improve interoperability, but
functional algorithms belong in component/node-independent source trees,
for maximal reusability within whatever 'component framework'.

Yes, you are right, but look at how image processing code is developed in ROS:
it begins as independent nodes/packages and over time moves into OpenCV.

I am not so convinced that this approach is good practice.

Including this trajectory generation code there is the way to go for me.
But from my point of view there is a reason against including it directly in KDL:
waiting for the inclusion of my patches, a release of KDL, and an update of KDL in ROS would stop my development for a while.
I think that things can be done in parallel: I can develop my components with embedded trajectory generation while working on inclusion in KDL.
When the new KDL is released, I will switch my nodes to the new implementation.

- I do not see many Configuration options, while this subject of trajectory
generation, servoing, etc. lends itself extremely well to fine tuning and
customization via the setting of configuration properties.
I have not seen any parameters that can be configured in the trajectory
generation algorithm which I implemented.

Strange. Typically, I expect parameters such as maximum speeds, minimal
time steps, required tolerance, etc.
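
A minimal sketch of such knobs exposed as properties (assuming the RTT 2.x
addProperty API; all names and default values are illustrative):

#include <rtt/TaskContext.hpp>
#include <string>

class JointTrajectoryGenerator : public RTT::TaskContext {
public:
  JointTrajectoryGenerator(const std::string& name)
    : RTT::TaskContext(name),
      max_velocity_(1.0),     // [rad/s]
      min_time_step_(0.001),  // [s]
      tolerance_(1e-4)        // [rad]
  {
    // Exposed as properties, these can be tuned from a deployment file
    // without recompiling the component.
    this->addProperty("max_velocity", max_velocity_);
    this->addProperty("min_time_step", min_time_step_);
    this->addProperty("tolerance", tolerance_);
  }
private:
  double max_velocity_, min_time_step_, tolerance_;
};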

Yes, but these parameters are specified for every trajectory point independently, through the JointTrajectoryPoint message.
http://www.ros.org/doc/api/trajectory_msgs/html/msg/JointTrajectoryPoint.html
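
For example, one point is filled in from roscpp like this (a sketch; the
fields are those of the message linked above):

#include <trajectory_msgs/JointTrajectoryPoint.h>
#include <ros/duration.h>

int main() {
  trajectory_msgs::JointTrajectoryPoint p;
  // One entry per joint:
  p.positions.push_back(0.5);      // [rad]
  p.velocities.push_back(0.1);     // [rad/s], the per-point speed
  p.accelerations.push_back(0.0);  // [rad/s^2]
  p.time_from_start = ros::Duration(2.0);  // reach this point 2 s in
  return 0;
}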


I find it better practice to separate the Communication from the
Configuration: the most basic, reusable approach is to provide a
configuration API, and only then wrap this into message-based communication.
I don't understand how this would be implemented.
Can you point me to some code implementing this approach?

Most of the KDL code is being designed in this way: first virtual
interfaces (generic to a full family of kinematic chains), then various
concrete implementations, each with its own configuration parameters, and
only then embedding into a component model (be it RTT TaskContexts, or ROS
nodes) with asynchronous message types.
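
KDL's velocity-level IK is a concrete example of the first two layers
(the parameter defaults below are quoted from memory, so check the header):

#include <kdl/chain.hpp>
#include <kdl/chainiksolvervel_pinv.hpp>
#include <kdl/frames.hpp>
#include <kdl/jntarray.hpp>

int main() {
  // A trivial one-joint chain, just to have something to solve.
  KDL::Chain chain;
  chain.addSegment(KDL::Segment(KDL::Joint(KDL::Joint::RotZ),
                                KDL::Frame(KDL::Vector(0.0, 0.0, 0.5))));

  // ChainIkSolverVel is the virtual interface; ChainIkSolverVel_pinv is
  // one concrete implementation, configured via its own constructor
  // parameters instead of via any component framework.
  KDL::ChainIkSolverVel_pinv solver(chain, 1e-5 /* eps */, 150 /* maxiter */);

  KDL::JntArray q(chain.getNrOfJoints());
  KDL::JntArray qdot(chain.getNrOfJoints());
  solver.CartToJnt(q, KDL::Twist::Zero(), qdot);
  return 0;
}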

Herman

I would like to discuss the overall design of an Orocos robot controller rather than the internal design of individual components.
I think we should start with the communication between the hardware/servo component and the trajectory generation.
I created oro_servo_msgs for this purpose, but I developed it with the IRP6 manipulators in mind.
I imagine that it could become a common way of communicating with any modern manipulator, like the KUKA LWR.
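
To make the discussion concrete, this is roughly the kind of information I
have in mind at the servo level (a hypothetical sketch only, not the actual
contents of oro_servo_msgs):

#include <vector>

// Hypothetical illustration, NOT the actual oro_servo_msgs definition.
// Sent by the trajectory generator to the servo each control cycle:
struct ServoSetpoint {
  std::vector<double> desired_position;    // [rad]
  std::vector<double> desired_velocity;    // [rad/s]
  std::vector<double> feedforward_torque;  // [Nm], may be left empty
};

// Reported back by the servo/hardware component:
struct ServoState {
  std::vector<double> measured_position;   // [rad]
  std::vector<double> measured_velocity;   // [rad/s]
  std::vector<double> measured_torque;     // [Nm]
};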