Now, I'm not using timers in these programs, so I cannot say what the
influence of the 1 ms wait in timer_manager.h is. I have not looked at
the code in detail, but it looks like this wait could advantageously be
replaced by the following code:

      int remaining_time = std::max((int)((sleep_end.toSec() - current.toSec()) * 1000), 0);
      timers_cond_.timed_wait(lock, boost::posix_time::milliseconds(std::min(50, remaining_time)));

Is there any reason to verify at 1 kHz that time is not going backward?


This is unfortunately not possible because of the ROS clock used in simulation.  If simulated time is going faster than normal time, you end up missing timer events.  Building up a predictive model of how fast time is actually moving is possible, but it is not a high priority.
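
To make that concrete with made-up numbers (nothing here comes from
timer_manager.h, it is just a standalone illustration of the failure
mode):

      #include <iostream>

      int main()
      {
        const double sim_speed         = 10.0; // sim time runs 10x wall time
        const double wall_wait_ms      = 50.0; // proposed worst-case wall wait
        const double next_timer_sim_ms = 5.0;  // next timer deadline (sim time)

        // Sim time that elapses while blocked for 50 ms of wall time: 500 ms.
        const double sim_elapsed_ms = wall_wait_ms * sim_speed;

        // The timer fires 495 ms (sim time) late; with the current 1 ms poll
        // the worst case is only about 1 ms * sim_speed = 10 ms.
        std::cout << "lateness: " << (sim_elapsed_ms - next_timer_sim_ms)
                  << " ms of simulated time" << std::endl;
        return 0;
      }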

Josh


Thanks for the feedback.

I somehow feel sorry to lose at least 15% of the CPU time per node on my embedded system for something that is only an issue for simulation systems.
What would make the most sense: separating the simulation functionality from the (pseudo) real-time system, or adding a global parameter that defines which type of system we are running? I would think the second option is better and easier to implement, e.g. along the lines of the sketch below.
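
For the second option, a rough sketch of what I have in mind, keyed off
the existing /use_sim_time parameter (this is untested; lock,
timers_cond_, sleep_end and current are assumed to be the variables
already used in timer_manager.h):

      bool use_sim_time = false;
      ros::param::get("/use_sim_time", use_sim_time); // stays false if unset

      int remaining_time = std::max((int)((sleep_end.toSec() - current.toSec()) * 1000), 0);

      // Keep the 1 kHz poll only when the simulated clock can outrun wall
      // time; otherwise block until signalled, or for at most 50 ms.
      int wait_ms = use_sim_time ? std::min(1, remaining_time)
                                 : std::min(50, remaining_time);

      timers_cond_.timed_wait(lock, boost::posix_time::milliseconds(wait_ms));

(In a real implementation the parameter would of course be read once at
start-up rather than on every loop iteration.)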

I for one will definitely apply the above change to all my on-board systems. The gain is too dramatic to ignore.

Now, a bonus question I'm going to investigate: are there the same kind of high-frequency threads in the rospy implementation?
-- 
Dr. Cedric Pradalier
http://www.asl.ethz.ch/people/cedricp