To continue my fight against overhead: I've obtained a good performance improvement in rosout by changing the timeout argument of CallbackQueue::callAvailable and CallbackQueue::callOne from 0.01 s to 0.1 s. Now my rosout really sits in a corner doing nothing while I'm not logging. Can you foresee any drawbacks to this change?

On 07/31/10 11:15, Cedric Pradalier wrote:
>
>> Now, I'm not using timers in these programs, so I cannot say what is
>> the influence of the 1 ms wait in timer_manager.h. I have not looked
>> at the code in detail, but it looks like this wait could be
>> advantageously replaced by the following code:
>>
>> int remaining_time = std::max((int)((sleep_end.toSec() -
>> current.toSec())*1000), 0);
>> timers_cond_.timed_wait(lock,
>> boost::posix_time::milliseconds(std::min(50, remaining_time)));
>>
>> Is there any reason to verify that the time is not going backwards
>> at 1 kHz?
>>
>> This is unfortunately not possible because of the ROS clock used in
>> simulation. If simulated time is going faster than normal time, you
>> end up missing timer events. Building up a predictive model of how
>> fast time is actually moving is possible, but not a high priority.
>>
>> Josh
>
> Thanks for the feedback.
>
> I somehow feel sorry to lose at least 15% of the CPU time per node on
> my embedded system for something that is only an issue for simulation
> systems.
> What would make the most sense: separating the simulation
> functionality from the (pseudo) real-time system, or adding a global
> parameter that defines which type of system we are running? I would
> think the second possibility is better and easier to implement.
>
> I for one will definitely apply the above change to all my on-board
> systems. The gain is too dramatic to ignore.
>
> Now, a bonus question I'm going to investigate: is there the same
> kind of high-frequency thread in the rospy implementation?

-- 
Dr. Cedric Pradalier
http://www.asl.ethz.ch/people/cedricp