On Tue, Sep 14, 2010 at 7:02 PM, Markus T. wrote:
> This approach is OK for a small number of nodes, but by adding more and
> more nodes it becomes inefficient. I focused on dropping messages and
> looked into the source code of topic_tools/drop.cpp. This tool adds
> another node between my two original nodes. The additional node still
> receives every message from the source and forwards only a subset of it.
> Now I have a large number of publisher- and subscriber-nodes with
> individual filters, for example:
>
> publisher ->   filter        ->  subscriber
> P_1       -> [F_A1,...,F_X1] -> [A_1,...,X_1]
> ...       -> [...]           -> [...]
> P_n       -> [F_An,...,F_Xn] -> [A_n,...,X_n]
>
> In the worst case an additional number of n*X filter-nodes must be
> created, which will result in much more message (de)serialization than
> needed.

hi Markus,

The topic_tools nodes make use of a special ShapeShifter class, which doesn't (de)serialize messages (it doesn't even know how). Those tools just forward (perhaps a subset of) the incoming serialized messages. So you incur added latency from the extra hop, but there shouldn't be significant computational overhead. (There's a rough sketch of the pattern in the p.s. below.)

> With that being said, my question remains:
> Is it possible to filter the messages before they are actually sent?
> And if so, where do I have to modify ros?

As Ken mentioned, this is on the wish list for ROS 1.3, but it's not clear yet whether we'll get to it. In addition to enhancements to the client libraries, it will require modifications to the ROSTCP protocol (e.g., to allow the subscriber to send a request back up the socket to the publisher to request a new message).

Something that we discussed recently as an alternative to real polled topics is to write a ShapeShifter-based node that offers a GetNextMessage service. It's similar to using drop or throttle, but has the advantage that each downstream client decides its own update rate. There would be some details to work out around per-client message caching, but it should be doable. (A sketch of that service idea is also in the p.s.)

brian.
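
p.s. In case it helps to see the mechanism, here's roughly the pattern drop and relay use. This is only a minimal sketch, not the actual drop.cpp: the "input"/"output" topic names and the keep-one-in-two rule are made up here, and the real tools take them from the command line.

  // Forward a subset of messages without deserializing, in the spirit
  // of topic_tools/drop.cpp (illustrative sketch, not the real tool).
  #include <ros/ros.h>
  #include <topic_tools/shape_shifter.h>

  ros::NodeHandle* g_nh = NULL;
  ros::Publisher g_pub;
  bool g_advertised = false;
  int g_count = 0;

  void callback(const topic_tools::ShapeShifter::ConstPtr& msg)
  {
    if (!g_advertised)
    {
      // The ShapeShifter carries the datatype and md5sum of the incoming
      // serialized message, so we can advertise without knowing the type.
      g_pub = msg->advertise(*g_nh, "output", 10);
      g_advertised = true;
    }
    if (g_count++ % 2 == 0)   // keep every other message
      g_pub.publish(msg);     // forwards the raw serialized bytes as-is
  }

  int main(int argc, char** argv)
  {
    ros::init(argc, argv, "simple_drop");
    ros::NodeHandle nh;
    g_nh = &nh;
    ros::Subscriber sub =
      nh.subscribe<topic_tools::ShapeShifter>("input", 10, callback);
    ros::spin();
    return 0;
  }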
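
p.p.s. For the polled-topic idea, the service would probably have to return raw serialized bytes, since a fixed .srv type can't name the message type in advance. Something along these lines, purely hypothetical (no such service exists in topic_tools today, and all of the field names are made up):

  # GetNextMessage.srv (hypothetical)
  string client_id     # lets the node keep a per-client cursor
  ---
  string datatype      # e.g. "std_msgs/String"
  string md5sum
  uint8[] data         # raw serialized bytes, as captured by a
                       # ShapeShifter subscription

The server side would cache the latest ShapeShifter message and keep a per-client sequence number, so a client that polls faster than the publisher doesn't get the same message twice; the client deserializes the bytes itself using the returned datatype/md5sum. Again, just a sketch of the idea, not something that exists.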