On Thu, Dec 2, 2010 at 11:56 PM, Daniel Stonier wrote:

> I feel a bit newbish asking this because I'm a robotics-trained
> programmer rather than a university-trained one, so no real experience
> with networked programming apart from opening and closing the odd
> posix socket; hopefully you won't hold it against me ;)
>
> Why exactly did ROS build its own, fairly extensive communications
> system?
>
> I love the fact that it
>
> 1) Abstracts data types in simple text files.
> 2) Mostly handles serialisation in a cross-platform way.
> 3) Framework allows for compile-time validation of callbacks (no void
> pointers for data).
> 4) No bottleneck at the server.
>
> However, was there nothing that suited ROS's requirements previously?
> I'm guessing no, since ROS usually avoids reinventing the wheel. It
> just seems odd, given that it appears to be such a fundamental
> requirement for a lot of projects.

It's hard to point to one specific reason, though I wouldn't characterize it as an "extensive communication system" -- the communication is perhaps one of the simpler parts of the system. We wanted client libraries to be easy to implement, so that people could write clients for other languages (and have). One wheel we didn't reinvent was XMLRPC: we chose it for the coordination layer because implementations are easy to find.

I think the harder part of the design was the naming system embedded in ROS. As basic as it is, that's probably where most of the design debates were. Licensing was one issue -- we wanted BSD, which immediately eliminated many possible starting points. We knew we wanted something P2P and data-driven, but weren't 100% sure of the exact form. We also knew that we wanted to support research code co-existing in the system, so we carried a distributed design principle throughout ROS (i.e. from the packaging system all the way up to the community). We built ROS mainly as a meta-framework, so that underlying design decisions could be changed quickly.
The anonymous pub/sub fell out of iterating on the concept. For serialization, Google Protocol Buffers came out not long after we had started. Its similarity to the ROS IDL made it attractive, as did some of the additional functionality, but it treated array data as binary blobs. IIRC, I argued against switching to it because (1) we had something that was working and needed to move on, and (2) we would have ended up implementing a meta-layer on top of it to represent arrays better. As it was, we didn't really reinvent the wheel on serialization: it's basically C struct serialization (Python's struct module and numpy read it directly). And yes, there is CORBA, but enough people immediately grimace at the mention of the name that we didn't want to place that hex on ROS. Also, see the note above regarding licensing.

> So, to the point - we have another group (non-control, strictly
> software) here who have a real need to upgrade their communications
> systems. They'll also be connecting with our control boards. The above
> requirements are ideally suited to what they want, and ROS
> communications would seem to fit their need, except for the fact that
> we can't realistically build the minimum for a roscore outside of
> linux yet.

I'm not sure what you mean by this last sentence. Windows?

> Are there currently any alternatives to ROS for this? I've done a bit
> of testing with google protobufs, but only to do serialisation on
> large data-type dumps to files. You'd still need a framework on top of
> this to provide the other requirements.

There are lots of alternatives. The Web world has churned out a lot of "message queue" frameworks, there's CORBA and ICE, and I'm sure many more, as pubsub is a general networking design pattern. If you're willing to mix and match, there are also messaging systems like Spread.

> Also, would there be any disadvantages to using ROS communications for
> a purely networking project's needs?
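(A quick aside to illustrate the C-struct serialization point above: the same packed little-endian bytes can be read back by both Python's struct module and numpy with a matching dtype. The record layout and field names here are made up for illustration, not an actual ROS message.)

```python
import struct
import numpy as np

# Pack a toy fixed-size record: little-endian int32 followed by two float32s.
payload = struct.pack('<iff', 42, 1.5, 2.5)

# Python's struct module reads the bytes back directly...
seq, x, y = struct.unpack('<iff', payload)

# ...and numpy interprets the very same bytes via a structured dtype.
dtype = np.dtype([('seq', '<i4'), ('x', '<f4'), ('y', '<f4')])
rec = np.frombuffer(payload, dtype=dtype)[0]
```

No generated parsing code needed on the reading side -- a matching format string or dtype is enough.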
We view the "R" in ROS as ironic -- we made an explicit effort to purge the 'core' ROS of all robotics code. That said, there are many pubsub systems out there.

cheers,
Ken