[ros-users] [Discourse.ros.org] [Quality Assurance] Input validation as a metric for quality
ros.discourse at gmail.com
Tue Jan 30 01:13:00 UTC 2018
Having a DBC (design-by-contract) approach sounds like the best way forward; however, for a lot of languages, libraries or standard tooling to implement packages in such a fashion don't exist. I think with ROS2 we'll be seeing packages in a number of different languages (at least Go, Rust, and C, in addition to Python, C++, and Lisp).
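Even without library support, the core of DBC can be sketched in a few lines. The decorator and the speed limits below are hypothetical illustrations, not any existing ROS API -- just a minimal precondition checker in plain Python:

```python
import functools

def requires(*preconditions):
    """Minimal design-by-contract sketch: each precondition is a
    (predicate, description) pair checked before the call runs."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            for predicate, description in preconditions:
                if not predicate(*args, **kwargs):
                    raise ValueError(f"contract violated: {description}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical contract: a velocity command must be bounded and not NaN.
@requires(
    (lambda v: v == v, "speed is not NaN"),      # NaN != NaN
    (lambda v: abs(v) <= 2.0, "speed within +/-2.0 m/s"),
)
def set_speed(v):
    return v

print(set_speed(1.5))  # passes the contract
```

The same pattern works in any language with first-class functions, which is part of why a language-neutral guideline (below) seems feasible.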
As such, I have a few open-ended questions. How can we:
* score a given implementation (say on a scale of 0, meaning no contract, to 5, meaning a contract with no assumptions left unchecked)?
* handle trade-offs? Lots of package owners might face an increase in latency in order to raise their score from, say, 4 to 5. As a result, we might want to categorize types of assumptions so that the highest-priority ones affect the score more than low-priority ones.
* create guidelines regarding what a contract must contain, or better (next point)?
* create a general guideline for all languages that stays applicable with or without DBC support? This would enable a C package to adhere to the guidelines as easily as a Python package.
* find out the major issues in integration between packages, as a starting point for the previous points? (Some issues I've faced have been mentioned in passing, but I'm by no means a good representation of the ROS community.)
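One way the first two points could combine: treat the score as the priority-weighted fraction of assumptions a package actually checks. The function and weights below are entirely hypothetical, just to make the trade-off concrete:

```python
# Hypothetical scoring sketch: score = 5 * (weighted fraction of the
# package's declared assumptions that are actually checked).
def contract_score(assumptions):
    """assumptions: list of (checked: bool, weight: float) pairs,
    where weight models the priority of that assumption."""
    total = sum(w for _, w in assumptions)
    if total == 0:
        return 0.0  # no declared assumptions at all
    checked = sum(w for ok, w in assumptions if ok)
    return round(5.0 * checked / total, 2)

# A package that checks both high-priority assumptions (weight 3.0)
# but skips one low-priority assumption (weight 1.0):
print(contract_score([(True, 3.0), (True, 3.0), (False, 1.0)]))  # 4.29
```

Under this kind of weighting, skipping a cheap low-priority check costs little, which captures the latency trade-off above.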
Since DBC implementations are immature, we should start somewhere so as to enable simpler on-boarding of packages to the DBC paradigm. I like the idea of using Python's Hypothesis to create simple test nodes for other packages to run and test against.
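The value of Hypothesis-style testing is in the adversarial inputs it generates. As a dependency-free sketch (stdlib `random` standing in for Hypothesis strategies; the contract predicate is a made-up example, not a real message spec), a test node's core loop might look like:

```python
import math
import random

# Stand-in for a Hypothesis float strategy: random values plus the
# edge cases property-based testing is good at surfacing.
EDGE_CASES = [0.0, -0.0, math.inf, -math.inf, math.nan, 1e308, -1e308]

def float_inputs(n=100, seed=0):
    rng = random.Random(seed)
    yield from EDGE_CASES
    for _ in range(n):
        yield rng.uniform(-1e6, 1e6)

# Hypothetical contract a subscriber is assumed to enforce:
# speeds must be finite and within +/-2.0 m/s.
def violates_contract(v):
    return not math.isfinite(v) or abs(v) > 2.0

rejected = sum(violates_contract(v) for v in float_inputs())
print(f"{rejected} adversarial inputs should be rejected by the node")
```

In a real test node, each violating input would be published on the topic under test and the node's behavior observed; Hypothesis would additionally shrink failing inputs to minimal counterexamples.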
*I think* for all major ROS packages with a message, we should add Python nodes that read and send messages (of pre-defined types) to test the assumptions for that message. This helps people test whether their system is resilient to wrong inputs by writing output tests themselves (following examples is easier than starting from scratch).
Maybe make this a standard practice, so people can call relevant functions from the message package to test the assumptions for their own messages. (This might result in the same code being written multiple times in multiple languages, or in writing the code once in C for inter-operability.)
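The "relevant functions from the message package" could be as simple as a per-message checker that returns the list of violated assumptions. Everything below is hypothetical -- `Twist2D`, `check_assumptions`, and the limits are illustrative stand-ins, not generated ROS code:

```python
from dataclasses import dataclass
import math

# Hypothetical stand-in for a generated message class; the idea is that
# the message package would ship a checker alongside each message type.
@dataclass
class Twist2D:
    linear_x: float
    angular_z: float

def check_assumptions(msg, max_linear=2.0, max_angular=1.0):
    """Return the list of violated assumptions (empty list = message ok).
    The limits here are illustrative, not part of any real message spec."""
    problems = []
    if not math.isfinite(msg.linear_x):
        problems.append("linear_x is not finite")
    elif abs(msg.linear_x) > max_linear:
        problems.append(f"|linear_x| exceeds {max_linear}")
    if not math.isfinite(msg.angular_z):
        problems.append("angular_z is not finite")
    elif abs(msg.angular_z) > max_angular:
        problems.append(f"|angular_z| exceeds {max_angular}")
    return problems

print(check_assumptions(Twist2D(1.0, 0.5)))       # []
print(check_assumptions(Twist2D(math.inf, 3.0)))  # two violations
```

Returning a list of human-readable violations (rather than raising on the first one) makes the same function usable both in tests and in the debug-launch scenario below.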
This also enables a debug launch mode, where people can launch these nodes to check whether their packages are sending the correct data during normal run-time operation, or run these nodes against a rosbag.
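The rosbag variant reduces to replaying recorded samples through the same contract predicate. A minimal sketch, assuming the samples have already been extracted from the bag into (time, value) pairs (no rosbag API is used here):

```python
import math

def audit_log(samples, predicate):
    """Yield the (time, value) entries that break the contract.
    samples: iterable of (timestamp, value) pairs, e.g. pulled
    from a rosbag; predicate: the per-value contract check."""
    for t, v in samples:
        if not predicate(v):
            yield (t, v)

# Hypothetical recording with one NaN and one out-of-range value.
recorded = [(0.0, 1.2), (0.1, float("nan")), (0.2, 9.9)]
bad = list(audit_log(recorded,
                     lambda v: math.isfinite(v) and abs(v) <= 2.0))
print(bad)  # [(0.1, nan), (0.2, 9.9)]
```

The live debug-launch case is the same loop with a topic subscription in place of the recorded list.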
[Visit Topic](https://discourse.ros.org/t/input-validation-as-a-metric-for-quality/3732/5) or reply to this email to respond.