I had missed this in my email; I don't know where you are yet. eblearn looks cool, but all these tools have limits: each works well on certain classes of problems and not on others (which is in fact a consequence of the "no free lunch" theorem). I rather like FLANN because it is easy to scale over clusters, easy to load local models, and inherently incremental, and on a given machine it scales logarithmically in the number of objects learned. The user's job would then be not to find "good" features but features that work well with FLANN.
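The logarithmic scaling comes from tree-based nearest-neighbour indices. As a rough sketch of that idea (using SciPy's cKDTree as a stand-in for FLANN's own index structures, which are an assumption here and differ in their approximate-search parameters):

```python
import numpy as np
from scipy.spatial import cKDTree

# Build an index over N feature vectors once; each query then costs
# roughly O(log N) rather than the O(N) of brute-force matching.
rng = np.random.default_rng(0)
features = rng.random((10000, 32))   # 10k descriptors, 32-D (synthetic)

tree = cKDTree(features)

# Query with a slightly perturbed copy of a known descriptor;
# the index should return that descriptor as the nearest neighbour.
query = features[42] + 0.001
dist, idx = tree.query(query, k=1)
print(idx)   # expect 42
```

The incremental flavour Pierre mentions would correspond to rebuilding or extending the index as new objects are learned; FLANN's actual API for that is not shown here.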
All in all, eblearn + visiongrader seem like great tools, with a nice
and well-thought-out API.
I will definitely look in more detail at their internals (I had
already played around with eblearn a few months ago) and use/re-use as
much as possible of the existing code base.
Thanks for your interest, Rodrigo. If you reuse existing parts, I guess the best fit for what you want to do would be to use the tensor library libidx and its GUI as a basis for image/matrix manipulations (not tied to any learning scheme), and visiongrader for performance evaluation.

Pierre
_______________________________________________
ros-users mailing list
ros-users@code.ros.org
https://code.ros.org/mailman/listinfo/ros-users