[ros-users] ar_pose basics
abhimohpra at gmail.com
Thu Nov 18 14:26:03 UTC 2010
Thanks, Ivan and Steven.
For localization, as explained earlier, I have limited sensing capabilities,
which I want to make more effective with a vision (camera) sensor.
Also, on the vision side I am avoiding stereo vision and IMU sensors due to my
limited hardware configuration.
So, as per my understanding and from your reply... I can mount a single
camera on the robot (with the camera_info topic published) and place various
patterns at different locations whose (x, y, z) I know in advance. I
don't need an EKF or a probabilistic approach since I am taking a single input
from the webcam (Logitech Pro 9000, up to 30 fps) and know where all the
patterns are placed. A particular pattern will get matched whenever the robot
comes across it, and ar_pose will recognize the robot's current position and
update it.
Please let me know if I am wrong in this approach.
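To check my understanding of the geometry, this is the math I have in mind: the marker's pose in the map frame is known in advance, ar_pose gives the marker's pose in the camera frame, and composing the two recovers the camera (robot) pose in the map. A minimal sketch with numpy homogeneous transforms (all frame names and numbers here are made up for illustration, not taken from ar_pose's actual output):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Known in advance: the marker's pose in the map frame (surveyed by hand).
T_map_marker = make_T(np.eye(3), [2.0, 1.0, 0.5])

# Reported at runtime (e.g. by ar_pose): the marker's pose in the camera frame.
T_cam_marker = make_T(np.eye(3), [0.0, 0.0, 1.2])  # marker 1.2 m in front

# Camera pose in the map frame: T_map_cam = T_map_marker * inv(T_cam_marker)
T_map_cam = T_map_marker @ np.linalg.inv(T_cam_marker)
print(T_map_cam[:3, 3])  # camera position in map coordinates
```

With identity rotations this just subtracts the camera-to-marker offset from the marker's map position; with real rotations the same composition still applies.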
If it is right, then:
1. How will the robot know its distance from the pattern?
2. Is "demo_single" the right program to study?
3. I have a static map and the /amcl node running; how will the ar_pose input
get combined to know the exact position after recognizing the pattern and its
known location?
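For question 1, my guess is that the distance would come from the translation part of the camera-to-marker pose that ar_pose reports, i.e. the Euclidean norm of that vector. A sketch under that assumption (the translation values are made up):

```python
import numpy as np

# Illustrative camera->marker translation, as a marker-tracking node
# like ar_pose might report it (made-up values, in metres).
t_cam_marker = np.array([0.3, -0.1, 1.2])

# Straight-line distance from the camera to the pattern.
distance = np.linalg.norm(t_cam_marker)
print(round(distance, 3))  # -> 1.241
```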
View this message in context: http://ros-users.122217.n3.nabble.com/ar-pose-basics-tp1921421p1924148.html
Sent from the ROS-Users mailing list archive at Nabble.com.