[ros-users] Visual-Inertial SLAM Sensor
Ingo Lütkebohle
iluetkeb at gmail.com
Fri Jan 13 16:58:52 UTC 2012
Janosch,
would it be interesting for you to look beyond VSLAM applications?
I can say that for social human-robot interaction, the camera setup is
one of the more complex pieces, and it requires a fair bit of processing
even for basic tasks. One needs to be able to move the camera around
both quickly and slowly; it must not be too big (today this is usually
achieved by placing the sensor apart from the processing unit); and
integration with an inertial sensor is definitely a bonus (one can
implement the vestibulo-ocular reflex based on good joint sensors, but
that is not ideal). Furthermore, most of the heads I know of have
abysmal optics (fixed focus, etc.).
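The vestibulo-ocular reflex mentioned above can be sketched in a few lines:
counter-rotate the camera by the measured head rotation rate so the gaze
direction stays fixed while the head moves. This is only an illustrative
sketch; the function name and the pan/tilt convention are assumptions, not
part of any existing API.

```python
# Hypothetical VOR-style stabilization sketch: the command that cancels
# head rotation is simply the negated, gain-scaled angular rate, whether
# it comes from joint encoders or from an IMU gyro.

def vor_compensation(gyro_rate_rad_s, gain=1.0):
    """Return the pan/tilt velocity command that counters head rotation."""
    return tuple(-gain * w for w in gyro_rate_rad_s)

# Head yaws at +0.5 rad/s and pitches at -0.2 rad/s, so the camera
# should pan at -0.5 rad/s and tilt at +0.2 rad/s to hold its gaze.
cmd = vor_compensation((0.5, -0.2))
```

With joint sensors the rate estimate lags behind the true head motion, which
is why an IMU mounted next to the camera tends to give better stabilization.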
To support such applications, your system would primarily need to cope
with changes to the spatial relation of the cameras at runtime.
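Concretely, the relative extrinsics between two cameras on a moving head
cannot be treated as a fixed calibration constant; they have to be
recomputed from the current poses each frame. A minimal sketch with NumPy
(the function names are illustrative, not from any particular library):

```python
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def relative_extrinsics(T_world_cam0, T_world_cam1):
    """Pose of cam1 in cam0's frame: T_cam0_cam1 = inv(T_world_cam0) @ T_world_cam1."""
    return np.linalg.inv(T_world_cam0) @ T_world_cam1

# cam0 at the origin; cam1 offset 0.1 m along cam0's x-axis (a stereo baseline).
T0 = make_transform(np.eye(3), [0.0, 0.0, 0.0])
T1 = make_transform(np.eye(3), [0.1, 0.0, 0.0])
T_01 = relative_extrinsics(T0, T1)
# On a pan-tilt head, T0 and T1 both change over time, so T_01 must be
# re-evaluated whenever either camera moves, unlike a rigid stereo rig.
```

In ROS terms this corresponds to publishing the camera frames as dynamic
transforms rather than static calibration.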
cheers,
Ingo
On Fri, Jan 13, 2012 at 2:02 PM, Janosch Nikolic
<janosch.nikolic at mavt.ethz.ch> wrote:
> Dear ROS users and roboticists,
>
> We (Swiss Federal Institute of Technology, ETH) are about to develop an open,
> low-cost visual-inertial camera system for robotics. The goal is a standard
> piece of hardware that combines a monocular/stereo/multi-camera setup with
> inertial sensors (an IMU), providing synchronized, calibrated data in ROS.
>
> Such a piece of hardware could allow the community to focus more on the
> development of novel algorithms and less on hardware and integration issues.
> Our hope is that this will motivate young PhD students and researchers
> around the world to make their VSLAM frameworks ROS compatible, ultimately
> leading to 'plug and play VSLAM for everyone'.
>
> The system will be based on the latest generation of Xilinx FPGAs
> (Artix-7/Kintex-7/Zynq; that part is fixed) as well as a user-programmable
> CPU, and will provide at least a GigE port. We target real-time, low-power
> processing, and the user should have access to raw data as well as
> pre-processed information such as features and flow fields.
>
> We would like to invite you to give us feedback and thereby influence the
> design of our system. We would be extremely happy about any feedback you can
> provide; general feedback is as welcome as technical details such as:
>
> - Cameras (number, configuration, resolution, HDR, global shutter, lens
> type, etc.)
> - General specs such as weight, size, power consumption, etc.
> - Inertial sensor grade
> - Interface to the host, API, etc.
> - On-camera features (feature detection, dense optical flow, dense stereo,
> etc.)
>
> Many thanks and best regards,
> ETH - CVG
> ETH - ASL
>
> _______________________________________________
> ros-users mailing list
> ros-users at code.ros.org
> https://code.ros.org/mailman/listinfo/ros-users
--
Ingo Lütkebohle
Bielefeld University
http://www.techfak.uni-bielefeld.de/~iluetkeb/
PGP Fingerprint 3187 4DEC 47E6 1B1E 6F4F 57D4 CD90 C164 34AD CE5B