TY - GEN
T1 - Vision and laser sensor data fusion technique for target approaching by outdoor mobile robot
AU - Chand, Aneesh
AU - Yuta, Shin'ichi
PY - 2010
Y1 - 2010
N2 - The authors have been developing an outdoor mobile robot intended to achieve increased traveling distance by autonomously negotiating and crossing road intersections while traveling along pedestrian sidewalks in an urban environment. This paper presents high-precision navigation of a mobile robot towards a pedestrian push-button box for autonomous activation of the button. We describe a dual sensor fusion technique using a monocular camera and a laser range sensor with which an outdoor mobile robot can detect, localize and then accurately navigate towards a button box so that it can autonomously press the pedestrian push button and trigger the crossing sequence. The method involves determining the image formation of the target on the camera's image sensor, using it to estimate the object's position in the real world, then using data from the laser range sensor to acquire a precise location of the object relative to the robot, and finally performing path planning. A two-tiered validation scheme, one at the vision level and the other at the laser scan data level, rejects inaccurate detections and results in a robust system. The proposed method is also applicable to any form of target approaching. Experimental results verify the efficacy of the system, and concluding remarks are given.
UR - http://www.scopus.com/inward/record.url?scp=79952922766&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=79952922766&partnerID=8YFLogxK
U2 - 10.1109/ROBIO.2010.5723573
DO - 10.1109/ROBIO.2010.5723573
M3 - Conference contribution
AN - SCOPUS:79952922766
SN - 9781424493173
T3 - 2010 IEEE International Conference on Robotics and Biomimetics, ROBIO 2010
SP - 1624
EP - 1629
BT - 2010 IEEE International Conference on Robotics and Biomimetics, ROBIO 2010
T2 - 2010 IEEE International Conference on Robotics and Biomimetics, ROBIO 2010
Y2 - 14 December 2010 through 18 December 2010
ER -