TY - GEN
T1 - Mobile robot self-localization based on tracked scale and rotation invariant feature points by using an omnidirectional camera
AU - Tasaki, Tsuyoshi
AU - Tokura, Seiji
AU - Sonoura, Takafumi
AU - Ozaki, Fumio
AU - Matsuhira, Nobuto
PY - 2010/12/1
Y1 - 2010/12/1
N2 - Self-localization is important for mobile robots to move accurately, and many works use an omnidirectional camera for self-localization. However, it is difficult to realize fast and accurate self-localization using only one omnidirectional camera without any calibration. To realize this, we use "tracked scale and rotation invariant feature points" as landmarks. These landmarks can be tracked and do not change for a "long" time. In a landmark selection phase, robots detect feature points by using both a fast tracking method and a slow "Speeded-Up Robust Features (SURF)" method. After detection, robots select landmarks from among the detected feature points by using a Support Vector Machine (SVM) trained on feature vectors based on observation positions. In a self-localization phase, robots detect landmarks while switching detection methods dynamically based on a tracking error criterion that is calculated easily even in the uncalibrated omnidirectional image. We performed experiments in an approximately 10 [m] x 10 [m] mock supermarket using a navigation robot, ApriTau™, with an omnidirectional camera on its top. The results showed that ApriTau™ could localize 2.9 times faster and 4.2 times more accurately with the developed method than with the SURF method alone. The results also showed that ApriTau™ could arrive at a goal within a 3 [cm] error from various initial positions in the mock supermarket.
UR - http://www.scopus.com/inward/record.url?scp=78651480013&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=78651480013&partnerID=8YFLogxK
U2 - 10.1109/IROS.2010.5649848
DO - 10.1109/IROS.2010.5649848
M3 - Conference contribution
AN - SCOPUS:78651480013
SN - 9781424466757
T3 - IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings
SP - 5202
EP - 5207
BT - IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings
T2 - 23rd IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010
Y2 - 18 October 2010 through 22 October 2010
ER -