TY - JOUR
T1 - Multi-Scale Fully Convolutional Network-Based Semantic Segmentation for Mobile Robot Navigation
AU - Dang, Thai Viet
AU - Bui, Ngoc Tam
N1 - Funding Information:
This research is funded by Hanoi University of Science and Technology (HUST) under project number T2022-PC-029. The School of Mechanical Engineering at HUST is gratefully acknowledged for providing funding, guidance, and expertise. This work was also supported by the Centennial Shibaura Institute of Technology Action for the 100th anniversary of Shibaura Institute of Technology to enter the top ten Asian Institutes of Technology.
Publisher Copyright:
© 2023 by the authors.
PY - 2023/2
Y1 - 2023/2
N2 - In computer vision and mobile robotics, autonomous navigation is crucial. It enables the robot to navigate an environment that consists primarily of obstacles and moving objects. Robot navigation based on detecting obstacles such as walls and pillars is not only essential but also challenging because of real-world complications. This study provides a real-time solution to the problem of interpreting hallway scenes from a single image. The authors predict a dense scene using a multi-scale fully convolutional network (FCN). The output is an image with pixel-by-pixel predictions that can be used for various navigation strategies. In addition, a method for comparing the computational cost and precision of various FCN architectures based on VGG-16 is introduced. The proposed method outperforms competing works in both binary semantic segmentation and optimal obstacle-avoidance navigation for autonomous mobile robots. The authors apply perspective correction to the segmented image to construct a frontal view of the scene, which identifies the available moving area. The optimal obstacle-avoidance strategy consists primarily of collision-free path planning, reasonable processing time, and smooth steering with small changes in steering angle.
AB - In computer vision and mobile robotics, autonomous navigation is crucial. It enables the robot to navigate an environment that consists primarily of obstacles and moving objects. Robot navigation based on detecting obstacles such as walls and pillars is not only essential but also challenging because of real-world complications. This study provides a real-time solution to the problem of interpreting hallway scenes from a single image. The authors predict a dense scene using a multi-scale fully convolutional network (FCN). The output is an image with pixel-by-pixel predictions that can be used for various navigation strategies. In addition, a method for comparing the computational cost and precision of various FCN architectures based on VGG-16 is introduced. The proposed method outperforms competing works in both binary semantic segmentation and optimal obstacle-avoidance navigation for autonomous mobile robots. The authors apply perspective correction to the segmented image to construct a frontal view of the scene, which identifies the available moving area. The optimal obstacle-avoidance strategy consists primarily of collision-free path planning, reasonable processing time, and smooth steering with small changes in steering angle.
KW - computer vision
KW - fully convolutional networks
KW - mobile robot
KW - navigation
KW - obstacle avoidance
KW - semantic segmentation
UR - http://www.scopus.com/inward/record.url?scp=85147865439&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85147865439&partnerID=8YFLogxK
U2 - 10.3390/electronics12030533
DO - 10.3390/electronics12030533
M3 - Article
AN - SCOPUS:85147865439
SN - 2079-9292
VL - 12
JO - Electronics (Switzerland)
JF - Electronics (Switzerland)
IS - 3
M1 - 533
ER -