TY - JOUR
T1 - Small and Slim Deep Convolutional Neural Network for Mobile Device
AU - Winoto, Amadeus Suryo
AU - Kristianus, Michael
AU - Premachandra, Chinthaka
N1 - Funding Information:
This work was supported in part by the Branding Research Fund of Shibaura Institute of Technology.
Publisher Copyright:
© 2013 IEEE.
PY - 2020
Y1 - 2020
N2 - Recent development of deep convolutional neural networks (DCNNs) has been devoted to creating slim models for devices with lower specifications, such as embedded systems, mobile hardware, or microcomputers. A slim model can be achieved by minimizing computational complexity, which theoretically makes processing faster. Therefore, our focus is to build an architecture with a minimal number of floating-point operations (FLOPs). In this work, we propose a small and slim architecture, which is then compared with state-of-the-art models. The architecture is implemented in two models, CustomNet and CustomNet2. Each of these models uses three convolutional blocks that reduce computational complexity while maintaining accuracy, enabling them to compete with state-of-the-art DCNN models. These models are trained on ImageNet, CIFAR-10, CIFAR-100, and other datasets, and the results are compared in terms of accuracy, complexity, size, processing time, and trainable parameters. From the results, we found that one of our models, CustomNet2, outperforms MobileNet, MobileNet-v2, DenseNet, and NASNetMobile in accuracy, trainable parameters, and complexity. For future work, this architecture can be adapted with a region-based DCNN for multiple-object detection.
AB - Recent development of deep convolutional neural networks (DCNNs) has been devoted to creating slim models for devices with lower specifications, such as embedded systems, mobile hardware, or microcomputers. A slim model can be achieved by minimizing computational complexity, which theoretically makes processing faster. Therefore, our focus is to build an architecture with a minimal number of floating-point operations (FLOPs). In this work, we propose a small and slim architecture, which is then compared with state-of-the-art models. The architecture is implemented in two models, CustomNet and CustomNet2. Each of these models uses three convolutional blocks that reduce computational complexity while maintaining accuracy, enabling them to compete with state-of-the-art DCNN models. These models are trained on ImageNet, CIFAR-10, CIFAR-100, and other datasets, and the results are compared in terms of accuracy, complexity, size, processing time, and trainable parameters. From the results, we found that one of our models, CustomNet2, outperforms MobileNet, MobileNet-v2, DenseNet, and NASNetMobile in accuracy, trainable parameters, and complexity. For future work, this architecture can be adapted with a region-based DCNN for multiple-object detection.
KW - Artificial neural network
KW - deep learning
KW - image recognition
KW - machine learning
UR - http://www.scopus.com/inward/record.url?scp=85088704332&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85088704332&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2020.3005161
DO - 10.1109/ACCESS.2020.3005161
M3 - Article
AN - SCOPUS:85088704332
SN - 2169-3536
VL - 8
SP - 125210
EP - 125222
JO - IEEE Access
JF - IEEE Access
M1 - 9126546
ER -