Three-Dimensional shape reconstruction from a single image by deep learning

Kentaro Sakai, Yoshiaki Yasumura

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)


Reconstructing a three-dimensional (3D) shape from a single image is one of the main topics in computer vision. Some 3D reconstruction methods adopt machine learning: they learn the relationship between a 2D image and a 3D shape, and then reconstruct 3D shapes using the learned relationship. However, because these methods rely on predefined features (the raw pixels of the image), they cannot obtain the image features best suited for 3D reconstruction. Therefore, this paper presents a method that reconstructs 3D shapes by learning features of 2D images with deep learning. The method uses a convolutional neural network (CNN) for feature learning: the convolutional and pooling layers of the CNN capture spatial information in the image and automatically select valuable image features. Two reconstruction methods are presented. The first estimates the normal vectors of the object and then reconstructs the 3D shape from the normal vectors by deep learning. The second reconstructs the 3D shape directly from the image with a deep neural network. Experimental results on human face images showed that the proposed method reconstructs 3D shapes with higher accuracy than previous methods.
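The abstract's core idea is that convolutional and pooling layers extract spatial features from the 2D image before later layers map them to 3D quantities (normal vectors or depth). The following is a minimal, hypothetical sketch of that forward pass in pure Python; it is not the authors' implementation, and the toy kernel, image, and final averaging step are illustrative stand-ins for learned weights and a trained dense layer.

```python
# Hypothetical sketch of a CNN-style forward pass: a convolution extracts
# spatial features, pooling downsamples them, and a final (stand-in) dense
# step maps the pooled features to a single depth value.

def conv2d(image, kernel):
    """Valid 2D convolution (no padding, stride 1) on nested lists."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

def max_pool(fmap, size=2):
    """Non-overlapping max pooling, keeping the strongest local response."""
    h, w = len(fmap), len(fmap[0])
    return [[max(fmap[i + di][j + dj]
                 for di in range(size) for dj in range(size))
             for j in range(0, w - size + 1, size)]
            for i in range(0, h - size + 1, size)]

# Toy 6x6 "image" and a toy edge-like 2x2 filter (a trained CNN would
# learn many such filters from data instead of using a fixed one).
image = [[float((i + j) % 4) for j in range(6)] for i in range(6)]
edge_kernel = [[1.0, -1.0], [-1.0, 1.0]]

features = max_pool(conv2d(image, edge_kernel))  # 5x5 conv map -> 2x2 pooled
# Stand-in for a dense output layer mapping pooled features to one depth value.
depth = sum(sum(row) for row in features) / 4.0
```

In the paper's second (direct) method, the output layer would predict the full 3D shape rather than a single value; in the first method, an intermediate network would predict per-pixel normal vectors first.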

Original language: English
Pages (from-to): 347-351
Number of pages: 5
Journal: International Journal of Advanced Computer Science and Applications
Issue number: 2
Publication status: Published - 2020


Keywords

  • 3D reconstruction
  • Computer vision
  • Convolutional neural network
  • Deep learning
  • Feature learning
  • Normal vector

ASJC Scopus subject areas

  • Computer Science(all)

