An evaluation of easy creation method that generates learning-friendly low-definition lecture video from high definition inputs

Satoshi Shimada, Kiyoshi Sugimoto, Shunichi Yonemura, Akira Kojima, Yoshimi Hukuhara

Research output: Contribution to journal › Article › peer-review

Abstract

Our method creates a learning-friendly low-definition lecture video from video captured by a fixed high-definition camera; no special equipment or training is needed. The proposed method automatically extracts the screen position, the speaker's changing position, and the slide transitions from the lecture video to produce a spatially condensed, low-resolution version of the video. We conducted user tests to evaluate the operation time and the quality of the videos produced by the proposed method and reached the following conclusions: (1) By non-linearly reducing the full high-definition videos (1920 × 1080 pixels), the proposed method can automatically generate videos of about 640 × 480 pixels that are suitable for video distribution. (2) The edited videos support learning well. (3) All subjects could enter the initial information needed to detect the positions of the speaker and the slide within a few minutes.
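To illustrate the kind of pipeline the abstract describes, the sketch below detects a slide change by frame differencing and spatially condenses an HD frame into a 640 × 480 layout that keeps the screen region large and the speaker region small. This is a minimal illustration, not the paper's actual algorithm: the thresholds, the region boxes, the grayscale frames, and the nearest-neighbour resampling are all simplifying assumptions.

```python
import numpy as np

def detect_slide_change(prev_frame, frame, threshold=12.0):
    """Flag a slide change when the mean absolute pixel difference
    exceeds a threshold (hypothetical heuristic, not the paper's detector)."""
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    return float(diff.mean()) > threshold

def crop(img, box):
    """Crop a (y0, x0, y1, x1) box from a grayscale frame."""
    y0, x0, y1, x1 = box
    return img[y0:y1, x0:x1]

def resize_nn(img, shape):
    """Nearest-neighbour resize (stand-in for proper resampling)."""
    h, w = shape
    ys = np.arange(h) * img.shape[0] // h
    xs = np.arange(w) * img.shape[1] // w
    return img[ys][:, xs]

def condense(frame, screen_box, speaker_box, out_h=480, out_w=640):
    """Non-linear spatial reduction: the screen region fills most of the
    low-resolution canvas, the speaker region a small corner.
    The layout and sizes here are illustrative assumptions."""
    canvas = np.zeros((out_h, out_w), dtype=np.uint8)
    screen = resize_nn(crop(frame, screen_box), (out_h, out_w - 160))
    speaker = resize_nn(crop(frame, speaker_box), (160, 160))
    canvas[:, :out_w - 160] = screen
    canvas[:160, out_w - 160:] = speaker
    return canvas

# Synthetic 1920x1080 grayscale frame with assumed region boxes.
frame = np.zeros((1080, 1920), dtype=np.uint8)
out = condense(frame, screen_box=(100, 300, 600, 1500),
               speaker_box=(200, 1500, 500, 1800))
print(out.shape)  # (480, 640)
```

In practice the screen and speaker boxes would come from the initial user input and the automatic detection steps the abstract mentions (face detection for the speaker, slide-change detection for the screen).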

Original language: English
Pages (from-to): 697-707
Number of pages: 11
Journal: Journal of the Institute of Image Electronics Engineers of Japan
Volume: 41
Issue number: 6
Publication status: Published - 2012

Keywords

  • Content creation
  • Face detection
  • Lecture video
  • Video archiving
  • Video editing

ASJC Scopus subject areas

  • Computer Science (miscellaneous)
  • Electrical and Electronic Engineering
