Abstract
Our method creates learning-friendly, low-definition lecture videos from video captured by a single fixed high-definition camera; no special equipment or training is needed. The proposed method automatically extracts the screen position, the speaker's changing position, and the slide changes from the lecture video to produce a spatially condensed, low-resolution version. We conducted user tests to evaluate the operation time and the quality of the videos produced by the proposed method and reached the following conclusions: (1) by non-linearly reducing the full high-definition video (1920 × 1080 pixels), the proposed method can automatically generate videos of about 640 × 480 pixels that are suitable for video distribution; (2) the edited videos support learning well; (3) all subjects could input the initial information needed for detecting the positions of the speaker and the slide within just a few minutes.
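The abstract does not describe the detection algorithms in detail. As a hedged illustration only (not the authors' implementation), slide-change detection of the kind mentioned above is commonly done by thresholding the mean absolute pixel difference between consecutive frames of the screen region; a slide transition appears as a sudden spike, while speaker motion inside a stable slide produces only small differences. A minimal sketch, assuming grayscale frames stored as 2-D lists and a hypothetical threshold:

```python
# Sketch of slide-change detection by frame differencing (an assumed
# technique, not the paper's method). Frames are equal-size 2-D lists
# of grayscale values in [0, 255].

def mean_abs_diff(a, b):
    """Mean absolute per-pixel difference of two equal-size grayscale frames."""
    total = sum(abs(x - y) for row_a, row_b in zip(a, b)
                for x, y in zip(row_a, row_b))
    return total / (len(a) * len(a[0]))

def detect_slide_changes(frames, threshold=30.0):
    """Return indices of frames that differ strongly from their predecessor.

    `threshold` is a hypothetical tuning parameter; a real system would
    calibrate it against lighting noise in the captured video.
    """
    return [i for i in range(1, len(frames))
            if mean_abs_diff(frames[i - 1], frames[i]) > threshold]

# Tiny synthetic example: three frames of one "slide", then a new slide.
slide_a = [[10] * 4 for _ in range(4)]
slide_b = [[200] * 4 for _ in range(4)]
frames = [slide_a, slide_a, slide_a, slide_b, slide_b]
print(detect_slide_changes(frames))  # → [3]
```

In practice this would run only on the extracted screen region, so that the speaker walking past the podium does not trigger false slide changes.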
| Original language | English |
| --- | --- |
| Pages (from–to) | 697-707 |
| Number of pages | 11 |
| Journal | Journal of the Institute of Image Electronics Engineers of Japan |
| Volume | 41 |
| Issue number | 6 |
| Publication status | Published - 2012 |
ASJC Scopus subject areas
- Computer Science (miscellaneous)
- Electrical and Electronic Engineering