Linya-lab/Video_Decaptioning

Deep_Video_Decaptioning

[teaser figure]

Citation

If any part of our paper or repository is helpful to your work, please cite:

@inproceedings{chu2021deep,
  title={Deep Video Decaptioning},
  author={Chu, Pengpeng and Quan, Weize and Wang, Tong and Wang, Pan and Ren, Peiran and Yan, Dong-Ming},
  booktitle={The Proceedings of the British Machine Vision Conference (BMVC)},
  year={2021}
}

Introduction

In news media and video entertainment, broadcast programs in various languages, such as news, series, or documentaries, frequently contain text captions, encrusted commercials, or subtitles. These overlays distract visual attention and occlude parts of frames, which can degrade the performance of automatic video-understanding systems.

In this paper, we propose a model to automatically remove subtitles from videos.

[network architecture figure]

Preparation

  1. Install the environment:
conda env create -f environment.yml 
conda activate cpp
  2. Install dependencies:
  • ffmpeg (used to convert videos to PNG frames)
  3. Download the pretrained weights.

Brief code instruction

Extract PNG frames from each MP4 video (use ./dataset/video2png.sh).

  • Note that the pretrained weights of the final model are provided on OneDrive. Please modify the path to the pretrained weights accordingly before testing.
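The frame-extraction step above can be sketched as a small shell script. This is a hypothetical illustration of what a script like ./dataset/video2png.sh might do with ffmpeg; the directory layout (./videos, ./frames) and the zero-padded frame-naming pattern are assumptions, not the repo's exact conventions.

```shell
#!/usr/bin/env bash
# Sketch: extract every frame of each .mp4 in an input directory as PNGs,
# one sub-directory of frames per video. Paths below are assumed defaults.
set -euo pipefail

in_dir="${1:-./videos}"    # directory containing .mp4 files
out_root="${2:-./frames}"  # output root; one sub-directory per video

for vid in "$in_dir"/*.mp4; do
  [ -e "$vid" ] || continue            # skip if the glob matched nothing
  name="$(basename "$vid" .mp4)"
  mkdir -p "$out_root/$name"
  # %05d produces zero-padded frame indices: 00001.png, 00002.png, ...
  ffmpeg -i "$vid" "$out_root/$name/%05d.png"
done
```

Extracting to per-video sub-directories keeps frame indices unambiguous when many videos are processed; the PNG sequences can then be fed to the model for training or testing.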
