Video Deblurring

cross-reference: Image Deblurring

STFAN

Spatio-Temporal Filter Adaptive Network for Video Deblurring (ICCV 2019) - SenseTime + Nanjing University of Science and Technology
Project | PyTorch 1.0

  1. We propose a filter adaptive convolutional (FAC) layer that applies generated element-wise filters to feature transformation, and use it for two spatially variant tasks, i.e. alignment and deblurring in the feature domain (a minimal sketch of the operation follows this list).
  2. We propose a novel spatio-temporal filter adaptive network (STFAN) for video deblurring. It integrates frame alignment and deblurring into a unified framework without explicit motion estimation and formulates them as two spatially variant convolution processes based on the FAC layers.
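A hedged sketch of what the FAC operation could look like in PyTorch, assuming the predicted filter tensor stores a flattened k×k kernel for every pixel and channel (the shapes, layout, and kernel size here are illustrative, not the authors' exact implementation):

```python
import torch
import torch.nn.functional as F

def filter_adaptive_conv(feat, filters, kernel_size=5):
    """Apply per-pixel, per-channel generated filters to a feature map.

    feat:    (B, C, H, W) input features
    filters: (B, C * k * k, H, W) element-wise filters predicted by a network
             (assumed channel-major layout; this is an illustrative convention)
    """
    b, c, h, w = feat.shape
    k = kernel_size
    pad = k // 2
    # Extract the k x k neighborhood around every spatial location.
    patches = F.unfold(feat, kernel_size=k, padding=pad)   # (B, C*k*k, H*W)
    patches = patches.reshape(b, c, k * k, h, w)
    filters = filters.reshape(b, c, k * k, h, w)
    # Weighted sum over each neighborhood, independently per pixel and channel,
    # i.e. a spatially variant convolution.
    return (patches * filters).sum(dim=2)                   # (B, C, H, W)
```

With filters predicted from adjacent frames, the same operation can realize alignment; predicted from the blurry input, it can realize deblurring, which is the unification the paper describes.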

CDVD-TSP

Cascaded Deep Video Deblurring Using Temporal Sharpness Prior (CVPR 2020) - Nanjing University of Science and Technology
Project | PyTorch code | Author’s blog overview (Chinese)

  • We propose a simple and compact deep CNN model that simultaneously estimates the optical flow (via PWC-Net) and the latent frames for video deblurring.
  • To better explore the properties of consecutive frames, we develop a temporal sharpness prior to constrain deep CNN models (a rough sketch of the prior follows this list).
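The prior is essentially a per-pixel consistency measure between the current frame and its flow-warped neighbors. A hedged sketch, assuming neighbors have already been warped with the estimated optical flow (the exact weighting and where the map enters the cascade differ in the paper):

```python
import torch

def temporal_sharpness_prior(center, warped_neighbors, alpha=0.5):
    """Illustrative temporal sharpness map.

    center:           (B, 3, H, W) current (estimated latent) frame
    warped_neighbors: list of (B, 3, H, W) neighboring frames warped to the
                      current frame using optical flow (e.g. from PWC-Net)
    Returns a map near 1 where warped neighbors agree with the current frame
    (likely sharp pixels) and near 0 where they disagree (likely blurred).
    """
    sq_err = torch.zeros_like(center[:, :1])
    for nb in warped_neighbors:
        sq_err = sq_err + ((nb - center) ** 2).sum(dim=1, keepdim=True)
    return torch.exp(-alpha * sq_err)
```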

Other Note: Motion blur

TSP is designed for the hand-held camera dataset from DVD, based on the assumption below:

As demonstrated in [4], the blur in the video is irregular, and thus there exist some pixels that are not blurred. Following the conventional method [4], we explore these sharp pixels to help video deblurring.

Hence it might not be suitable for continuous motion blur (e.g. racing footage).
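As a quick sanity check before applying such a model, one could estimate what fraction of pixels in a clip actually look sharp. A crude, purely illustrative heuristic (the Laplacian measure and threshold are arbitrary choices, not from the paper):

```python
import torch
import torch.nn.functional as F

def sharp_pixel_fraction(gray, thresh=0.05):
    """Rough check of the 'some pixels are not blurred' assumption.

    gray: (B, 1, H, W) grayscale frame, float values in [0, 1]
    Returns the fraction of pixels whose local Laplacian response exceeds
    `thresh`; a consistently low value over a clip suggests the sharpness
    assumption (and hence TSP) may not fit the footage.
    """
    lap = torch.tensor([[0., 1., 0.],
                        [1., -4., 1.],
                        [0., 1., 0.]]).view(1, 1, 3, 3)
    resp = F.conv2d(gray, lap, padding=1).abs()
    return (resp > thresh).float().mean().item()
```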

BIN

BIN targets joint frame interpolation and deblurring.
When only deblurring is needed, EDVR performs better than BIN.