VideoINR: Learning Video Implicit Neural Representation
for Continuous Space-Time Super-Resolution

Zeyuan Chen, Yinbo Chen, Jingwen Liu, Xingqian Xu, Vidit Goel, Zhangyang Wang, Humphrey Shi, Xiaolong Wang
USTC · UCSD · UIUC · UT Austin · University of Oregon · Picsart AI Research (PAIR)
CVPR 2022
[Video comparison] Original video (bicubic upsampled) vs. VideoINR (ours)

VideoINR represents videos with continuous space-time resolution.

Abstract

Videos typically record streaming, continuous visual data as discrete consecutive frames. Since storing high-fidelity video is expensive, most videos are kept at a relatively low resolution and frame rate. Recent work on Space-Time Video Super-Resolution (STVSR) incorporates temporal interpolation and spatial super-resolution in a unified framework, but most such methods support only a fixed up-sampling scale, which limits their flexibility and applications. In this work, instead of following discrete representations, we propose Video Implicit Neural Representation (VideoINR) and show its application to STVSR. The learned implicit neural representation can be decoded to videos of arbitrary spatial resolution and frame rate. We show that VideoINR achieves performance competitive with state-of-the-art STVSR methods on common up-sampling scales and significantly outperforms prior work on continuous and out-of-training-distribution scales.


Continuous Video Representation

Video Implicit Neural Representation (VideoINR) maps any 3D space-time coordinate to an RGB value. This property extends the latent interpolation space of Space-Time Video Super-Resolution (STVSR) from fixed spatial and temporal scales to arbitrary frame rates and spatial resolutions.
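To make the mapping concrete, here is a minimal, hypothetical sketch of a space-time implicit neural representation in PyTorch: a small MLP that maps a continuous (x, y, t) coordinate to an RGB value. It illustrates only the coordinate-to-color idea; the actual VideoINR decoder is conditioned on encoded frame features, as described in the pipeline below.

    import torch
    import torch.nn as nn

    # Toy coordinate-to-RGB network; hidden size and depth are arbitrary
    # choices for illustration, not the paper's architecture.
    class ToySpaceTimeINR(nn.Module):
        def __init__(self, hidden_dim: int = 256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, hidden_dim),          # input: (x, y, t)
                nn.ReLU(),
                nn.Linear(hidden_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, 3),          # output: (r, g, b)
            )

        def forward(self, coords: torch.Tensor) -> torch.Tensor:
            # coords: (N, 3) continuous coordinates, e.g. normalized to [-1, 1].
            return self.net(coords)

    # Any coordinate can be queried, so spatial resolution and frame rate
    # are not fixed in advance:
    model = ToySpaceTimeINR()
    coords = torch.rand(1024, 3) * 2 - 1   # random (x, y, t) queries
    rgb = model(coords)                    # (1024, 3) RGB predictions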

VideoINR: Pipeline

Two input frames are concatenated and encoded as a discrete feature map. Based on this feature, the spatial and temporal implicit neural representations decode a 3D space-time coordinate into a motion flow vector. We then sample a new feature vector by warping according to the motion flow and decode it into the RGB prediction for the query coordinate. The multi-scale feature aggregation part is omitted in this figure.
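A schematic sketch of this forward pass is given below. The module names (encoder, spatial_inr, temporal_inr, decoder) and their interfaces are placeholders of ours, not the official implementation; the warping step uses standard grid sampling, and multi-scale feature aggregation is omitted here as in the figure.

    import torch
    import torch.nn.functional as F

    def videoinr_forward(frame0, frame1, coords,
                         encoder, spatial_inr, temporal_inr, decoder):
        # frame0, frame1: (B, 3, H, W) consecutive input frames.
        # coords: (B, N, 3) query space-time coordinates (x, y, t) in [-1, 1].
        feat = encoder(torch.cat([frame0, frame1], dim=1))   # (B, C, H, W)

        # The spatial INR lifts the discrete feature map to a continuous
        # field; the temporal INR decodes each query into a motion flow.
        spatial_feat = spatial_inr(feat, coords[..., :2])    # (B, N, C)
        flow = temporal_inr(spatial_feat, coords[..., 2:])   # (B, N, 2)

        # Warp: re-sample the feature map at the flow-displaced locations.
        warped_xy = coords[..., :2] + flow                   # (B, N, 2)
        grid = warped_xy.unsqueeze(1)                        # (B, 1, N, 2)
        warped = F.grid_sample(feat, grid, align_corners=False)
        warped = warped.squeeze(2).transpose(1, 2)           # (B, N, C)

        # Decode the warped feature into the RGB prediction for each query.
        return decoder(warped)                               # (B, N, 3)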

Zooming in on Continuous Videos

VideoINR defines a continuous representation for videos. With arbitrary space-time resolution, we can zoom in on a video and turn it into slow motion simultaneously, while maintaining high fidelity. Below, we compare VideoINR against raw pixels and bilinear interpolation.
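Concretely, rendering a zoomed, slowed-down clip amounts to building a dense (x, y, t) query grid at whatever spatial size and frame count one wants and evaluating the representation on it. The sketch below is ours and assumes a model with the coordinate-to-RGB interface from the earlier toy example.

    import torch

    def make_spacetime_grid(h_out: int, w_out: int, n_frames: int) -> torch.Tensor:
        # Coordinates are normalized to [-1, 1]; any output size works, so
        # the up-sampling scale is chosen freely at inference time.
        ys = torch.linspace(-1, 1, h_out)
        xs = torch.linspace(-1, 1, w_out)
        ts = torch.linspace(-1, 1, n_frames)
        t, y, x = torch.meshgrid(ts, ys, xs, indexing="ij")
        return torch.stack([x, y, t], dim=-1).reshape(-1, 3)  # (T*H*W, 3)

    # E.g. render a 32x32 crop at 512x512 (16x zoom) across 64 time steps:
    coords = make_spacetime_grid(h_out=512, w_out=512, n_frames=64)
    # rgb = model(coords).reshape(64, 512, 512, 3)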

Spatial & Temporal SR

Videos are first up-sampled spatially at scales from 4× to 16×, then temporally at scales from 4× to 32×.

[Video comparison] Pixels · Bilinear resize · VideoINR (ours)

Space-Time SR

Videos are up-sampled in space and time simultaneously: the spatial scale increases from 4× to 12× and the temporal scale from 4× to 16×. (A sketch of the fixed-interpolation baseline follows the comparison.)

[Video comparison] Pixels · Bilinear resize · VideoINR (ours)
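For reference, a fixed-interpolation baseline like the bilinear resize above can be reproduced by interpolating over the whole space-time volume; unlike a learned continuous representation, it only blends neighboring pixels and frames. A minimal sketch with PyTorch's built-in resampling, where trilinear mode handles space and time in one call:

    import torch
    import torch.nn.functional as F

    video = torch.rand(1, 3, 8, 64, 64)     # (B, C, T, H, W) low-res clip
    upsampled = F.interpolate(
        video,
        scale_factor=(4.0, 12.0, 12.0),     # 4x in time, 12x in space
        mode="trilinear",
        align_corners=False,
    )
    print(upsampled.shape)                  # torch.Size([1, 3, 32, 768, 768])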

Comparison with State-of-the-art

We compare VideoINR with TMNet, the previous state-of-the-art method for the Space-Time Video Super-Resolution (STVSR) task. The up-sampling scales are set to 4× in space and 8× in time.

[Video comparison] Input (bilinear resized) · TMNet · VideoINR (ours)

BibTeX

@inproceedings{chen2022vinr,
  author    = {Chen, Zeyuan and Chen, Yinbo and Liu, Jingwen and Xu, Xingqian and Goel, Vidit and Wang, Zhangyang and Shi, Humphrey and Wang, Xiaolong},
  title     = {VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2022},
}