AR4D: Autoregressive 4D Generation from Monocular Videos

1University of Science and Technology of China, 2Microsoft Research Asia

TL;DR: We present a novel method for 4D generation from monocular videos without relying on SDS,
delivering greater diversity, improved spatial-temporal consistency, and better alignment with input prompts.

[Teaser results on three example scenes. Each gallery shows: (a) Input Video, (b) Novel-view Video 1, (c) Reconstructed Input Video, (d) Novel-view Video 2, (e) Rendered Multi-view Video, (f) Rendered Multi-view Depth.]

Abstract

Recent advancements in generative models have ignited substantial interest in dynamic 3D content creation (i.e., 4D generation). Existing approaches primarily rely on Score Distillation Sampling (SDS) to infer novel-view videos, which typically leads to issues such as limited diversity, spatial-temporal inconsistency, and poor prompt alignment, owing to the inherent randomness of SDS. To tackle these problems, we propose AR4D, a novel paradigm for SDS-free 4D generation. Our paradigm consists of three stages. First, for a monocular video that is either generated or captured, we utilize pre-trained expert models to create a 3D representation of the first frame, which is then fine-tuned to serve as the canonical space. Second, motivated by the fact that videos naturally unfold in an autoregressive manner, we generate each frame's 3D representation from the previous frame's representation, since this autoregressive scheme facilitates more accurate geometry and motion estimation. To prevent overfitting during this process, we introduce a progressive view sampling strategy that leverages priors from pre-trained large-scale 3D reconstruction models. Third, to avoid the appearance drift introduced by autoregressive generation, we incorporate a refinement stage based on a global deformation field and the geometry of each frame's 3D representation. Extensive experiments demonstrate that AR4D achieves state-of-the-art 4D generation without SDS, delivering greater diversity, improved spatial-temporal consistency, and better alignment with input prompts.
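To make the paradigm concrete, the skeleton below sketches the control flow of the three stages in Python. It is a minimal, hypothetical outline: the function names (init_canonical_3d, generate_next_frame, refine_with_deformation_field) and their signatures are illustrative placeholders, not the authors' actual implementation.

def init_canonical_3d(first_frame):
    # Stage 1 (Initialization): lift the first frame to a 3D representation
    # using pre-trained expert models, then fine-tune it as the canonical space.
    ...

def generate_next_frame(prev_rep, frame):
    # Stage 2 (Generation): estimate this frame's 3D representation from the
    # previous frame's, which eases geometry and motion estimation.
    ...

def refine_with_deformation_field(reps, frames):
    # Stage 3 (Refinement): correct appearance drift accumulated by the
    # autoregressive loop, using a global deformation field and per-frame geometry.
    ...

def ar4d(frames):
    # Autoregressive 4D generation: initialize, generate frame by frame, refine.
    reps = [init_canonical_3d(frames[0])]
    for t in range(1, len(frames)):
        reps.append(generate_next_frame(reps[t - 1], frames[t]))
    return refine_with_deformation_field(reps, frames)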

Method


Illustration of our method. To enable SDS-free 4D generation, we propose a three-stage approach consisting of Initialization, Generation, and Refinement.
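The progressive view sampling strategy mentioned in the abstract can be read as a curriculum over camera poses: sample views near the input viewpoint early in optimization and widen the range as training proceeds, with large-scale 3D reconstruction priors supervising the newly exposed views. The schedule below is only an assumed illustration of that idea; the linear ramp and the max_angle bound are not taken from the paper.

import random

def sample_azimuth(step, total_steps, max_angle=90.0):
    # Assumed linear curriculum: the admissible azimuth offset around the input
    # view grows with training progress, so well-constrained views are fit first.
    progress = min(1.0, step / total_steps)
    limit = progress * max_angle
    return random.uniform(-limit, limit)

# Example: at 25% of training, sampled offsets stay within +/-22.5 degrees.
offset = sample_azimuth(step=250, total_steps=1000)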

Comparisons with SOTA methods

[Qualitative comparisons on three scenes (Scene 1-3). Each gallery shows: (a) Consistent4D, (b) SV4D, (c) STAG4D, (d) AR4D (Ours).]

BibTeX

@article{zhu2024ar4d,
  author    = {Zhu, Hanxin and He, Tianyu and Yu, Xiqian and Guo, Junliang and Chen, Zhibo and Bian, Jiang},
  title     = {AR4D: Autoregressive 4D Generation from Monocular Videos},
  journal   = {arXiv},
  year      = {2024},
}