StreamingT2V:
Consistent, Dynamic, and Extendable Long Video Generation from Text

1Picsart AI Research (PAIR), 2UT Austin, 3SHI Labs @ Georgia Tech, Oregon & UIUC
*Equal Contribution

StreamingT2V is an advanced autoregressive technique that enables the creation of long videos featuring rich motion dynamics without any stagnation. It ensures temporal consistency throughout the video, aligns closely with the descriptive text, and maintains high frame-level image quality. We demonstrate videos of up to 1200 frames, spanning 2 minutes, and the approach can be extended to even longer durations. Importantly, the effectiveness of StreamingT2V is not limited by the specific Text2Video model used, indicating that improvements in base models could yield even higher-quality videos.

Abstract

Text-to-video diffusion models enable the generation of high-quality videos that follow text instructions, making it easy to create diverse and individual content. However, existing approaches mostly focus on high-quality short video generation (typically 16 or 24 frames), ending up with hard cuts when naively extended to the case of long video synthesis. To overcome these limitations, we introduce StreamingT2V, an autoregressive approach for long video generation of 80, 240, 600, 1200 or more frames with smooth transitions. The key components are: (i) a short-term memory block called conditional attention module (CAM), which conditions the current generation on the features extracted from the previous chunk via an attentional mechanism, leading to consistent chunk transitions, (ii) a long-term memory block called appearance preservation module (APM), which extracts high-level scene and object features from the first video chunk to prevent the model from forgetting the initial scene, and (iii) a randomized blending approach that enables applying a video enhancer autoregressively to infinitely long videos without inconsistencies between chunks. Experiments show that StreamingT2V generates videos with a high amount of motion. In contrast, all competing image-to-video methods are prone to video stagnation when applied naively in an autoregressive manner. Thus, StreamingT2V is a high-quality, seamless text-to-long-video generator that outperforms competitors in consistency and motion.
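For illustration, the chunk-wise autoregressive scheme behind components (i) and (ii) can be sketched roughly as follows. The sample_chunk, encode_frames, and encode_anchor callables are hypothetical placeholders standing in for the video diffusion model, the CAM frame encoder, and the APM anchor encoder; this is a sketch of the idea under those assumptions, not the actual StreamingT2V implementation.

import torch


def generate_long_video(sample_chunk, encode_frames, encode_anchor, prompt,
                        num_chunks=15, chunk_len=16, overlap=8):
    # Initialization stage: plain text-to-video for the first chunk.
    video = sample_chunk(prompt, chunk_len, None, None)      # (B, C, T, H, W)

    # Long-term memory (APM): high-level features of an anchor frame
    # taken from the first chunk.
    anchor = encode_anchor(video[:, :, 0])

    for _ in range(num_chunks - 1):
        # Short-term memory (CAM): features of the last frames of the
        # previous chunk condition the next chunk via attention.
        prev = encode_frames(video[:, :, -overlap:])
        new_chunk = sample_chunk(prompt, chunk_len, prev, anchor)
        video = torch.cat([video, new_chunk], dim=2)
    return video


# Dummy stand-ins so the sketch runs end to end:
def dummy_sample(prompt, t, short_cond, long_cond):
    return torch.randn(1, 3, t, 64, 64)

def dummy_encode(frames):
    return frames.flatten(1)

out = generate_long_video(dummy_sample, dummy_encode, dummy_encode, "a cat")
print(out.shape)  # torch.Size([1, 3, 240, 64, 64])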

Method

The overall pipeline of StreamingT2V: In the Initialization Stage, the first 16-frame chunk is synthesized by a text-to-video model. In the Streaming T2V Stage, new content for further frames is generated autoregressively. Finally, in the Streaming Refinement Stage, the generated long video (600, 1200 frames or more) is autoregressively enhanced by applying a high-resolution text-to-short-video model equipped with our randomized blending approach.
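The randomized blending used in the Streaming Refinement Stage can be illustrated with a short sketch: two consecutive enhanced chunks share a number of overlapping frames, and a randomly drawn split index decides which chunk supplies each shared frame. The tensor shapes and the exact place in the enhancer's denoising loop where this is applied are illustrative assumptions.

import torch


def randomized_blend(prev_overlap, next_overlap):
    # prev_overlap, next_overlap: (B, C, overlap, H, W) latents of the
    # frames shared by chunk i and chunk i+1.
    overlap = prev_overlap.shape[2]
    # Randomly choose where to hand over from chunk i to chunk i+1.
    split = torch.randint(0, overlap + 1, (1,)).item()
    return torch.cat([prev_overlap[:, :, :split],
                      next_overlap[:, :, split:]], dim=2)


# Usage with dummy latents (shapes are illustrative):
prev = torch.randn(1, 4, 8, 40, 64)
nxt = torch.randn(1, 4, 8, 40, 64)
print(randomized_blend(prev, nxt).shape)  # torch.Size([1, 4, 8, 40, 64])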

Detailed Method Pipeline

Method overview: StreamingT2V extends a video diffusion model (VDM) by the conditional attention module (CAM) as short-term memory, and the appearance preservation module (APM) as long-term memory. CAM conditions the VDM on the previous chunk using a frame encoder $$\mathcal{E}_{\mathrm{cond}}$$.
The attentional mechanism of CAM leads to smooth transitions between chunks while preserving a high amount of motion. APM extracts high-level image features from an anchor frame and injects them into the text cross-attentions of the VDM. APM helps preserve object and scene features across the autoregressive video generation.
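The two memory blocks can be pictured with a short PyTorch sketch: a CAM-style block in which the VDM features attend to the encoded previous-chunk frames, and an APM-style mixer that appends projected anchor-frame features to the text tokens used for cross-attention. Module names, dimensions, and the merging scheme are illustrative assumptions rather than the exact StreamingT2V architecture.

import torch
import torch.nn as nn


class CAMBlock(nn.Module):
    # Short-term memory: VDM features attend to encoded frames of the
    # previous chunk (output of the frame encoder E_cond).
    def __init__(self, dim=320, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, vdm_feats, prev_chunk_feats):
        # vdm_feats: (B, N, dim), prev_chunk_feats: (B, M, dim)
        attended, _ = self.attn(self.norm(vdm_feats),
                                prev_chunk_feats, prev_chunk_feats)
        return vdm_feats + attended          # residual injection


class APMMixer(nn.Module):
    # Long-term memory: project anchor-frame image features and append
    # them to the text tokens used by the VDM's cross-attention.
    def __init__(self, img_dim=768, txt_dim=1024, num_tokens=4):
        super().__init__()
        self.proj = nn.Linear(img_dim, txt_dim * num_tokens)
        self.num_tokens = num_tokens
        self.txt_dim = txt_dim

    def forward(self, text_tokens, anchor_feats):
        # text_tokens: (B, L, txt_dim), anchor_feats: (B, img_dim)
        img_tokens = self.proj(anchor_feats).view(-1, self.num_tokens,
                                                  self.txt_dim)
        return torch.cat([text_tokens, img_tokens], dim=1)


# Dummy usage:
cam, apm = CAMBlock(), APMMixer()
x = cam(torch.randn(2, 64, 320), torch.randn(2, 128, 320))
tokens = apm(torch.randn(2, 77, 1024), torch.randn(2, 768))
print(x.shape, tokens.shape)  # (2, 64, 320) (2, 81, 1024)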

Results

1200 FRAMES @ 2 MINUTES

"Explore the coral gardens of the sea..." "Camera moving in a wide bright ice cave." "Experience the dance of jellyfish..." "Wide shot of battlefield, stormtroopers running..."

600 FRAMES @ 1 MINUTE

"Camera following a pack of crows flying in the sky." "Marvel at the diversity of bee species..." "Close flyover over a large wheat field..." "Camera moving down into colorful coral reefs..."

240 FRAMES @ 24 SECONDS

"Fluids mixing and changing colors." "Santa Claus is dancing." "Explosion."
"Flying through nebulas and stars." "Camera moving closely over beautiful roses blooming timelapse." "Documenting the growth cycle of vibrant lavender flowers..."

80 FRAMES @ 8 SECONDS

"Ants, beetles and centipede nest." "Fishes swimming in ocean camera moving." "A squirrel on a table full of big nuts." "Enter the fascinating world of bees..."
"A hummingbird flutters among colorful flowers..." "Drone fly to a mansion in a tropical forest." "Flying above a breathtaking limestone structure..." "Beneath the majestic Northern Lights of Iceland."

BibTeX

If you use our work in your research, please cite our publication:

@article{streamingt2v,
    title={StreamingT2V: Consistent, Dynamic, and Extendable Long Video Generation from Text},
    author={Henschel, Roberto and Khachatryan, Levon and Hayrapetyan, Daniil and Poghosyan, Hayk and Tadevosyan, Vahram and Wang, Zhangyang and Navasardyan, Shant and Shi, Humphrey},
    journal={arXiv preprint arXiv:2403.14773},
    year={2024}
}