VideoDrafter: Content-Consistent Multi-Scene Video Generation with LLM
CoRR(2024)
Abstract
The recent innovations and breakthroughs in diffusion models have
significantly expanded the possibilities of generating high-quality videos for
the given prompts. Most existing works tackle the single-scene scenario, with
only one video event occurring against a single background. Extending to
multi-scene video generation is nevertheless non-trivial: it requires carefully
managing the logic between scenes while preserving a consistent visual
appearance of key content across them. In this paper, we propose a novel
framework, namely VideoDrafter, for content-consistent multi-scene video
generation. Technically, VideoDrafter leverages a Large Language Model (LLM) to
convert the input prompt into a comprehensive multi-scene script that benefits
from the logical knowledge learned by the LLM. The script for each scene
includes a prompt describing the event, the foreground/background entities, and
the camera movement. VideoDrafter identifies the entities common throughout the
script and asks the LLM to detail each of them. The resulting entity
descriptions are then fed into a text-to-image model to generate a reference
image for each entity. Finally, VideoDrafter outputs a multi-scene video by
generating each scene's video via a diffusion process that takes into account
the reference images, the descriptive prompt of the event, and the camera
movement. The diffusion model incorporates the reference images as conditions
and alignment signals to strengthen the content consistency of multi-scene
videos. Extensive experiments demonstrate that VideoDrafter outperforms the
SOTA video generation models in terms of visual quality, content consistency,
and user preference.
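The staged pipeline described in the abstract can be sketched as follows. This is a minimal, hypothetical outline with all model calls stubbed out; the function names, script fields, and return values are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of the VideoDrafter pipeline from the abstract.
# All model calls are stubs; names and data shapes are assumptions.

def llm_write_script(prompt):
    # Stage 1: the LLM expands the user prompt into a multi-scene script.
    # Each scene records an event prompt, its entities, and a camera movement.
    return [
        {"event": f"{prompt} - scene {i}",
         "entities": ["protagonist", "background"],
         "camera": "static"}
        for i in range(1, 3)
    ]

def llm_describe_entity(entity):
    # Stage 2: the LLM writes a detailed description of each common entity.
    return f"detailed description of {entity}"

def text_to_image(description):
    # Stage 3: a text-to-image model renders one reference image per entity
    # (stubbed here as a string placeholder).
    return f"reference_image({description})"

def generate_scene_video(scene, reference_images):
    # Stage 4: a diffusion model generates the scene video, conditioned on
    # the reference images, the event prompt, and the camera movement.
    return f"video({scene['event']}, cam={scene['camera']}, refs={len(reference_images)})"

def video_drafter(prompt):
    script = llm_write_script(prompt)
    # Collect entities shared across scenes so their visual appearance
    # stays consistent from one scene to the next.
    common = sorted({e for scene in script for e in scene["entities"]})
    refs = {e: text_to_image(llm_describe_entity(e)) for e in common}
    return [generate_scene_video(scene, refs) for scene in script]

videos = video_drafter("a cat exploring a garden")  # one clip per scene
```

The key design point the abstract emphasizes is that the same per-entity reference images condition every scene's diffusion process, which is what ties the scenes together visually.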