The emergence of diffusion models has greatly propelled progress in image and video generation. Recently, some efforts have been made in controllable video generation, including text-to-video and image-to-video generation, video editing, and video motion control, among which camera motion control is an important topic. However, existing camera motion control methods rely on training a temporal camera module and require substantial computational resources due to the large number of parameters in video generation models. Moreover, existing methods pre-define camera motion types during training, which limits their flexibility and prevents specific camera controls, such as the varied camera movements used in films. Therefore, to reduce training costs and achieve flexible camera control, we propose MotionMaster, a novel training-free video motion transfer model, which disentangles camera motions and object motions in source videos and transfers the extracted camera motions to new videos. We first propose a one-shot camera motion disentanglement method to extract camera motion from a single source video, which separates the moving objects from the background and estimates the camera motion in the moving-object regions from the motion in the background by solving a Poisson equation. Furthermore, we propose a few-shot camera motion disentanglement method to extract the common camera motion from multiple videos with similar camera motions, which employs a window-based clustering technique to extract the common features in the temporal attention maps of multiple videos. Finally, we propose a motion combination method that combines different types of camera motions, enabling more controllable and flexible camera control. Extensive experiments demonstrate that our training-free approach can effectively decouple camera and object motions and apply the decoupled camera motion to a wide range of controllable video generation tasks, achieving flexible and diverse camera motion control.
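As a concrete illustration of the Poisson-equation step, below is a minimal NumPy sketch, not the released implementation: it assumes the source video's motion has already been reduced to a dense 2-D motion field and that a binary mask of the moving objects is available (the function name `inpaint_camera_motion` and both inputs are illustrative). Solving the homogeneous Poisson (Laplace) equation fills in the camera motion under the moving objects from the surrounding background motion.

```python
import numpy as np

def inpaint_camera_motion(motion, object_mask, n_iters=2000):
    """Fill in camera motion under moving objects by solving the
    homogeneous Poisson (Laplace) equation, with the background
    motion acting as Dirichlet boundary conditions.

    motion:      (H, W, 2) motion field; reliable only in the background
    object_mask: (H, W) bool, True where moving objects hide the camera motion
    """
    field = motion.astype(np.float64)  # astype copies; boundary values stay fixed
    field[object_mask] = 0.0           # unknown region, to be relaxed
    for _ in range(n_iters):
        # Jacobi relaxation: each unknown pixel becomes the average of its
        # four neighbours, driving the discrete Laplacian toward zero.
        # (np.roll wraps at the borders; we assume the mask stays interior.)
        avg = 0.25 * (
            np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0)
            + np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)
        )
        field[object_mask] = avg[object_mask]
    return field

# Example: a uniform background pan, inpainted into a masked object region.
H, W = 64, 64
motion = np.tile(np.array([1.0, 0.5]), (H, W, 1))  # (H, W, 2) background pan
mask = np.zeros((H, W), dtype=bool)
mask[24:40, 24:40] = True                          # moving-object region
camera_motion = inpaint_camera_motion(motion, mask)
```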
Our demo is divided into two parts: first, all of our experimental results; second, a gallery showcasing various effects.
We find that the temporal attention maps determine the generated video motions. By switching the temporal attention maps between the two videos in the first row, we obtain the results in the second row, whose video motions are completely switched.
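To make the swap concrete, here is a minimal PyTorch sketch (a simplification, not the video diffusion backbone's actual interface: the single-head formulation and tensor shapes are assumptions). Each pixel attends over frames, and substituting one video's attention probabilities into the other's generation transplants its motion.

```python
import torch

def temporal_attention(q, k, v, override_map=None):
    """Single-head temporal self-attention over the frame axis.

    q, k, v: (num_pixels, frames, dim). If override_map of shape
    (num_pixels, frames, frames) is given, it replaces the computed
    attention probabilities -- the substitution that transfers motion.
    """
    if override_map is None:
        scores = q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5
        attn = scores.softmax(dim=-1)   # (num_pixels, frames, frames)
    else:
        attn = override_map             # swapped-in motion from another video
    return attn @ v, attn

# Run each video once to cache its temporal attention map, then re-run
# each with the other's map substituted (random tensors stand in for
# real queries/keys/values here).
P, T, D = 32 * 32, 16, 64  # pixels, frames, channel dim (illustrative)
qa, ka, va = (torch.randn(P, T, D) for _ in range(3))
qb, kb, vb = (torch.randn(P, T, D) for _ in range(3))
_, map_a = temporal_attention(qa, ka, va)
_, map_b = temporal_attention(qb, kb, vb)
a_feats_with_b_motion, _ = temporal_attention(qa, ka, va, override_map=map_b)
b_feats_with_a_motion, _ = temporal_attention(qb, kb, vb, override_map=map_a)
```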
Here are the one-shot camera motion transfer results compared to AnimateDiff+LoRA and MotionCtrl.
Here are the few-shot camera motion transfer results compared to AnimateDiff+LoRA and MotionCtrl.
Here are the results of combining different camera motions.
Here are the results of variable-speed zoom in real films.
Here are ablation studies on the one-shot camera motion disentanglement.
Here are ablation studies on the few-shot camera motion disentanglement.
The camera in the background zooms in while the camera in the foreground remains fixed.
The camera in the background zooms out while the camera in the foreground remains fixed.