Guided Motion Diffusion for Controllable Human Motion Synthesis

Korrawe Karunratanakul, Konpat Preechakul, Supasorn Suwajanakorn, Siyu Tang

2023 · DOI: 10.1109/ICCV51070.2023.00205
IEEE International Conference on Computer Vision · Citations: 141

TLDR

This work proposes an effective feature projection scheme that manipulates the motion representation to improve coherency between spatial information and local poses, and introduces a new dense guidance approach that converts a sparse spatial signal into denser signals, guiding the generated motion toward the given constraints.

Abstract

Denoising diffusion models have shown great promise in human motion synthesis conditioned on natural language descriptions. However, integrating spatial constraints, such as pre-defined motion trajectories and obstacles, remains a challenge despite being essential for bridging the gap between isolated human motion and its surrounding environment. To address this issue, we propose Guided Motion Diffusion (GMD), a method that incorporates spatial constraints into the motion generation process. Specifically, we propose an effective feature projection scheme that manipulates motion representation to enhance the coherency between spatial information and local poses. Together with a new imputation formulation, the generated motion can reliably conform to spatial constraints such as global motion trajectories. Furthermore, given sparse spatial constraints (e.g. sparse keyframes), we introduce a new dense guidance approach to turn a sparse signal, which is susceptible to being ignored during the reverse steps, into denser signals to guide the generated motion to the given constraints. Our extensive experiments justify the development of GMD, which achieves a significant improvement over state-of-the-art methods in text-based motion generation while allowing control of the synthesized motions with spatial constraints.
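The two key ideas in the abstract — imputing known spatial constraints into the denoised prediction, and densifying a sparse guidance signal so it is not ignored during the reverse steps — can be illustrated with a toy diffusion sampler. The sketch below is not the authors' implementation: the denoiser, noise schedule, constraint-loss gradient, and time-axis smoothing kernel are all simplified stand-ins chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a motion is (frames, dims); dims 0-1 stand in for the global root trajectory.
FRAMES, DIMS, STEPS = 16, 4, 50
betas = np.linspace(1e-4, 0.02, STEPS)
alpha_bars = np.cumprod(1.0 - betas)

def toy_denoiser(x_t, t):
    """Stand-in for a trained motion-diffusion network predicting x0 from x_t (hypothetical)."""
    return x_t / np.sqrt(alpha_bars[t])

def constraint_grad(x0, keyframes):
    """Gradient of the sparse keyframe loss sum_f ||x0[f, :2] - target_f||^2 w.r.t. x0."""
    g = np.zeros_like(x0)
    for f, target in keyframes.items():
        g[f, :2] = 2.0 * (x0[f, :2] - target)
    return g

def densify(grad, radius=2):
    """Spread the sparse per-frame gradient to neighbouring frames with a moving
    average along time -- a crude analogue of turning a sparse signal into a denser one."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, grad)

def sample(keyframes, guide_scale=0.3):
    x = rng.normal(size=(FRAMES, DIMS))
    for t in reversed(range(STEPS)):
        x0 = toy_denoiser(x, t)
        # Dense guidance: nudge the prediction toward the sparse constraints,
        # with the gradient smeared over neighbouring frames.
        x0 = x0 - guide_scale * densify(constraint_grad(x0, keyframes))
        # Imputation: overwrite the predicted trajectory at constrained frames.
        for f, target in keyframes.items():
            x0[f, :2] = target
        # DDIM-style deterministic step back to x_{t-1}.
        ab_prev = alpha_bars[t - 1] if t > 0 else 1.0
        eps = (x - np.sqrt(alpha_bars[t]) * x0) / np.sqrt(1.0 - alpha_bars[t])
        x = np.sqrt(ab_prev) * x0 + np.sqrt(1.0 - ab_prev) * eps
    return x

# Constrain the trajectory at the first and last frames.
keyframes = {0: np.array([0.0, 0.0]), FRAMES - 1: np.array([1.0, 2.0])}
motion = sample(keyframes)
```

Because imputation is applied after guidance at every step, the final sample matches the keyframe trajectory exactly, while the densified gradient pulls the intermediate frames into a coherent path between the constraints.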