Denoising Diffusion Probabilistic Models for Action-Conditioned 3D Motion Generation
Bin Ren (Supervision)
2024-01-01
Abstract
Diffusion-based generative models have proven to be highly effective in various domains of synthesis. In this work, we propose a conditional paradigm utilizing the denoising diffusion probabilistic model (DDPM) to address the challenge of realistic and diverse action-conditioned 3D skeleton-based motion generation. The proposed method leverages bidirectional Markov chains to generate samples by inferring the reversed Markov chain based on the learned distribution mapping during the forward diffusion process. To the best of our knowledge, our work is the first to employ DDPM to synthesize a variable number of motion sequences conditioned on a categorical action. The proposed method is evaluated on the NTU RGB+D dataset and the NTU RGB+D two-person dataset, showing significant improvements over state-of-the-art motion generation methods.
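The forward/reverse Markov chains described above can be illustrated with a minimal numerical sketch. This is not the paper's implementation: the noise schedule, step count, and the use of the true noise in place of a trained noise-prediction network are all illustrative assumptions. It shows only the standard DDPM closed-form forward noising and a single reverse denoising step applied to a skeleton motion tensor.

```python
import numpy as np

# Illustrative DDPM sketch for motion tensors of shape (frames, joints, 3).
# T, the linear beta schedule, and the oracle noise below are assumptions
# for demonstration, not the paper's actual configuration.

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative products \bar{alpha}_t

def q_sample(x0, t, rng):
    """Forward process: noise a clean motion x0 to step t in closed form."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def p_sample(xt, t, eps_hat, rng):
    """One reverse step: estimate x_{t-1} from x_t and predicted noise."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_hat) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

rng = np.random.default_rng(0)
x0 = rng.standard_normal((60, 25, 3))   # 60 frames, 25 joints (NTU skeleton)
xt, eps = q_sample(x0, 500, rng)
x_prev = p_sample(xt, 500, eps, rng)    # oracle noise stands in for a network
print(xt.shape, x_prev.shape)
```

In the actual method, `eps_hat` would come from a neural network conditioned on the categorical action label and the timestep; here the true noise is reused only so the sketch runs end to end.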


