ASMNet: Action and Style-Conditioned Motion Generative Network for 3D Human Motion Generation
Authors: Zongying Li, Yong Wang, Xin Du, Can Wang, Reinhard Koch, Mengyuan Liu. Cyborg and Bionic Systems, 2024, Issue 1, pp. 699-708 (10 pages).
Extensive research has explored human motion generation, but the generated sequences are influenced by different motion styles. For instance, the act of walking with joy or with sorrow evokes distinct effects on a character's motion. Due to the difficulties of capturing motion with styles, the available data for style research are also limited. To address these problems, we propose ASMNet, an action- and style-conditioned motion generative network. This network ensures that the generated human motion sequences not only comply with the provided action label but also exhibit distinctive stylistic features. To extract motion features from human motion sequences, we design a spatial-temporal extractor. Moreover, we use an adaptive instance normalization layer to inject style into the target motion. Our results are comparable to state-of-the-art approaches and display a substantial advantage in both quantitative and qualitative evaluations. The code is available at https://github.com/ZongYingLi/ASMNet.git.
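The abstract describes injecting style into the target motion through an adaptive instance normalization (AdaIN) layer. Below is a minimal PyTorch sketch of that general technique, not the authors' actual implementation from the ASMNet repository; the module name, feature dimensions, and tensor layout are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive instance normalization: normalize content features, then
    re-scale and shift them with statistics predicted from a style code."""
    def __init__(self, style_dim, num_features):
        super().__init__()
        # Hypothetical affine head mapping a style code to per-channel gamma/beta.
        self.affine = nn.Linear(style_dim, num_features * 2)
        self.norm = nn.InstanceNorm1d(num_features, affine=False)

    def forward(self, content, style):
        # content: (batch, channels, time) motion features from a feature extractor
        # style:   (batch, style_dim) style embedding
        gamma, beta = self.affine(style).chunk(2, dim=1)
        normalized = self.norm(content)
        return gamma.unsqueeze(-1) * normalized + beta.unsqueeze(-1)

# Usage sketch with made-up dimensions: 60-frame motion features, 64-dim style code.
adain = AdaIN(style_dim=64, num_features=256)
motion_features = torch.randn(8, 256, 60)
style_code = torch.randn(8, 64)
stylized = adain(motion_features, style_code)
print(stylized.shape)  # torch.Size([8, 256, 60])
```

In this kind of design, the instance normalization strips the content features of their channel-wise statistics, and the style code re-supplies them, which is how a single generator can render the same action with different styles.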
Keywords: action-conditioned, motion generative network, style-conditioned, 3D human motion generation, spatial-temporal extractor, style research, motion capture, human motion generation