Funding: This work was partly funded by the European Union's Horizon 2020 Research and Innovation Programme under Agreement No. 952147 (Invictus), as well as by the German Federal Ministry of Education and Research (BMBF) through the Research Program MoDL under Contract No. 01 IS 20044.
Abstract: Accurate and temporally consistent modeling of human bodies is essential for a wide range of applications, including character animation, understanding human social behavior, and AR/VR interfaces. Capturing human motion accurately from a monocular image sequence remains challenging; modeling quality is strongly influenced by the temporal consistency of the captured body motion. Our work presents an elegant solution for integrating temporal constraints during fitting, which increases both temporal consistency and robustness during optimization. In detail, we derive parameters of a sequence of body models, representing the shape and motion of a person. We optimize these parameters over the complete image sequence, fitting a single consistent body shape while imposing temporal consistency on the body motion, assuming body joint trajectories to be linear over short time spans. Our approach enables the derivation of realistic 3D body models from image sequences, including jaw pose, facial expression, and articulated hands. Our experiments show that our approach accurately estimates body shape and motion, even for challenging movements and poses. Further, we apply it to the particular application of sign language analysis, where accurate and temporally consistent motion modeling is essential, and show that the approach is well suited to this kind of application.
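To make the temporal prior concrete, the sketch below shows one way such a constraint could be expressed: if joint trajectories are assumed to be locally linear, the second finite difference of the joint positions over time should vanish, so penalizing it encourages temporally consistent motion. This is only an illustrative reading of the abstract, not the authors' implementation; the array layout, function names, and the form of the data term are assumptions.

```python
import numpy as np

def temporal_consistency_energy(joints_3d, weight=1.0):
    """Smoothness prior for locally linear joint trajectories.

    joints_3d: (T, J, 3) array of 3D joint positions over T frames
    (an assumed layout, not taken from the paper). For perfectly
    linear motion, x[t-1] - 2*x[t] + x[t+1] == 0, so the squared
    second finite difference penalizes deviation from short-term
    linearity.
    """
    accel = joints_3d[:-2] - 2.0 * joints_3d[1:-1] + joints_3d[2:]
    return weight * np.sum(accel ** 2)

def sequence_energy(joints_3d, data_term, lambda_temporal=10.0):
    """Combined objective over the whole sequence: a per-frame data
    term (e.g. 2D keypoint reprojection error, passed as a callable)
    plus the temporal prior above."""
    return data_term(joints_3d) + temporal_consistency_energy(
        joints_3d, weight=lambda_temporal)
```

In such a formulation, the single consistent body shape would be shared across all frames, while per-frame pose parameters are coupled only through the temporal term.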
Funding: Partially funded by the German Science Foundation (Grant No. DFG EI524/2-1) and by the European Commission (Grant Nos. FP7-288238 SCENE and H2020-644629 AutoPost).
Abstract: In this paper, we present a novel approach for assessing and interacting with surface tracking algorithms targeting video manipulation in postproduction. As tracking inaccuracies are unavoidable, we enable the user to provide small hints to the algorithms instead of correcting erroneous results afterwards. Based on 2D mesh warp-based optical flow estimation, we visualize results and provide tools for user feedback in a consistent reference system: texture space. In this space, accurate tracking results are reflected by a static appearance, and errors can easily be spotted as apparent change. A variety of established tools can be utilized to visualize and assess the change between frames. User interaction to improve tracking results becomes more intuitive in texture space, as it can focus on a small region rather than a moving object. We show how established tools can be implemented for interaction in texture space, providing a more intuitive interface that allows more effective and accurate user feedback.
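As a rough illustration of the texture-space idea, the sketch below resamples each frame into texture space using dense lookup maps derived from the tracked mesh (producing those maps from the 2D mesh warp is not shown) and highlights tracking errors as frame-to-frame change. Function names and the map representation are assumptions, not the paper's actual interface.

```python
import cv2

def to_texture_space(frame, map_x, map_y):
    """Resample a video frame into texture space.

    map_x, map_y: float32 arrays at texture resolution giving, per
    texel, the tracked image coordinates (assumed to be derived from
    the mesh-warp tracking result; not computed here).
    """
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)

def apparent_change(tex_prev, tex_curr):
    """With accurate tracking the texture-space appearance is static,
    so per-texel differences between consecutive frames highlight
    tracking errors."""
    diff = cv2.absdiff(tex_prev, tex_curr)
    return cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
```

Any standard image-comparison tool could then be applied to the returned difference image to assess the change between frames, as described in the abstract.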