Abstract: Recently, there has been an upsurge of activity in image-based non-photorealistic rendering (NPR), and in particular portrait image stylisation, due to the advent of neural style transfer (NST). However, the state of performance evaluation in this field is poor, especially compared to the norms in the computer vision and machine learning communities. Unfortunately, the task of evaluating image stylisation is thus far not well defined, since it involves subjective, perceptual, and aesthetic aspects. To make progress towards a solution, this paper proposes a new structured, three-level benchmark dataset for the evaluation of stylised portrait images. Rigorous criteria were used for its construction, and its consistency was validated by user studies. Moreover, a new methodology has been developed for evaluating portrait stylisation algorithms, which makes use of the different benchmark levels as well as annotations provided by user studies regarding the characteristics of the faces. We evaluate a wide variety of image stylisation methods (both portrait-specific and general-purpose, covering both traditional NPR approaches and NST) using the new benchmark dataset.
Funding: Supported by the National Natural Science Foundation of China (No. 62203476) and the Natural Science Foundation of Shenzhen (No. JCYJ20230807120801002).
Abstract: Extensive research has explored human motion generation, but the generated sequences are influenced by different motion styles. For instance, walking with joy and walking with sorrow evoke distinct effects on a character's motion. Because capturing motion with distinct styles is difficult, the data available for style research are also limited. To address these problems, we propose ASMNet, an action- and style-conditioned motion generative network. This network ensures that the generated human motion sequences not only comply with the provided action label but also exhibit distinctive stylistic features. To extract motion features from human motion sequences, we design a spatial-temporal extractor. Moreover, we use an adaptive instance normalization (AdaIN) layer to inject style into the target motion. Our results are comparable to state-of-the-art approaches and display a substantial advantage in both quantitative and qualitative evaluations. The code is available at https://github.com/ZongYingLi/ASMNet.git.
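The abstract does not spell out ASMNet's layer implementation, but the adaptive instance normalization operation it builds on is standard: normalize the content features per channel, then re-scale them with the style features' statistics. A minimal sketch of that operation (the function name and the one-channel list representation are illustrative, not taken from the ASMNet code) is:

```python
import math

def adain(content, style, eps=1e-5):
    """Adaptive instance normalization for one feature channel.

    Shifts and scales the content features so their mean and standard
    deviation match those of the style features:
        AdaIN(x, y) = sigma(y) * (x - mu(x)) / sigma(x) + mu(y)
    """
    mu_c = sum(content) / len(content)
    mu_s = sum(style) / len(style)
    var_c = sum((v - mu_c) ** 2 for v in content) / len(content)
    var_s = sum((v - mu_s) ** 2 for v in style) / len(style)
    sigma_c = math.sqrt(var_c + eps)  # eps guards against zero variance
    sigma_s = math.sqrt(var_s + eps)
    return [sigma_s * (v - mu_c) / sigma_c + mu_s for v in content]
```

Because the operation only transfers first- and second-order statistics, it carries the "style" signal while leaving the normalized content structure intact, which is why it is a common choice for injecting style into generative motion or image networks.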