Background: Considerable research has been conducted on audio-driven virtual character gestures and facial animation, with some degree of success. However, few methods exist for generating full-body animations, and the portability of virtual character gestures and facial animations has not received sufficient attention. Methods: We therefore propose a deep-learning-based audio-to-animation-and-blendshape (Audio2AB) network that generates gesture animations and ARKit's 52 facial expression blendshape weights from audio, audio-corresponding text, emotion labels, and semantic relevance labels, producing parametric data for full-body animations. This parameterization can be used to drive full-body animations of virtual characters and improve their portability. In the experiment, we first downsampled the gesture and facial data so that the input, output, and facial data shared the same temporal resolution. The Audio2AB network then encoded the audio, audio-corresponding text, emotion labels, and semantic relevance labels, and fused the text, emotion, and semantic relevance labels into the audio to obtain richer audio features. Finally, we established links between the body, gesture, and facial decoders and generated the corresponding animation sequences through our proposed GAN-GF loss function. Results: Using audio, audio-corresponding text, and emotion and semantic relevance labels as input, the trained Audio2AB network generated gesture animation data containing blendshape weights, so that different 3D virtual character animations could be created through parameterization. Conclusions: The experimental results showed that the proposed method can generate expressive gestures and facial animations.
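The fusion-then-decode pipeline described above can be sketched as follows. This is an illustrative toy example, not the authors' implementation: all dimensions, the one-hot emotion encoding, and the two linear stand-in decoders are assumptions chosen for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 30             # frames after downsampling to a shared temporal resolution
D_AUDIO = 64       # per-frame audio feature size (assumed)
D_TEXT = 32        # text embedding size (assumed)
N_EMOTIONS = 4     # emotion label vocabulary size (assumed)
N_BLEND = 52       # ARKit blendshape weights per frame
N_JOINTS = 21 * 3  # e.g. 21 body joints x 3 rotation channels (assumed)

def fuse(audio, text, emotion_id, relevance):
    """Broadcast the per-utterance conditions over every audio frame."""
    emotion = np.eye(N_EMOTIONS)[emotion_id]             # one-hot emotion label
    cond = np.concatenate([text, emotion, [relevance]])  # (D_TEXT + N_EMOTIONS + 1,)
    cond = np.tile(cond, (audio.shape[0], 1))            # repeat for each frame
    return np.concatenate([audio, cond], axis=1)         # (T, D_fused)

def decode(fused, w_body, w_face):
    """Two linear heads standing in for the body/gesture and face decoders."""
    body = fused @ w_body                                # (T, N_JOINTS) rotations
    face = 1.0 / (1.0 + np.exp(-(fused @ w_face)))       # blendshape weights in [0, 1]
    return body, face

audio = rng.standard_normal((T, D_AUDIO))
text = rng.standard_normal(D_TEXT)
fused = fuse(audio, text, emotion_id=2, relevance=1.0)

D_FUSED = fused.shape[1]
body, face = decode(
    fused,
    rng.standard_normal((D_FUSED, N_JOINTS)) * 0.01,
    rng.standard_normal((D_FUSED, N_BLEND)) * 0.01,
)
print(body.shape, face.shape)  # (30, 63) (30, 52)
```

In the actual network the decoders would be trained jointly under the GAN-GF loss; the sketch only shows how per-utterance labels can be fused into per-frame audio features before decoding into joint rotations and 52 blendshape weights.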
A virtual character is a fictitious creature with distinctive characteristics designed by people, and Disney virtual characters are the Intellectual Property (IP) images that appear in Disneyland and Disney movies. This investigation aimed to explore why many younger females are keen to spend money on Disney virtual characters. The paper adopted the Marketing Mix Theory (product, price, placement, and promotion; the 4Ps) together with the SWOT analysis method. It investigated the relationship between the 4Ps and consumers' purchase intentions and found that, in this Disney case, unique design and effective promotion strengthened purchase intention, whereas higher prices and less accessible placements weakened it. Thus, the high price and limited places somewhat inhibit customers' desire to buy; nevertheless, owing to the attractiveness of the product itself and its promotion on the internet, the target consumers are still willing to consume.
The rational design of organic functional devices relies on understanding structure-property-performance relationships through multi-scale characterization. However, traditional characterizations are costly and require multidisciplinary expertise. Here we present OCNet, a domain-knowledge-enhanced representation learning framework that, for the first time, enables unified virtual characterization from molecules to devices. Pre-trained on over ten million self-generated conjugated molecules and dimers, OCNet learns generalizable microscopic representations comparable to expert-crafted features. As a result, it surpasses state-of-the-art models by over 20% in predicting key computed and experimental molecular optoelectronic properties. OCNet further provides the first transferable model for predicting transfer integrals in thin films, enabling accurate mesoscale carrier mobility estimation via multiscale simulations. By integrating tight-binding-level electronic descriptors, OCNet achieves near real-time, accurate prediction of device power conversion efficiency. Together, OCNet offers a unified and scalable foundation for virtual characterization of organic materials across multiple scales, with broad applicability in photovoltaics, displays, and sensing.
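To illustrate how predicted transfer integrals feed into a mesoscale mobility estimate, the sketch below uses standard Marcus hopping theory with the Einstein relation. This is not OCNet's multiscale simulation: the Marcus model is a common textbook route, and the transfer integral, reorganisation energy, and hopping distance are made-up demonstration numbers.

```python
import math

HBAR = 6.582119569e-16  # reduced Planck constant, eV*s
KB = 8.617333262e-5     # Boltzmann constant, eV/K

def marcus_rate(J, lam, T=300.0):
    """Marcus hopping rate (1/s) for transfer integral J and reorganisation
    energy lam, both in eV, assuming zero site-energy difference."""
    return (J**2 / HBAR) * math.sqrt(math.pi / (lam * KB * T)) * math.exp(-lam / (4 * KB * T))

def mobility_1d(J, lam, d_cm, T=300.0):
    """Einstein-relation mobility (cm^2/Vs) for 1D hopping with spacing d_cm."""
    k = marcus_rate(J, lam, T)
    D = 0.5 * k * d_cm**2   # 1D diffusion coefficient, cm^2/s
    return D / (KB * T)     # mu = e*D / (kB*T); e = 1 when energies are in eV

# Assumed demonstration values: J = 50 meV, lam = 0.2 eV, d = 4 Angstrom
mu = mobility_1d(J=0.05, lam=0.2, d_cm=4e-8)
print(f"{mu:.3g} cm^2/Vs")
```

With these illustrative parameters the model yields a mobility on the order of 0.1-1 cm^2/Vs, the range typical of good organic semiconductors; in a real workflow the predicted transfer integrals for every neighbouring pair in the thin-film morphology would enter a kinetic Monte Carlo or master-equation simulation rather than this single-path estimate.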
Funding: Supported by the National Natural Science Foundation of China (62277014), the National Key Research and Development Program of China (2020YFC1523100), and the Fundamental Research Funds for the Central Universities of China (PA2023GDSK0047).
Funding: Supported in part by NSFC's Major Research Project 92270001; Z.Z.'s work is supported in part by the Beijing Nova Program (20250484934).