A solution for a virtual human skeleton system is proposed, and several issues in integrating anatomical geometry, biodynamics, and computer animation are studied. The detailed skeleton-system model, which incorporates the biodynamic and geometric characteristics of the human skeleton, supports performance studies in greater detail than previously possible. When personalized anatomical data are supplied, the model provides an effective and convenient way to analyze and evaluate the movement performance of a human body. An example shows that the proposed solution is effective for the stated problems.
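The abstract above describes integrating anatomical geometry with animation but gives no implementation detail. As a hedged illustration only (not the paper's actual model; all names here are hypothetical), the geometric core of such a skeleton system is forward kinematics: chaining per-joint rotations over anatomically supplied bone lengths to recover world-space joint positions. A minimal planar sketch:

```python
import numpy as np

def rot2d(theta):
    """2-D rotation matrix for a joint angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def forward_kinematics(bone_lengths, joint_angles):
    """World positions of each joint in a planar kinematic chain.

    bone_lengths: segment lengths (e.g. from personalized anatomical data)
    joint_angles: rotation of each segment relative to its parent
    Returns an array of joint positions, root first.
    """
    pos = np.zeros(2)
    world_angle = 0.0
    positions = [pos.copy()]
    for length, theta in zip(bone_lengths, joint_angles):
        world_angle += theta                     # accumulate parent rotations
        pos = pos + rot2d(world_angle) @ np.array([length, 0.0])
        positions.append(pos.copy())
    return np.array(positions)
```

A real model would use 3-D rotations and a branching joint hierarchy, but the accumulation of parent transforms is the same.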
Multimodal action recognition methods have achieved great success using the pose and RGB modalities. However, skeleton sequences lack appearance information, and RGB images suffer from irrelevant noise due to modality limitations. To address this, the authors introduce the human parsing feature map as a novel modality, since it selectively retains effective semantic features of the body parts while filtering out most irrelevant noise. The authors propose a new dual-branch framework called the Ensemble Human Parsing and Pose Network (EPP-Net), which is the first to leverage both the skeleton and human parsing modalities for action recognition. The human pose branch feeds robust skeletons into a graph convolutional network to model pose features, while the human parsing branch leverages depictive parsing feature maps to model parsing features via convolutional backbones. The two high-level features are then combined through a late fusion strategy for better action recognition. Extensive experiments on the NTU RGB+D and NTU RGB+D 120 benchmarks consistently verify the effectiveness of the proposed EPP-Net, which outperforms existing action recognition methods. The code is available at https://github.com/liujf69/EPP-Net-Action.
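The late-fusion step described above can be sketched as score-level averaging of the two branches' class distributions. This is a minimal illustration, not EPP-Net's exact scheme; the fusion weight `alpha` and the function names are hypothetical:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax over class logits."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def late_fuse(pose_logits, parsing_logits, alpha=0.5):
    """Weighted average of the two branches' class probabilities.

    pose_logits, parsing_logits: per-class scores from the GCN pose
    branch and the CNN parsing branch. alpha weights the pose branch.
    """
    return alpha * softmax(pose_logits) + (1 - alpha) * softmax(parsing_logits)
```

Late fusion keeps the two backbones independent at training time; only the normalized scores are combined, so either branch can be swapped without retraining the other.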
Aim: To perform a vector 3D reconstruction of the neck skeleton from the anatomical sections of the “Korean Visible Human” for educational purposes. Material and Methods: The anatomical subject was a 33-year-old Korean male who died of leukemia; he measured 164 cm and weighed 55 kg. The anatomical cuts were made in 2010 after an MRI and a CT scan. A special saw (cryomacrotome) was used to cut the frozen body into 5960 slices, each 0.2 mm thick. Sections numbered 1500 to 2000 (500 neck sections) were used for this study. Manual contour segmentation of each anatomical element of the anterior neck region was done using Winsurf software version 3.5 on a PC. Results: Our vector 3D neck model includes the cervical vertebrae, the hyoid bone, the manubrium of the sternum, and the clavicles. This vector model has been integrated into the virtual dissection table Diva3d, a new educational tool used by universities and medical schools for learning anatomy. The model was also published on the Sketchfab website and printed in 3D using an ENDER 3 printer. Conclusion: This original work is a remarkable educational tool for studying the skeleton of the neck and can also serve as a 3D atlas for simulation purposes in training therapeutic gestures.
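As a back-of-envelope check on the acquisition parameters stated above: 500 sections at 0.2 mm spacing span roughly a 100 mm neck region. A hypothetical helper (not from the study) mapping a section index to its approximate physical depth:

```python
# Parameters stated in the study; the helper itself is illustrative.
SLICE_THICKNESS_MM = 0.2     # cryomacrotome cut thickness
FIRST_NECK_SECTION = 1500    # first section used for the neck model
LAST_NECK_SECTION = 2000     # last section used for the neck model

def section_depth_mm(index, origin=FIRST_NECK_SECTION):
    """Approximate depth of a section relative to the first neck section."""
    return (index - origin) * SLICE_THICKNESS_MM
```

Such a mapping is what aligns stacked 2D segmentation contours along the body axis before surface reconstruction.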
In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy declines significantly when the data are occluded. To improve the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: a Joint Interpolation Module (JI Module), a Multi-scale Temporal Convolution Network (MS-TCN), and a Suppression Graph Convolutional Network (SGCN). The JI Module completes spatially occluded skeletal joints using K-Nearest Neighbors (KNN) interpolation. The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait, compensating for the temporal occlusion of gait information. The SGCN extracts more non-prominent human gait features by suppressing the extraction of key body-part features, thereby reducing the negative impact of occlusion on emotion recognition results. The proposed method is evaluated on two comprehensive datasets: Emotion-Gait, containing 4227 real gaits from sources such as BML, ICT-Pollick, and ELMD plus 1000 synthetic gaits generated with STEP-Gen technology; and ELMB, consisting of 3924 gaits, 1835 of which are labeled with the emotions “Happy,” “Sad,” “Angry,” and “Neutral.” On the standard Emotion-Gait and ELMB datasets, the proposed method achieved accuracies of 0.900 and 0.896, respectively, performing on par with other state-of-the-art methods. Furthermore, on the occluded datasets, the proposed method significantly mitigates the performance degradation caused by occlusion, achieving markedly higher accuracy than competing methods.
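One plausible reading of the JI Module's KNN interpolation is sketched below. This is an assumption-laden illustration, not the paper's implementation: the template-pose neighbour metric and the offset correction are hypothetical choices. Each occluded joint is imputed from its k nearest visible joints, with nearness measured in a canonical rest pose:

```python
import numpy as np

def knn_impute(frame, template, visible, k=3):
    """Impute occluded joints in one skeleton frame (hypothetical sketch).

    frame:    (J, C) observed joint coordinates; occluded rows are ignored
    template: (J, C) canonical rest pose used to find spatial neighbours
    visible:  (J,) boolean mask, True where the joint was observed
    Each occluded joint j becomes the mean observed position of its k
    nearest visible joints, shifted by j's template offset from them.
    """
    out = frame.copy()
    vis_idx = np.flatnonzero(visible)
    for j in np.flatnonzero(~visible):
        # distances to visible joints, measured in the template pose
        d = np.linalg.norm(template[vis_idx] - template[j], axis=1)
        nn = vis_idx[np.argsort(d)[:k]]
        # preserve j's relative placement among its neighbours
        offset = template[j] - template[nn].mean(axis=0)
        out[j] = frame[nn].mean(axis=0) + offset
    return out
```

Under a rigid translation of the whole skeleton, this recovers the occluded joint exactly; under articulation it is only an approximation, which is why the MS-TCN and SGCN stages are still needed downstream.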
Funding for the EPP-Net study: National Natural Science Foundation of China (62203476); Natural Science Foundation of Guangdong Province (2024A1515012089); Natural Science Foundation of Shenzhen (JCYJ20230807120801002); Shenzhen Innovation in Science and Technology Foundation for the Excellent Youth Scholars (RCYX20231211090248064).
Funding for the MS-GCN study: National Natural Science Foundation of China (62272049, 62236006, 62172045); Key Projects of Beijing Union University (ZKZD202301).