Funding: Co-supported by the Natural Science Basic Research Program of Shaanxi, China (No. 2023-JC-QN-0043) and the ND Basic Research Funds, China (No. G2022WD).
Abstract: The aerial deployment method enables Unmanned Aerial Vehicles (UAVs) to be positioned directly at the altitude required for their mission. This method typically employs folding technology to improve loading efficiency, with applications such as the gravity-only aerial deployment of high-aspect-ratio solar-powered UAVs and the aerial takeoff of fixed-wing drones in Mars research. However, the significant morphological changes during deployment are accompanied by strongly nonlinear, dynamic aerodynamic forces, which give the motion many degrees of freedom and an unstable character. This hinders the description and analysis of the unknown dynamic behaviors and, in turn, complicates the design of deployment strategies and flight control. To address this issue, this paper proposes an analysis method for dynamic behaviors during aerial deployment based on the Variational Autoencoder (VAE). Focusing on the gravity-only deployment problem of high-aspect-ratio foldable-wing UAVs, the method encodes the multi-degree-of-freedom unstable motion signals into a low-dimensional feature space through a data-driven approach. By clustering in the feature space, the paper identifies and studies several dynamic behaviors that occur during aerial deployment. This research offers a new method and perspective for feature extraction and analysis of complex, difficult-to-describe extreme flight dynamics, guiding research on the design and control strategies of aerial deployment drones.
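As a rough illustration of the encode-then-cluster pipeline described in this abstract, the sketch below trains a small variational autoencoder on fixed-length motion windows and then runs k-means on the latent codes. The window length (6 DOF x 100 time steps), layer sizes, latent dimension, KL weight, and cluster count are all demonstration assumptions, not the paper's architecture.

```python
# Minimal sketch: encode multi-DOF motion windows with a VAE, cluster the latents.
# All sizes below are illustrative assumptions, not the authors' network.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class MotionVAE(nn.Module):
    def __init__(self, in_dim=6 * 100, latent_dim=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 256), nn.ReLU(),
                                 nn.Linear(256, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1e-3):
    rec = nn.functional.mse_loss(recon, x)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kld                                  # KL weight is an assumption

windows = torch.randn(512, 600)          # stand-in for flattened deployment-motion segments
model = MotionVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(50):                      # short demonstration training loop
    recon, mu, logvar = model(windows)
    loss = vae_loss(recon, windows, mu, logvar)
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    latents = model.mu(model.enc(windows)).numpy()
labels = KMeans(n_clusters=4, n_init=10).fit_predict(latents)  # candidate behaviour groups
print(labels[:10])
```

In practice the windows would be slices of the recorded deployment motion signals rather than random tensors, and the cluster labels would be inspected against the flight record to name the behaviors.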
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 62076117 and 62166026, and by the Jiangxi Provincial Key Laboratory of Virtual Reality under Grant No. 2024SSY03151.
Abstract: Dynamic sign language recognition holds significant importance, particularly with the application of deep learning to address its complexity. However, existing methods face several challenges. First, recognizing dynamic sign language requires identifying the keyframes that best represent the signs, and missing these keyframes reduces accuracy. Second, some methods do not focus enough on hand regions, which are small within the overall frame, leading to information loss. To address these challenges, we propose a novel Video Transformer Attention-based Network (VTAN) for dynamic sign language recognition. Our approach effectively prioritizes informative frames and hand regions. To tackle the first issue, we designed a keyframe extraction module enhanced by a convolutional autoencoder, which selects information-rich frames and eliminates redundant ones from the video sequences. For the second issue, we developed a soft attention-based transformer module that emphasizes extracting features from hand regions, ensuring that the network pays more attention to hand information within sequences. This dual-focus approach improves dynamic sign language recognition by addressing the key challenges of identifying critical frames and emphasizing hand regions. Experimental results on two public benchmark datasets demonstrate the effectiveness of our network, which outperforms most typical methods on sign language recognition tasks.
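The abstract does not specify the selection criterion used by the keyframe extraction module, so the sketch below shows one plausible variant: embed each frame with a small convolutional autoencoder and keep a frame only when its embedding moves far enough from the last kept frame. The architecture, the cosine-distance criterion, and the threshold are assumptions, not the VTAN module itself.

```python
# Hedged sketch of keyframe selection via autoencoder embeddings.
import torch
import torch.nn as nn

class FrameAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def select_keyframes(video, model, threshold=0.15):
    """video: (T, 3, H, W) tensor in [0, 1]; returns indices of retained frames."""
    with torch.no_grad():
        codes = model.enc(video).flatten(1)            # (T, D) frame embeddings
        codes = nn.functional.normalize(codes, dim=1)
    kept = [0]
    for t in range(1, video.shape[0]):
        # cosine distance to the last kept frame; keep only informative changes
        dist = 1.0 - float(codes[t] @ codes[kept[-1]])
        if dist > threshold:
            kept.append(t)
    return kept

video = torch.rand(40, 3, 64, 64)                      # synthetic clip for demonstration
print(select_keyframes(video, FrameAE()))
```

A trained autoencoder (rather than the randomly initialized one above) would be used in practice, so that the embeddings reflect content rather than noise.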
Funding: Supported by the National Natural Science Foundation of China (61103157) and the Beijing Municipal Education Commission Project (SQKM201311417010).
Abstract: A new method for finding key SURF (speeded-up robust features) features based on an adaptive Hessian matrix threshold is proposed and applied to an unmanned vehicle for dynamic object recognition and guided navigation. First, the object recognition algorithm based on SURF feature matching for unmanned vehicle guided navigation is introduced. Then, the standard local invariant feature extraction algorithm SURF is analyzed, the Hessian matrix is discussed in particular, and an adaptive Hessian threshold method is proposed that uses feedback of the correct matching point-pair count in a closed-loop framework. Finally, dynamic object recognition experiments under different weather and lighting conditions are discussed. The experimental results show that the key SURF feature extraction algorithm and the dynamic object recognition method can be used in unmanned vehicle systems.
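A minimal sketch of the closed-loop threshold idea, under the assumption that the feedback signal is the number of good matches surviving a Lowe ratio test: the Hessian threshold is lowered when too few correct pairs are found and raised when there are too many. SURF requires the non-free opencv-contrib-python build; the target band, step factors, and ratio value are illustrative choices, not the paper's tuning.

```python
# Hedged sketch: adapt the SURF Hessian threshold from the good-match count.
import cv2

def adaptive_surf_matches(img1, img2, target=(80, 200), init_thresh=400.0, iters=5):
    thresh, good = init_thresh, []
    bf = cv2.BFMatcher(cv2.NORM_L2)
    for _ in range(iters):
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=thresh)
        kp1, des1 = surf.detectAndCompute(img1, None)
        kp2, des2 = surf.detectAndCompute(img2, None)
        if des1 is None or des2 is None:        # too few keypoints at this threshold
            thresh *= 0.5
            continue
        pairs = bf.knnMatch(des1, des2, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < 0.7 * p[1].distance]
        if len(good) < target[0]:
            thresh *= 0.7                       # too few correct pairs: admit weaker features
        elif len(good) > target[1]:
            thresh *= 1.4                       # too many: keep only stronger features
        else:
            break                               # match count is inside the desired band
    return thresh, good

# usage on grayscale frames:
# thresh, matches = adaptive_surf_matches(cv2.imread("a.png", 0), cv2.imread("b.png", 0))
```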
Funding: Supported by the National Postdoctoral Researcher Program of China (No. GZC20231451), the National Natural Science Foundation of China (Nos. 61890922 and 62203263), and the Shandong Province Natural Science Foundation (Nos. ZR2020ZD40 and ZR2022QF062).
Abstract: In this paper, a learning and recognition approach is proposed for univariate time series composed of output measurements of general nonlinear dynamical systems. First, a class of dynamical systems in canonical form is derived to describe the univariate time series by introducing a coordinate transformation. An observer-based deterministic learning technique is then adopted to achieve dynamical modeling of the associated transformed systems of the training univariate time series, and the modeling results, in the form of radial basis function network (RBFN) models, are stored in a pattern library. Subsequently, multiple observer-based dynamical estimators containing the RBFN models in the pattern library are constructed for a test univariate time series, and a recognition decision scheme is proposed via the derived recognition indicator. On this basis, more concise recognition conditions are provided, which is beneficial for verifying the recognition results. Finally, simulation studies on the Rössler system and aero-engine stall warning verify the effectiveness of the proposed approach.
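The observer-based deterministic learning scheme involves state observers and convergence guarantees that a short snippet cannot capture; the sketch below shows only the RBFN ingredient in a simplified form: fit one Gaussian RBF network per training series by least squares over delayed coordinates, then recognize a test series by the smallest one-step prediction residual. The centers, widths, delay embedding, and residual statistic are assumptions.

```python
# Simplified RBFN pattern library and residual-based recognition (illustrative only).
import numpy as np

def rbf_features(x, centers, width):
    # x: (N, d) delayed coordinates; centers: (M, d); returns (N, M) Gaussian activations
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbfn(series, centers, width):
    """Model one-step dynamics y[k+1] = f(y[k], y[k-1]) of a univariate series."""
    x = np.stack([series[1:-1], series[:-2]], axis=1)
    y = series[2:]
    w, *_ = np.linalg.lstsq(rbf_features(x, centers, width), y, rcond=None)
    return w

def residual(series, w, centers, width):
    x = np.stack([series[1:-1], series[:-2]], axis=1)
    y = series[2:]
    return float(np.mean((rbf_features(x, centers, width) @ w - y) ** 2))

rng = np.random.default_rng(0)
centers = rng.uniform(-2, 2, size=(60, 2))
train = {name: np.sin(np.linspace(0, 20, 400) * f) for name, f in [("slow", 1.0), ("fast", 3.0)]}
library = {name: fit_rbfn(s, centers, 0.5) for name, s in train.items()}   # pattern library

test = np.sin(np.linspace(0, 20, 400) * 3.0) + 0.01 * rng.standard_normal(400)
print(min(library, key=lambda n: residual(test, library[n], centers, 0.5)))  # expected: "fast"
```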
Funding: This work was supported, in part, by the National Natural Science Foundation of China under grant number 62272236; in part, by the Natural Science Foundation of Jiangsu Province under grant numbers BK20201136 and BK20191401; and in part, by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) fund.
Abstract: Gesture recognition technology enables machines to read human gestures and has significant application prospects in human-computer interaction and sign language translation. Existing studies usually use convolutional neural networks to extract features directly from raw gesture data, but the networks are affected by considerable interference in the input data and thus fit some unimportant features. In this paper, we propose a novel method for encoding spatio-temporal information that enhances the key features required for gesture recognition, such as the shape, structure, contour, position, and hand motion of gestures, thereby improving recognition accuracy. The encoding method can encode an arbitrary number of frames of gesture data into a single-frame spatio-temporal feature map, which is then used as the input to the neural network. This guides the model to fit important features while avoiding complex recurrent network structures for extracting temporal features. In addition, we designed two sub-networks and trained the model using a sub-network pre-training strategy that trains the sub-networks first and then the entire network, so that the sub-networks neither focus too much on the information of a single category of feature nor are overly influenced by each other's features. Experimental results on two public gesture datasets show that the proposed spatio-temporal information encoding method achieves advanced accuracy.
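The paper's exact encoding is not described in this abstract, so the following stand-in uses a classical motion-history-style map purely to illustrate how many frames can be collapsed into a single frame that still carries shape, position, and motion cues; it is not the proposed encoding.

```python
# Illustrative many-frames-to-one-map encoding (motion-history style), not the paper's method.
import numpy as np

def motion_history_map(frames, motion_thresh=15):
    """frames: (T, H, W) uint8 grayscale gesture frames -> (H, W) float map in [0, 1]."""
    t_len = len(frames)
    mhi = np.zeros(frames.shape[1:], dtype=np.float32)
    for t in range(1, t_len):
        moving = np.abs(frames[t].astype(np.int16) - frames[t - 1].astype(np.int16)) > motion_thresh
        mhi[moving] = t / (t_len - 1)          # newer motion overwrites older with brighter values
    return mhi

frames = (np.random.rand(16, 64, 64) * 255).astype(np.uint8)   # synthetic 16-frame clip
single_map = motion_history_map(frames)
print(single_map.shape, float(single_map.max()))               # one frame summarising the clip
```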
Abstract: In today's information age, video data, as an important carrier of information, is growing explosively in production volume. Quickly and accurately extracting useful information from massive video data has become a research focus in the field of computer vision. AI dynamic recognition technology has become one of the key technologies for addressing this issue owing to its powerful data processing capabilities and intelligent recognition functions. On this basis, this paper first reviews the development of intelligent video AI dynamic recognition technology, then proposes several optimization strategies for it, and finally analyzes its performance for reference.
Abstract: Hand gestures can be the most intuitive human-machine interaction medium. Early approaches to hand gesture recognition used device-based methods that rely on mechanical or optical sensors attached to a glove or markers, which hinder natural human-machine communication. Vision-based methods, on the other hand, are less restrictive and allow more spontaneous communication without an intermediary between human and machine. Vision-based gesture recognition has therefore been a popular research area for the past thirty years. Hand gesture recognition finds application in many areas, particularly the automotive industry, where advanced automotive human-machine interface (HMI) designers are using gesture recognition to improve driver and vehicle safety. However, technology advances go beyond active/passive safety into convenience and comfort. In this context, one of America's big three automakers has partnered with the Centre of Pattern Analysis and Machine Intelligence (CPAMI) at the University of Waterloo to investigate expanding its product segment through machine learning, providing increased driver convenience and comfort with the particular application of hand gesture recognition for autonomous car parking. The present paper leverages state-of-the-art deep learning and optimization techniques to develop a vision-based multiview dynamic hand gesture recognizer for a self-parking system. We propose a 3D-CNN gesture model architecture that we train on a publicly available hand gesture database. We apply transfer learning to fine-tune the pre-trained gesture model on custom-made data, which significantly improves the proposed system's performance in a real-world environment. We adapt the architecture of the end-to-end solution to expand the state-of-the-art video classifier from a single-image input (fed by a monocular camera) to a multiview 360-degree feed provided by a six-camera module. Finally, we optimize the proposed solution to run on a resource-limited embedded platform (Nvidia Jetson TX2) used by automakers for vehicle-based features, without sacrificing the accuracy, robustness, or real-time functionality of the system.
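As a self-contained illustration of a clip-level 3D-CNN classifier of the kind this work builds on, the sketch below maps an RGB clip of shape (batch, 3, frames, height, width) to gesture logits. The layer widths, clip length, and class count are placeholders; the paper's actual model, transfer-learning recipe, and six-camera multiview fusion are not reproduced here.

```python
# Minimal 3D-CNN sketch for clip-level gesture classification (illustrative sizes).
import torch
import torch.nn as nn

class TinyGesture3DCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                     # pool space only in the first stage
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1))
        self.head = nn.Linear(64, num_classes)

    def forward(self, clips):                            # clips: (B, 3, T, H, W)
        return self.head(self.features(clips).flatten(1))

model = TinyGesture3DCNN()
logits = model(torch.randn(2, 3, 16, 112, 112))          # two 16-frame RGB clips
print(logits.shape)                                      # torch.Size([2, 10])
```

Transfer learning would replace the random initialization with weights pre-trained on a large gesture database before fine-tuning on the custom data.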
Funding: Financial support from the National Natural Science Foundation of China (22422810, 22090062, 22278288) and the Natural Science Foundation of Shanxi Province (202203021223004).
Abstract: The efficient separation of C3H6 and C3H8 is a key challenge in the petrochemical industry. A zinc-based flexible metal-organic framework (Zn-anthracenedicarboxylic acid (ADC)-triazole (TRZ)) was designed through dual-ligand construction. The material forms a two-dimensional layered structure via the TRZ ligands, with the ADC ligands serving as interlayer pillars to build a three-dimensional pillar-layered structure, combining the stability of rigid aromatic rings with the dynamic responsiveness of flexible structures. The flexible pores of Zn-ADC-TRZ can be reversibly opened and closed under thermal stimulus, and the adsorption capacity and the opening pressure of the gas can be adjusted by raising the temperature, enabling the best separation effect under different gas partial pressures. Specifically, temperature modulation increases the opening pressure of Zn-ADC-TRZ, producing a significant adsorption difference between C3H6 and C3H8. At 313 K and 50 kPa, Zn-ADC-TRZ achieves the highest C3H6/C3H8 adsorption ratio (24) while maintaining a substantial C3H6 adsorption capacity, thereby facilitating the efficient separation of equimolar gas mixtures. This work demonstrates the potential of temperature-responsive flexible metal-organic frameworks for energy-efficient olefin purification, offering new insights into low-energy-consumption separation technology.
Abstract: With the rapid advancement of virtual reality, dynamic gesture recognition has become an indispensable and critical technique for human-computer interaction in virtual environments. Recognizing dynamic gestures is challenging because of the high degrees of freedom involved, individual differences, and changes in gesture space. To address the low recognition accuracy of existing networks, an improved dynamic gesture recognition algorithm based on the ResNeXt architecture is proposed. The algorithm employs three-dimensional convolution to effectively capture the spatiotemporal features intrinsic to dynamic gestures. Additionally, to sharpen the model's focus and improve its accuracy in identifying dynamic gestures, a lightweight convolutional attention mechanism is introduced; this mechanism not only increases the model's precision but also speeds up convergence during training. To further optimize performance, a deep attention submodule is added to the convolutional attention module to strengthen the network's temporal feature extraction. Empirical evaluations on the EgoGesture and NvGesture datasets show that the proposed model reaches dynamic gesture recognition accuracies of 95.03% and 86.21%, respectively; when operating in RGB mode, the accuracies are 93.49% and 80.22%, respectively. These results underscore the effectiveness of the proposed algorithm in recognizing dynamic gestures with high accuracy and showcase its potential for advanced human-computer interaction systems.
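The abstract names a lightweight convolutional attention mechanism without detailing it; the sketch below shows a generic channel-plus-spatial attention block for 3-D feature maps (in the spirit of CBAM) as one way such a module can be inserted into a ResNeXt-style 3-D backbone. The reduction ratio and kernel size are assumptions, not the paper's exact module.

```python
# Illustrative channel + spatial attention for 3-D feature maps (CBAM-like sketch).
import torch
import torch.nn as nn

class ChannelSpatialAttention3D(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv3d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                                   # x: (B, C, T, H, W)
        b, c = x.shape[:2]
        avg = x.mean(dim=(2, 3, 4))                         # (B, C) global average descriptor
        mx = x.amax(dim=(2, 3, 4))                          # (B, C) global max descriptor
        ch = torch.sigmoid(self.mlp(avg) + self.mlp(mx)).view(b, c, 1, 1, 1)
        x = x * ch                                          # channel re-weighting
        sp = torch.sigmoid(self.spatial(
            torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sp                                       # spatial re-weighting

feat = torch.randn(2, 64, 8, 14, 14)                        # a mid-network feature map
print(ChannelSpatialAttention3D(64)(feat).shape)            # shape is unchanged
```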
Funding: Supported by the National Key Research and Development Program of China (No. 2021YFA1401103) and the National Natural Science Foundation of China (Nos. 61825403, 61921005, and 82370520).
Abstract: Gesture recognition with flexible strain sensors is a highly valuable technology widely applied in human-machine interfaces. However, rapid detection of subtle motions and timely processing of dynamic signals remain challenging for such sensors. Here, highly resilient and durable ionogels are developed by introducing micro-scale incompatible phases into a macroscopically homogeneous polymeric network. The compatible network disperses in the conductive ionic liquid to form a highly resilient and stretchable skeleton, while the incompatible phase forms hydrogen bonds that dissipate energy and thus strengthen the ionogels. The ionogel-derived strain sensors show high sensitivity, fast response time (<10 ms), a low detection limit (~50 μm), and remarkable durability (>5000 cycles), allowing precise monitoring of human motions. More importantly, a self-adaptive recognition program empowered by deep-learning algorithms is designed to compensate for the sensors, creating a comprehensive system capable of dynamic gesture recognition. The system analyzes both the temporal and spatial features of the sensor data, enabling a deeper understanding of the dynamic processes underlying gestures. It accurately classifies 10 hand gestures across five participants with an impressive accuracy of 93.66%. Moreover, it maintains robust recognition performance without further training even when different sensors or subjects are involved. This technological breakthrough paves the way for intuitive and seamless interaction between humans and machines, presenting significant opportunities in diverse applications such as human-robot interaction, virtual reality control, and assistive devices for individuals with disabilities.
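As an illustrative stand-in for the deep-learning recognition program, the sketch below classifies windows of multi-channel strain-sensor readings with a small 1-D temporal CNN. The sensor count, window length, gesture count, and architecture are assumptions rather than the system described in the paper.

```python
# Illustrative temporal CNN over multi-channel strain-sensor windows.
import torch
import torch.nn as nn

class SensorGestureNet(nn.Module):
    def __init__(self, sensors=5, num_gestures=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(sensors, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))
        self.head = nn.Linear(64, num_gestures)

    def forward(self, x):                        # x: (B, sensors, time_steps)
        return self.head(self.net(x).squeeze(-1))

window = torch.randn(8, 5, 200)                  # 8 windows, 5 sensor channels, 200 samples
print(SensorGestureNet()(window).shape)          # torch.Size([8, 10])
```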
Funding: Supported by the National Key Research and Development Program (No. 2020YFD1100204) and the Provincial Key Basic Research Project (No. 2019AB002).
Abstract: Timely and accurate acquisition of crop distribution and planting area information is important for agricultural planning and management decisions. This study employed aerial imagery as the data source and machine learning as the classification tool to statically and dynamically identify crops over an agricultural cropping area. Pixel-based and object-based classifications were compared, and the classification results were further refined using three types of object features (layer spectral, geometry, and texture). Static recognition using layer spectral features reached its highest accuracy, 75.4%, with object-based classification, and dynamic recognition reached its highest accuracy, 88.0%, with object-based classification using layer spectral and geometry features. Dynamic identification not only attenuated the effects of variations in planting dates and plant growth conditions on the results, but also amplified the differences between features. Object-based classification produced better results than pixel-based classification, and the three feature sets (layer spectral alone; layer spectral and geometry; and all three) yielded only small differences in accuracy for object-based classification. Dynamic recognition combined with object-based classification using layer spectral and geometry features can effectively improve crop classification accuracy with high-resolution aerial imagery. The methodologies and results from this study should provide practical guidance for crop identification and other agricultural mapping applications.
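The study describes "machine learning" as the classification tool without naming a specific algorithm here, so the sketch below shows the object-based classification step with a random forest over synthetic per-object feature vectors (spectral statistics plus geometry). The feature columns and the classifier choice are illustrative assumptions; the segmentation and feature extraction that precede this step are not reproduced.

```python
# Illustrative object-based crop classification from per-object feature vectors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n_objects = 600
# columns: mean R, G, B, NIR reflectance plus object area and elongation (synthetic values)
X = rng.random((n_objects, 6))
y = rng.integers(0, 3, size=n_objects)                   # e.g. three crop classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("object-based accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

With real segmented objects, the feature matrix would hold the layer spectral and geometry statistics described in the abstract, and accuracy would be assessed against ground-truth crop labels rather than random ones.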