There are all kinds of unknown and known signals in the actual electromagnetic environment, which hinders the development of practical cognitive radio applications. However, most existing signal recognition models struggle to discover unknown signals while recognizing known ones. In this paper, a compact manifold mixup feature-based open-set recognition approach (OR-CMMF) is proposed to address this problem. First, the proposed approach uses the center loss to constrain decision boundaries, which yields compact latent signal feature representations and extends the low-confidence feature space. Second, the latent signal feature representations are used to construct synthetic representations that serve as substitutes for unknown signal categories; these constructed representations then occupy the extended low-confidence space. Finally, the approach applies a distillation loss to adjust the decision boundaries between known-category signals and the constructed unknown-category substitutes, so that unknown signals are discovered accurately. The OR-CMMF approach outperformed other state-of-the-art open-set recognition methods in overall recognition performance and running time, as demonstrated by simulation experiments on two public datasets, RML2016.10a and ORACLE.
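As a rough illustration of the manifold-mixup idea described above, the sketch below blends latent feature vectors from different known classes to fabricate stand-ins for the unknown category. The latent dimension, mixing-coefficient distribution, and feature source are assumptions for illustration only, not the paper's implementation.

```python
import numpy as np

def manifold_mixup_unknowns(latent_feats, labels, n_synthetic=128, alpha=2.0, seed=0):
    """Blend latent features from *different* known classes to synthesize
    substitute samples for the unknown category (a sketch, not OR-CMMF itself)."""
    rng = np.random.default_rng(seed)
    synthetic = []
    while len(synthetic) < n_synthetic:
        i, j = rng.integers(0, len(latent_feats), size=2)
        if labels[i] == labels[j]:
            continue  # mixing within a class would not leave the known manifold
        lam = rng.beta(alpha, alpha)  # mixing coefficient
        synthetic.append(lam * latent_feats[i] + (1.0 - lam) * latent_feats[j])
    return np.stack(synthetic)

# toy usage: 64-dimensional latent features for 3 known classes
feats = np.random.randn(300, 64)
labels = np.repeat(np.arange(3), 100)
unknown_substitutes = manifold_mixup_unknowns(feats, labels)
print(unknown_substitutes.shape)  # (128, 64)
```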
In traditional pattern classification, it is usually assumed that the object to be classified must lie in one of the given (known) classes of the training data set. However, in practice the training data set may not contain the class of some objects, which is considered an Open-Set Recognition (OSR) problem. In this paper, we propose a new progressive open-set recognition method with an adaptive probability threshold. Both the labeled training data and the test data (objects to be classified) are put into a common data set, and the k-Nearest Neighbors (k-NNs) of each object are sought in this common set. We can then estimate the probability that an object lies in the given classes. If the majority of an object's k-NNs come from the labeled training data, the object quite likely belongs to one of the given classes; the density of the object and its neighbors is also taken into account here. However, when most of the k-NNs come from the unlabeled test data, the class of the object is considered very uncertain because the class of test data is unknown, and the object cannot be classified in this step. Once all objects belonging to known classes with high probability are found, we re-calculate the probability that the remaining uncertain objects belong to known classes, based on the labeled training data and the objects marked with the estimated probability. This iteration stops when the probabilities of all objects belonging to known classes no longer change. Then, a modified Otsu's method is employed to adaptively seek the probability threshold for the final classification. If the probability of an object belonging to known classes is smaller than this threshold, it is assigned to the ignorant (unknown) class that is not included in the training data set; the other objects are committed to specific classes. The effectiveness of the proposed method has been validated in several experiments.
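The adaptive threshold step rests on Otsu's method applied to the one-dimensional distribution of class-membership probabilities. The snippet below is a plain (unmodified) Otsu implementation on such scores, shown only to make the thresholding idea concrete; the bin count and the toy score distribution are arbitrary choices, and the paper's modification of Otsu's method is not reproduced here.

```python
import numpy as np

def otsu_threshold(scores, n_bins=64):
    """Classic Otsu thresholding on 1-D probability scores: pick the cut
    that maximizes the between-class variance of the score histogram."""
    hist, edges = np.histogram(scores, bins=n_bins, range=(0.0, 1.0))
    prob = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    best_t, best_var = edges[0], -1.0
    for k in range(1, n_bins):
        w0, w1 = prob[:k].sum(), prob[k:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (prob[:k] * centers[:k]).sum() / w0
        mu1 = (prob[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, edges[k]
    return best_t

# toy usage: low scores get rejected as "unknown", high scores are kept as known
scores = np.concatenate([np.random.beta(2, 8, 200), np.random.beta(8, 2, 300)])
print(f"adaptive threshold ~ {otsu_threshold(scores):.2f}")
```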
The advancement of wearable sensing technologies demands multifunctional materials that integrate high sensitivity, environmental resilience, and intelligent signal processing. In this work, a flexible hydrophobic conductive yarn (FCB@SY) featuring a controllable microcrack structure is developed via a synergistic approach combining ultrasonic swelling and non-solvent induced phase separation (NIPS). By embedding a robust conductive network and engineering the microcrack morphology, the resulting sensor achieves an ultrahigh gauge factor (GF ≈ 12,670), an ultrabroad working range (0%-547%), a low detection limit (0.5%), rapid response/recovery times (140 ms/140 ms), and outstanding durability over 10,000 cycles. Furthermore, the hydrophobic surface endowed by the conductive coatings imparts exceptional chemical stability against acidic and alkaline environments, as well as reliable waterproof performance. This enables consistent functionality under harsh conditions, including underwater operation. Integrated with machine learning algorithms, the FCB@SY-based intelligent sensing system demonstrates dual-mode capabilities in human motion tracking and gesture recognition, offering significant potential for applications in wearable electronics, human-machine interfaces, and soft robotics.
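For readers unfamiliar with the gauge-factor figure quoted above, the snippet below applies the standard definition GF = (ΔR/R0)/ε to made-up resistance readings. The numbers are purely illustrative and are not measurements from the FCB@SY yarn.

```python
def gauge_factor(r0_ohm: float, r_ohm: float, strain: float) -> float:
    """Gauge factor GF = (ΔR / R0) / ε, the standard sensitivity metric
    for resistive strain sensors."""
    if strain == 0:
        raise ValueError("strain must be non-zero")
    return ((r_ohm - r0_ohm) / r0_ohm) / strain

# illustrative numbers only: resistance rising from 1 kΩ to 64.35 kΩ at 50% strain
print(gauge_factor(r0_ohm=1_000.0, r_ohm=64_350.0, strain=0.50))  # ~126.7
```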
Photoresponsive memristors (i.e., photomemristors) have recently been highly regarded for tackling the data-latency and energy-consumption challenges of conventional von Neumann architecture-based image recognition systems. However, their efficacy in recognizing low-contrast images is quite limited, and although preprocessing algorithms are usually employed to enhance such images, they naturally introduce delays that hinder real-time recognition in complex conditions. To address this challenge, here we present a self-driven polarization-sensitive ferroelectric photomemristor inspired by advanced biological systems. The proposed prototype device is engineered to extract image polarization information, enabling real-time and in-situ enhanced image recognition and classification. By combining the anisotropic optical response of the two-dimensional material ReSe₂ with the ferroelectric polarization of a single-crystalline diisopropylammonium bromide (DIPAB) thin film, tunable and self-driven polarized responsiveness with intelligence was achieved. With the remarkable optoelectronic synaptic characteristics of the fabricated device, a significant enhancement in recognition probability was demonstrated, averaging an impressive 85.9% for low-contrast scenarios, in contrast to the mere 47.5% exhibited by traditional photomemristors. This holds substantial implications for the detection and recognition of subtle information in diverse scenes such as autonomous driving, medical imaging, and astronomical observation.
An image processing and deep learning method for identifying different types of rock images was proposed. Preprocessing steps, such as rock image acquisition, gray scaling, Gaussian blurring, and feature dimensionality reduction, were conducted to extract useful feature information, and the rock images were recognized and classified using a TensorFlow-based convolutional neural network (CNN) and PyQt5. A rock image dataset was established and separated into training, validation, and test sets. The framework was subsequently compiled and trained. The categorization approach was evaluated using image data from the validation and test sets, and key metrics, such as accuracy, precision, and recall, were analyzed. Finally, the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image. The experimental results indicated that the method combining deep learning, a TensorFlow-based CNN, and PyQt5 to recognize and classify rock images achieves an accuracy of up to 98.8% and can be successfully utilized for rock image recognition. The system can be extended to geological exploration, mine engineering, and other rock and mineral resource development to recognize rock samples more efficiently and accurately. Moreover, it can be matched with an intelligent support design system to effectively improve the reliability and economy of the support scheme, and it can serve as a reference for the support design of other mining and underground space projects.
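Below is a minimal sketch of the kind of preprocessing pipeline described (grayscaling and Gaussian blurring before feeding a CNN), written with OpenCV. The file path, blur kernel size, and target resolution are placeholders; the abstract does not give the paper's exact parameters.

```python
import cv2
import numpy as np

def preprocess_rock_image(path: str, size=(128, 128)) -> np.ndarray:
    """Load an image, convert to grayscale, blur to suppress texture noise,
    and normalize to [0, 1] for CNN input (illustrative parameters only)."""
    img = cv2.imread(path)                        # BGR uint8
    if img is None:
        raise FileNotFoundError(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), sigmaX=0)
    resized = cv2.resize(blurred, size)
    return resized.astype(np.float32) / 255.0     # shape (H, W), ready to stack

# usage with a placeholder path:
# x = preprocess_rock_image("samples/granite_01.jpg")
```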
In the field of intelligent air combat, real-time and accurate recognition of within-visual-range (WVR) maneuver actions serves as the foundational cornerstone for constructing autonomous decision-making systems. However, existing methods face two major challenges: traditional feature engineering suffers from insufficient effective dimensionality in the feature space due to kinematic coupling, making it difficult to distinguish essential differences between maneuvers, while end-to-end deep learning models lack controllability in implicit feature learning and fail to model high-order, long-range temporal dependencies. This paper proposes a trajectory feature pre-extraction method based on a Long-range Masked Autoencoder (LMAE), incorporating three key innovations: (1) Random Fragment High-ratio Masking (RFH-Mask), which forces the model to learn long-range temporal correlations by masking 80% of the trajectory data while retaining continuous fragments; (2) a Kalman Filter-Guided Objective Function (KFG-OF), which integrates trajectory-continuity constraints to align the feature space with kinematic principles; and (3) a two-stage decoupled architecture, which enables efficient and controllable feature learning through unsupervised pre-training and frozen-feature transfer. Experimental results demonstrate that LMAE significantly improves the average recognition accuracy for 20-class maneuvers compared to traditional end-to-end models, while markedly accelerating convergence. The contributions of this work lie in introducing high-masking-rate autoencoders into low-information-density trajectory analysis, proposing a feature engineering framework with enhanced controllability and efficiency, and providing a novel technical pathway for intelligent air combat decision-making systems.
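To make the RFH-Mask idea concrete, here is a small sketch that masks roughly 80% of a trajectory while keeping the visible portion as a few contiguous fragments. The fragment count and lengths are illustrative guesses, since the abstract does not specify them, and this is not the paper's masking routine.

```python
import numpy as np

def random_fragment_mask(seq_len: int, keep_ratio: float = 0.2,
                         n_fragments: int = 3, seed: int = 0) -> np.ndarray:
    """Return a boolean mask where True marks *visible* time steps.
    Visible steps form `n_fragments` contiguous fragments covering roughly
    `keep_ratio` of the sequence, so about 80% of the trajectory is masked."""
    rng = np.random.default_rng(seed)
    visible = np.zeros(seq_len, dtype=bool)
    budget = max(n_fragments, int(round(seq_len * keep_ratio)))
    # split the visible budget into n_fragments positive lengths
    cuts = np.sort(rng.choice(np.arange(1, budget), size=n_fragments - 1, replace=False))
    lengths = np.diff(np.concatenate(([0], cuts, [budget])))
    for length in lengths:
        start = rng.integers(0, seq_len - length + 1)
        visible[start:start + length] = True
    return visible

mask = random_fragment_mask(seq_len=200)        # trajectory of 200 time steps
print(f"visible fraction ~ {mask.mean():.2f}")  # around 0.20 (fragments may overlap)
```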
Open-set recognition (OSR) is a realistic problem in wireless signal recognition: during the inference phase, unknown classes not seen in the training phase may appear. The method of intra-class splitting (ICS), which splits samples of known classes to imitate unknown classes, has achieved great performance. However, this approach relies too heavily on a predefined splitting ratio and may suffer severe performance degradation in new environments. In this paper, we train a multi-task learning (MTL) network based on the characteristics of wireless signals to improve performance in new scenes. In addition, we provide a dynamic method to decide the splitting ratio per class in order to obtain more precise outer samples. Specifically, we perturb a sample from the center of one class toward its adversarial direction, and the change point of the confidence scores during this process is used as the splitting threshold. We conduct several experiments on one wireless signal dataset collected in the 2.4 GHz ISM band with LimeSDR and one open modulation recognition dataset, and the analytical results demonstrate the effectiveness of the proposed method.
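One simple way to read the "change point of confidence scores" step is as the largest single drop along the perturbation path from the class center toward the adversarial direction. The sketch below uses that reading as a stand-in; the detection rule and the toy confidence curve are assumptions, not the paper's criterion.

```python
import numpy as np

def confidence_change_point(confidences: np.ndarray) -> int:
    """Given confidence scores sampled along a perturbation path
    (class center -> adversarial direction), return the step after the
    largest single drop, used here as a stand-in for the change point."""
    drops = np.diff(confidences)
    return int(np.argmin(drops)) + 1

# toy path: confidence stays high, then collapses once the sample leaves the class region
path = np.concatenate([np.linspace(0.97, 0.90, 8), np.linspace(0.45, 0.10, 6)])
k = confidence_change_point(path)
print(f"change point at step {k}, confidence {path[k]:.2f}")
```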
In high-intensity electromagnetic warfare, radar systems are persistently subjected to multi-jammer attacks, including potentially novel unknown jamming types that may emerge exclusively under wartime conditions. These jamming signals severely degrade radar detection performance. Precise recognition of unknown and compound jamming signals is therefore critical to enhancing the anti-jamming capability and overall reliability of radar systems. To address this challenge, this article proposes a novel open-set compound jamming cognition (OSCJC) method. The proposed method employs a detection-classification dual-network architecture, which not only overcomes the false-alarm and missed-detection issues of traditional closed-set recognition methods when dealing with unknown jamming, but also addresses the performance bottleneck of existing open-set recognition techniques that focus on single-jamming scenarios in compound jamming environments. To achieve unknown jamming detection, we first employ a consistency labeling strategy to train the detection network on diverse known jamming samples. This strategy enables the network to acquire highly generalizable jamming features, thereby accurately localizing candidate regions for individual jamming components within compound jamming. Subsequently, we introduce contrastive learning to optimize the classification network, significantly enhancing both intra-class clustering and inter-class separability in the jamming feature space. This not only improves the recognition accuracy of the classification network for known jamming types but also enhances its sensitivity to unknown jamming types. Simulations and experimental data are used to verify the effectiveness of the proposed OSCJC method. Compared with state-of-the-art open-set recognition methods, the proposed method demonstrates superior recognition accuracy and enhanced environmental adaptability.
Artificial intelligence, such as deep learning technology, has advanced the study of facial expression recognition, since facial expressions carry rich emotional information and are significant in many naturalistic situations. To pursue high facial expression recognition accuracy, deep learning network models are generally designed to be very deep, while their real-time performance is typically constrained. Starting from MobileNetV3, a lightweight model with good accuracy, a further study is conducted by adding a basic ResNet module to each of its existing modules and an SSH (Single Stage Headless Face Detector) context module to expand the model's receptive field. In this article, the enhanced model, named Res-MobileNetV3, alleviates the subpar real-time performance and compresses the size of large network models, processing information at a rate of up to 33 frames per second. Although the improved model is slightly inferior to the current state-of-the-art methods in accuracy on the publicly available facial expression datasets, it offers a good balance of accuracy, real-time performance, model size, and model complexity in practical applications.
In the era of artificial intelligence (AI), healthcare and medical sciences are inseparable from different AI technologies [1]. ChatGPT once shocked the medical field, but the latest AI model, DeepSeek, has recently taken the lead [2]. PubMed-indexed publications on DeepSeek are growing [3], but are limited to editorials and news articles. In this Letter, we explore the use of DeepSeek in early symptom recognition for stroke care. To the best of our knowledge, this is the first DeepSeek-related writing on stroke.
A two-stage deep learning algorithm for the detection and recognition of can-bottom spray codes and numbers is proposed to address the problems of small character areas and fast production-line speeds in can-bottom spray-code number recognition. In the code-number detection stage, a Differentiable Binarization Network is used as the backbone, combined with an Attention and Dilation Convolutions Path Aggregation Network feature-fusion structure to enhance detection. For text recognition, training the Scene Visual Text Recognition code-number recognition network end to end alleviates code recognition errors caused by image color distortion due to variations in lighting and background noise. In addition, model pruning and quantization are used to reduce the number of model parameters to meet deployment requirements in resource-constrained environments. A comparative experiment was conducted using a dataset of can-bottom spray-code numbers collected on site, and a transfer experiment was conducted using a dataset of packaging-box production dates. The experimental results show that the proposed algorithm can effectively locate the codes of cans at different positions on the roller conveyor and can accurately identify the code numbers at high production-line speeds. The Hmean of code-number detection is 97.32%, and the accuracy of code-number recognition is 98.21%, verifying that the proposed algorithm achieves high accuracy in code-number detection and recognition.
As an essential field of multimedia and computer vision, 3D shape recognition has attracted much research attention in recent years. Multiview-based approaches have demonstrated their superiority in generating effective 3D shape representations. Typical methods usually extract multiview global features and aggregate them to generate 3D shape descriptors. However, there are two disadvantages: first, mainstream methods ignore the comprehensive exploration of local information in each view; second, many approaches roughly aggregate multiview features by adding or concatenating them together, and the resulting loss of discriminative characteristics limits the representation effectiveness. To address these problems, a novel architecture named region-based joint attention network (RJAN) was proposed. Specifically, the authors first design a hierarchical local-information exploration module for view descriptor extraction. The region-to-region and channel-to-channel relationships at different granularities can be comprehensively explored and utilised to provide more discriminative characteristics for view feature learning. Subsequently, a novel relation-aware view aggregation module is designed to aggregate the multiview features for shape descriptor generation, considering the view-to-view relationships. Extensive experiments were conducted on three public databases: ModelNet40, ModelNet10, and ShapeNetCore55. RJAN achieves state-of-the-art performance in 3D shape classification and 3D shape retrieval, which demonstrates its effectiveness. The code has been released at https://github.com/slurrpp/RJAN.
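The following toy sketch shows one common way to realise relation-aware view aggregation: score each view against every other view and pool the view descriptors with the resulting attention weights. It is a generic attention-pooling illustration under assumed feature dimensions and view counts, not the RJAN module itself.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_aggregate(view_feats: np.ndarray) -> np.ndarray:
    """view_feats: (n_views, d) per-view descriptors. Views that agree with
    the other views receive larger weights in the pooled shape descriptor."""
    sim = view_feats @ view_feats.T           # (n_views, n_views) view-to-view relations
    np.fill_diagonal(sim, -np.inf)            # ignore self-similarity
    support = softmax(sim, axis=1).sum(axis=0)  # how much each view is attended to
    weights = softmax(support)                # normalise to a distribution over views
    return weights @ view_feats               # (d,) shape descriptor

views = np.random.randn(12, 256)              # e.g. 12 rendered views, 256-d features
descriptor = relation_aware_aggregate(views)
print(descriptor.shape)                       # (256,)
```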
Bird vocalizations are pivotal for ecological monitoring, providing insights into biodiversity and ecosystem health. Traditional recognition methods often neglect phase information, resulting in incomplete feature representation. In this paper, we introduce a novel approach to bird vocalization recognition (BVR) that integrates both amplitude and phase information, leading to enhanced species identification. We propose MHAResNet, a deep learning (DL) model that employs residual blocks and a multi-head attention mechanism to capture salient features from the logarithmic power (POW), instantaneous frequency (IF), and group delay (GD) extracted from bird vocalizations. Experiments on three bird vocalization datasets demonstrate our method's superior performance, achieving accuracy rates of 94%, 98.9%, and 87.1%, respectively. These results indicate that our approach provides a more effective representation of bird vocalizations, outperforming existing methods. This integration of phase information in BVR is innovative and significantly advances automatic bird monitoring technology, offering valuable tools for ecological research and conservation efforts.
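As a rough idea of how amplitude and phase features of the kind mentioned above can be pulled from audio, the sketch below computes a log-power spectrogram and an instantaneous-frequency estimate (time derivative of the unwrapped STFT phase) with SciPy. The window length, hop, sample rate, and toy input are arbitrary choices, and the paper's exact group-delay computation is not reproduced here.

```python
import numpy as np
from scipy.signal import stft

def pow_and_if_features(audio: np.ndarray, fs: int = 22050,
                        nperseg: int = 1024, hop: int = 256):
    """Return (log-power, instantaneous-frequency) spectrogram-like features."""
    f, t, Z = stft(audio, fs=fs, nperseg=nperseg, noverlap=nperseg - hop)
    log_power = np.log(np.abs(Z) ** 2 + 1e-10)
    phase = np.unwrap(np.angle(Z), axis=1)                        # unwrap along time
    inst_freq = np.diff(phase, axis=1) / (2 * np.pi * hop / fs)   # Hz per time step
    return log_power, inst_freq

# toy input: a 2-second chirp-like tone instead of a real bird call
fs = 22050
tt = np.arange(2 * fs) / fs
audio = np.sin(2 * np.pi * (2000 + 500 * tt) * tt)
pow_feat, if_feat = pow_and_if_features(audio, fs)
print(pow_feat.shape, if_feat.shape)
```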
In computer vision and artificial intelligence, automatic facial expression-based emotion identification has become a popular research and industry problem. Recent demonstrations and applications in several fields, including computer games, smart homes, expression analysis, gesture recognition, surveillance videos, depression therapy, patient monitoring, anxiety, and others, have brought attention to its significant academic and commercial importance. This study emphasizes research that has employed only facial images for facial expression recognition (FER), because facial expressions are a basic way in which people communicate meaning to each other. The immense achievement of deep learning has led to growing use of its many architectures to enhance efficiency. This review covers the use of preprocessing, augmentation techniques, and feature extraction for the temporal properties of successive frames by machine learning, deep learning, and hybrid methods. The following section gives a brief summary of publicly accessible assessment criteria and then compares them with benchmark results, the most trustworthy way to assess FER-related research statistically. This brief synopsis of the subject matter may be beneficial both for novices in the field of FER and for seasoned scholars seeking fruitful avenues for further investigation. The information conveys fundamental knowledge and provides a comprehensive understanding of the most recent state-of-the-art research.
Seal authentication is an important task for verifying the authenticity of stamped seals used in various domains to protect legal documents from tampering and counterfeiting. Stamped seal inspection is commonly audited manually to ensure document authenticity. However, manual assessment of seal images is tedious and labor-intensive due to human error, inconsistent placement, and incomplete seal impressions. Traditional image recognition systems are inadequate for identifying seal types accurately, necessitating a neural network-based method for seal image recognition. However, neural network-based classification algorithms, such as Residual Networks (ResNet) and Visual Geometry Group with 16 layers (VGG16), yield suboptimal recognition rates on stamp datasets. Additionally, fixed training-data categories make handling new categories a challenging task. This paper proposes a multi-stage seal recognition algorithm based on a Siamese network to overcome these limitations. Firstly, the seal image is pre-processed by an image rotation correction module based on the Histogram of Oriented Gradients (HOG). Secondly, the similarity between input seal image pairs is measured using a similarity comparison module based on the Siamese network. Finally, the results are compared with the pre-stored standard seal template images in the database to obtain the seal type. To evaluate the performance of the proposed method, we further create a new seal image dataset that contains two subsets with 210,000 valid labeled pairs in total. The proposed work has practical significance in industries where automatic seal authentication is essential, such as the legal, financial, and governmental sectors, where automatic seal recognition can enhance document security and streamline validation processes. Furthermore, the experimental results show that the proposed multi-stage method for seal image recognition outperforms state-of-the-art methods on the two established datasets.
Target occlusion poses a significant challenge in computer vision, particularly in agricultural applications, where occlusion of crops can obscure key features and impair a model's recognition performance. To address this challenge, a mushroom recognition method was proposed based on an erase module integrated into the EL-DenseNet model. EL-DenseNet, an extension of DenseNet, incorporates an erase attention module designed to enhance sensitivity to visible features. The erase module helps eliminate complex backgrounds and irrelevant information, allowing the mushroom body to be preserved and increasing recognition accuracy in cluttered environments. Considering the difficulty of distinguishing similar mushroom species, label smoothing regularization was employed to mitigate the mislabeling errors that commonly arise from human observers. This strategy converts hard labels into soft labels during training, reducing the model's over-reliance on noisy labels and improving its generalization ability. Experimental results showed that the proposed EL-DenseNet, when combined with transfer learning, achieved a recognition accuracy of 96.7% for mushrooms in occluded and complex backgrounds. Compared with the original DenseNet and other classic models, this approach demonstrated superior accuracy and robustness, providing a promising solution for intelligent mushroom recognition.
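The label smoothing step described above (hard one-hot labels turned into soft targets) follows the standard formulation y_soft = (1 - ε)·y_onehot + ε/K. The snippet below shows that formulation directly; the smoothing factor ε = 0.1 is chosen only for illustration, as the abstract does not state the value used.

```python
import numpy as np

def smooth_labels(labels: np.ndarray, n_classes: int, eps: float = 0.1) -> np.ndarray:
    """Convert integer class labels into smoothed one-hot targets:
    the true class gets 1 - eps + eps/K, every other class gets eps/K."""
    onehot = np.eye(n_classes)[labels]
    return onehot * (1.0 - eps) + eps / n_classes

targets = smooth_labels(np.array([0, 2, 1]), n_classes=3)
print(targets)
# [[0.9333 0.0333 0.0333]
#  [0.0333 0.0333 0.9333]
#  [0.0333 0.9333 0.0333]]
```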
The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA, which is essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by Convolutional Long Short-Term Memory (ConvLSTM) to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluations and 96.45% for cross-view evaluations on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy declines significantly when the data are occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: a Joint Interpolation Module (JI Module), a Multi-scale Temporal Convolution Network (MS-TCN), and a Suppression Graph Convolutional Network (SGCN). The JI Module completes spatially occluded skeletal joints using K-Nearest Neighbors (KNN) interpolation. The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait, compensating for temporal occlusion of gait information. The SGCN extracts more non-prominent human gait features by suppressing the extraction of key body-part features, thereby reducing the negative impact of occlusion on emotion recognition results. The proposed method is evaluated on two comprehensive datasets: Emotion-Gait, containing 4227 real gaits from sources such as BML, ICT-Pollick, and ELMD plus 1000 synthetic gaits generated with STEP-Gen technology, and ELMB, consisting of 3924 gaits, of which 1835 are labeled with emotions such as "Happy," "Sad," "Angry," and "Neutral." On the standard Emotion-Gait and ELMB datasets, the proposed method achieved accuracies of 0.900 and 0.896, respectively, attaining performance comparable to other state-of-the-art methods. Furthermore, on the occlusion datasets, the proposed method significantly mitigates the performance degradation caused by occlusion, with accuracy clearly higher than that of other methods.
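A small sketch of KNN-style completion of occluded joints, in the spirit of the JI Module described above: a missing joint in one frame is filled with the average of that joint's positions in the k temporally nearest frames where it was observed. This is one plausible reading of the abstract, not the paper's exact formulation, and the joint count, frame count, and occlusion rate below are placeholders.

```python
import numpy as np

def knn_fill_joints(skeleton: np.ndarray, visible: np.ndarray, k: int = 3) -> np.ndarray:
    """skeleton: (T, J, 3) joint coordinates; visible: (T, J) boolean mask.
    Fill each occluded joint with the mean of its k nearest *visible* frames."""
    filled = skeleton.copy()
    T, J, _ = skeleton.shape
    for j in range(J):
        seen = np.flatnonzero(visible[:, j])
        if seen.size == 0:
            continue                                   # joint never observed; leave as-is
        for t in np.flatnonzero(~visible[:, j]):
            nearest = seen[np.argsort(np.abs(seen - t))[:k]]
            filled[t, j] = skeleton[nearest, j].mean(axis=0)
    return filled

# toy example: 50 frames, 17 joints, about 20% of joints randomly occluded
rng = np.random.default_rng(0)
skel = rng.normal(size=(50, 17, 3))
vis = rng.random((50, 17)) > 0.2
print(knn_fill_joints(skel, vis).shape)                # (50, 17, 3)
```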
Micro-expression (ME) recognition is a complex task that requires advanced techniques to extract informative features from facial expressions. Numerous deep neural networks (DNNs) with convolutional structures have been proposed. However, shallow convolutional neural networks often outperform deeper models in mitigating overfitting, particularly on small datasets. Still, many of these methods rely on a single feature for recognition, resulting in an insufficient ability to extract highly effective features. To address this limitation, this paper introduces an Improved Dual-stream Shallow Convolutional Neural Network based on an Extreme Gradient Boosting algorithm (IDSSCNN-XgBoost) for ME recognition. The proposed method uses a dual-stream architecture in which motion vectors (temporal features) are extracted using TV-L1 optical flow and subtle changes (spatial features) are amplified via Eulerian Video Magnification (EVM). These features are processed by the IDSSCNN, with an attention mechanism applied to refine the extracted features. The outputs are then fused, concatenated, and classified using the XgBoost algorithm. This comprehensive approach significantly improves recognition accuracy by leveraging the strengths of both temporal and spatial information, supported by the robust classification power of XgBoost. The proposed method is evaluated on three publicly available ME databases: the Chinese Academy of Sciences Micro-expression Database (CASME II), the Spontaneous Micro-Expression Database (SMIC-HS), and Spontaneous Actions and Micro-Movements (SAMM). Experimental results indicate that the proposed model achieves outstanding results compared to recent models: accuracies of 79.01%, 69.22%, and 68.99% on CASME II, SMIC-HS, and SAMM, and F1-scores of 75.47%, 68.91%, and 63.84%, respectively. The proposed method also has the advantage of operational efficiency and lower computational time.
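The final fuse-then-boost step described above can be illustrated in a few lines: concatenate the two streams' feature vectors and hand them to an XGBoost classifier. In this sketch the "streams" are random projections standing in for the IDSSCNN branches, and all data, labels, and hyperparameters are placeholders rather than values from the paper.

```python
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# placeholder feature extractors standing in for the optical-flow and
# motion-magnified IDSSCNN streams (random projections, illustration only)
W_t = rng.normal(size=(256, 64))
W_s = rng.normal(size=(256, 64))
def temporal_stream(x): return x @ W_t
def spatial_stream(x):  return x @ W_s

X_raw = rng.normal(size=(300, 256))      # stand-in for per-clip inputs
y = rng.integers(0, 3, size=300)         # 3 expression classes, toy labels

fused = np.concatenate([temporal_stream(X_raw), spatial_stream(X_raw)], axis=1)
clf = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
clf.fit(fused, y)
print(clf.score(fused, y))               # training accuracy on the toy data
```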
Pointer instruments are widely used in the nuclear power industry. To address the low accuracy and slow detection speed of pointer meter reading recognition across varying instrument types and distances, this paper proposes a recognition method based on YOLOv8 and DeepLabv3+. To improve the quality of the image input to the DeepLabv3+ model, the YOLOv8 detector is used to quickly locate the instrument region and crop it as the input image for recognition. To enhance the accuracy and speed of pointer recognition, the backbone network of DeepLabv3+ was replaced with MobileNetv3, and an ECA+ module was designed to replace its SE module, reducing the number of model parameters while improving recognition precision. The decoder's fourfold upsampling was replaced with two twofold upsamplings, and shallow feature maps were fused with encoder features of the corresponding size. The CBAM module was introduced to improve the segmentation accuracy of the pointer. Experiments were conducted on a self-made dataset of pointer-style instruments from nuclear power plants. The results showed that this method achieved a recognition accuracy of 94.5% at a precision level of 2.5, with an average error of 1.522% and an average total processing time of 0.56 seconds, demonstrating strong performance.
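Once the pointer has been segmented, the reading is typically obtained by mapping the pointer angle linearly onto the dial scale. The sketch below shows only that last step under assumed start/end angles and scale range; these values are placeholders, not parameters from the paper.

```python
import numpy as np

def pointer_reading(pointer_angle_deg: float,
                    angle_min_deg: float = 225.0,   # dial angle at the scale minimum (assumed)
                    angle_max_deg: float = -45.0,   # dial angle at the scale maximum (assumed)
                    value_min: float = 0.0,
                    value_max: float = 1.6) -> float:
    """Linearly map a segmented pointer's angle onto the instrument scale.
    Angles are measured counter-clockwise from the positive x-axis."""
    span = angle_min_deg - angle_max_deg            # total sweep of the dial, e.g. 270 degrees
    frac = (angle_min_deg - pointer_angle_deg) / span
    frac = float(np.clip(frac, 0.0, 1.0))
    return value_min + frac * (value_max - value_min)

# pointer segmented at 90 degrees (straight up) on an assumed 0-1.6 MPa, 270-degree dial
print(f"{pointer_reading(90.0):.3f} MPa")           # 0.800 MPa, mid-scale
```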
Funding (OR-CMMF open-set signal recognition paper): fully supported by the National Natural Science Foundation of China (61871422), the Natural Science Foundation of Sichuan Province (2023NSFSC1422), and the Central Universities funds of Southwest Minzu University (ZYN2022032).
Funding (progressive open-set recognition paper): supported by the National Natural Science Foundation of China (No. U20B2067).
Funding (FCB@SY conductive yarn paper): financially supported by the National Natural Science Foundation of China (No. 52373093), the Excellent Youth Fund of the Natural Science Foundation of Henan Province (No. 242300421062), the Central Plains Youth Top-notch Talent Program of Henan Province, and the 111 Project (No. D18023).
Funding (ferroelectric photomemristor paper): supported by the National Key Research and Development Program of China for International Cooperation under Grant 2023YFE0117100 and the National Natural Science Foundation of China (Nos. 62074040 and 62074045).
Funding (rock image recognition paper): financially supported by the National Science and Technology Major Project "Deep Earth Probe and Mineral Resources Exploration" (No. 2024ZD1003701) and the National Key R&D Program of China (No. 2022YFC2905004).
Funding (Res-MobileNetV3 facial expression paper): supported by China Academy of Railway Sciences Corporation Limited (No. 2021YJ127).
Funding (RJAN 3D shape recognition paper): the National Key Research and Development Program of China, Grant/Award Number 2020YFB1711704, and the National Natural Science Foundation of China, Grant/Award Number 62272337.
Funding (bird vocalization recognition paper): supported by the Beijing Natural Science Foundation (5252014) and the National Natural Science Foundation of China (62303063).
Funding (seal authentication paper): the National Natural Science Foundation of China (Grant No. 62172132), the Public Welfare Technology Research Project of Zhejiang Province (Grant No. LGF21F020014), and the Opening Project of the Key Laboratory of Public Security Information Application Based on Big-Data Architecture, Ministry of Public Security, Zhejiang Police College (Grant No. 2021DSJSYS002).
文摘Seal authentication is an important task for verifying the authenticity of stamped seals used in various domains to protect legal documents from tampering and counterfeiting.Stamped seal inspection is commonly audited manually to ensure document authenticity.However,manual assessment of seal images is tedious and laborintensive due to human errors,inconsistent placement,and completeness of the seal.Traditional image recognition systems are inadequate enough to identify seal types accurately,necessitating a neural network-based method for seal image recognition.However,neural network-based classification algorithms,such as Residual Networks(ResNet)andVisualGeometryGroup with 16 layers(VGG16)yield suboptimal recognition rates on stamp datasets.Additionally,the fixed training data categories make handling new categories to be a challenging task.This paper proposes amulti-stage seal recognition algorithmbased on Siamese network to overcome these limitations.Firstly,the seal image is pre-processed by applying an image rotation correction module based on Histogram of Oriented Gradients(HOG).Secondly,the similarity between input seal image pairs is measured by utilizing a similarity comparison module based on the Siamese network.Finally,we compare the results with the pre-stored standard seal template images in the database to obtain the seal type.To evaluate the performance of the proposed method,we further create a new seal image dataset that contains two subsets with 210,000 valid labeled pairs in total.The proposed work has a practical significance in industries where automatic seal authentication is essential as in legal,financial,and governmental sectors,where automatic seal recognition can enhance document security and streamline validation processes.Furthermore,the experimental results show that the proposed multi-stage method for seal image recognition outperforms state-of-the-art methods on the two established datasets.
Abstract: Target occlusion poses a significant challenge in computer vision, particularly in agricultural applications, where occlusion of crops can obscure key features and impair a model's recognition performance. To address this challenge, a mushroom recognition method was proposed based on an erase module integrated into the EL-DenseNet model. EL-DenseNet, an extension of DenseNet, incorporated an erase attention module designed to enhance sensitivity to visible features. The erase module helped eliminate complex backgrounds and irrelevant information, preserving the mushroom body and increasing recognition accuracy in cluttered environments. Considering the difficulty of distinguishing similar mushroom species, label smoothing regularization was employed to mitigate mislabeling errors that commonly arise from human annotators. This strategy converted hard labels into soft labels during training, reducing the model's over-reliance on noisy labels and improving its generalization ability. Experimental results showed that the proposed EL-DenseNet, when combined with transfer learning, achieved a recognition accuracy of 96.7% for mushrooms in occluded and complex backgrounds. Compared with the original DenseNet and other classic models, this approach demonstrated superior accuracy and robustness, providing a promising solution for intelligent mushroom recognition.
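The label-smoothing idea mentioned above can be sketched in a few lines; the following Python snippet converts hard labels into soft distributions and computes the corresponding cross-entropy, with the smoothing factor chosen purely for illustration.

```python
# Minimal sketch of label smoothing: hard one-hot labels are softened so the
# network is less confident about possibly noisy labels. eps is an assumption.
import torch
import torch.nn.functional as F

def smooth_labels(targets, num_classes, eps=0.1):
    """Convert integer class labels to soft label distributions."""
    one_hot = F.one_hot(targets, num_classes).float()
    return one_hot * (1.0 - eps) + eps / num_classes

def smoothed_cross_entropy(logits, targets, eps=0.1):
    """Cross-entropy against the smoothed label distribution."""
    log_probs = F.log_softmax(logits, dim=1)
    soft = smooth_labels(targets, logits.size(1), eps)
    return -(soft * log_probs).sum(dim=1).mean()

# Usage with dummy data: 4 samples, 10 mushroom classes.
logits = torch.randn(4, 10)
targets = torch.tensor([0, 3, 7, 2])
loss = smoothed_cross_entropy(logits, targets)
```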
Funding: funded by the ICT Division of the Ministry of Posts, Telecommunications, and Information Technology of Bangladesh under Grant Number 56.00.0000.052.33.005.21-7 (Tracking No. 22FS15306), with support from the University of Rajshahi.
Abstract: The Internet of Things (IoT) and mobile technology have significantly transformed healthcare by enabling real-time monitoring and diagnosis of patients. Recognizing Medical-Related Human Activities (MRHA) is pivotal for healthcare systems, particularly for identifying actions critical to patient well-being. However, challenges such as high computational demands, low accuracy, and limited adaptability persist in Human Motion Recognition (HMR). While some studies have integrated HMR with IoT for real-time healthcare applications, limited research has focused on recognizing MRHA as essential for effective patient monitoring. This study proposes a novel HMR method tailored for MRHA detection, leveraging multi-stage deep learning techniques integrated with IoT. The approach employs EfficientNet to extract optimized spatial features from skeleton frame sequences using seven Mobile Inverted Bottleneck Convolution (MBConv) blocks, followed by a Convolutional Long Short-Term Memory (ConvLSTM) network to capture spatio-temporal patterns. A classification module with global average pooling, a fully connected layer, and a dropout layer generates the final predictions. The model is evaluated on the NTU RGB+D 120 and HMDB51 datasets, focusing on MRHA such as sneezing, falling, walking, and sitting. It achieves 94.85% accuracy for cross-subject evaluation and 96.45% for cross-view evaluation on NTU RGB+D 120, along with 89.22% accuracy on HMDB51. Additionally, the system integrates IoT capabilities using a Raspberry Pi and a GSM module, delivering real-time alerts via Twilio's SMS service to caregivers and patients. This scalable and efficient solution bridges the gap between HMR and IoT, advancing patient monitoring, improving healthcare outcomes, and reducing costs.
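To make the spatio-temporal pipeline above concrete, the following PyTorch sketch runs per-frame EfficientNet features through a recurrent layer and a pooling/fully-connected/dropout head; a standard LSTM stands in for the ConvLSTM, and all dimensions are assumptions rather than the published configuration.

```python
# Rough sketch of a frame-wise CNN + recurrent classifier. A plain LSTM is a
# simplified stand-in for ConvLSTM; class count and sizes are illustrative.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class ActivityClassifier(nn.Module):
    def __init__(self, num_classes=120, hidden=256):
        super().__init__()
        backbone = efficientnet_b0(weights=None)
        self.cnn = backbone.features            # MBConv feature extractor
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.rnn = nn.LSTM(1280, hidden, batch_first=True)
        self.dropout = nn.Dropout(0.5)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clips):                   # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        frames = clips.flatten(0, 1)             # (B*T, 3, H, W)
        feats = self.pool(self.cnn(frames)).flatten(1)   # (B*T, 1280)
        out, _ = self.rnn(feats.view(b, t, -1))  # (B, T, hidden)
        pooled = out.mean(dim=1)                 # temporal average pooling
        return self.fc(self.dropout(pooled))

# Usage with a dummy clip of 8 frames at 112x112 resolution.
model = ActivityClassifier(num_classes=120)
logits = model(torch.randn(2, 8, 3, 112, 112))
```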
Funding: supported by the National Natural Science Foundation of China (62272049, 62236006, 62172045) and the Key Projects of Beijing Union University (ZKZD202301).
Abstract: In recent years, gait-based emotion recognition has been widely applied in the field of computer vision. However, existing gait emotion recognition methods typically rely on complete human skeleton data, and their accuracy declines significantly when the data is occluded. To enhance the accuracy of gait emotion recognition under occlusion, this paper proposes a Multi-scale Suppression Graph Convolutional Network (MS-GCN). The MS-GCN consists of three main components: a Joint Interpolation Module (JI Module), a Multi-scale Temporal Convolution Network (MS-TCN), and a Suppression Graph Convolutional Network (SGCN). The JI Module completes spatially occluded skeletal joints using a K-Nearest Neighbors (KNN) interpolation method. The MS-TCN employs convolutional kernels of various sizes to comprehensively capture the emotional information embedded in the gait, compensating for temporal occlusion of gait information. The SGCN extracts more non-prominent human gait features by suppressing the extraction of key body-part features, thereby reducing the negative impact of occlusion on emotion recognition results. The proposed method is evaluated on two comprehensive datasets: Emotion-Gait, containing 4227 real gaits from sources such as BML, ICT-Pollick, and ELMD plus 1000 synthetic gaits generated using STEP-Gen technology, and ELMB, consisting of 3924 gaits, 1835 of which are labeled with emotions such as "Happy," "Sad," "Angry," and "Neutral." On the standard Emotion-Gait and ELMB datasets, the proposed method achieved accuracies of 0.900 and 0.896, respectively, comparable to other state-of-the-art methods. Furthermore, on the occluded datasets, the proposed method significantly mitigates the performance degradation caused by occlusion, with accuracy markedly higher than that of other methods.
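The KNN-based joint interpolation described for the JI Module can be illustrated with a minimal NumPy sketch; since an occluded joint has no coordinates of its own, this simplified version picks its k nearest visible joints by skeleton index rather than spatial distance, which is an assumption for illustration only.

```python
# Minimal sketch of KNN-style completion of occluded skeletal joints.
# Joint count, k, and the index-based neighbor rule are illustrative assumptions.
import numpy as np

def knn_interpolate_frame(joints, k=3):
    """joints: (J, 2) array of joint coordinates; occluded joints marked with NaN."""
    filled = joints.copy()
    visible = ~np.isnan(joints).any(axis=1)
    visible_idx = np.where(visible)[0]
    for j in np.where(~visible)[0]:
        # Pick the k visible joints closest to joint j in skeleton order,
        # then fill joint j with their mean position.
        order = np.argsort(np.abs(visible_idx - j))
        neighbors = visible_idx[order[:k]]
        filled[j] = joints[neighbors].mean(axis=0)
    return filled

# Usage: a 5-joint frame with one occluded joint.
frame = np.array([[0.0, 0.0], [1.0, 0.5], [np.nan, np.nan], [2.0, 1.0], [2.5, 1.5]])
print(knn_interpolate_frame(frame, k=2))
```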
Funding: supported by the Key Research and Development Program of Jiangsu Province under Grant BE2022059-3, by CTBC Bank through an Industry-Academia Cooperation Project, and by the Ministry of Science and Technology of Taiwan through Grants MOST-108-2218-E-002-055, MOST-109-2223-E-009-002-MY3, MOST-109-2218-E-009-025, and MOST-109-2218-E-002-015.
Abstract: Micro-expression (ME) recognition is a complex task that requires advanced techniques to extract informative features from facial expressions. Numerous deep neural networks (DNNs) with convolutional structures have been proposed. However, shallow convolutional neural networks often outperform deeper models in mitigating overfitting, particularly with small datasets. Still, many of these methods rely on a single feature for recognition, resulting in an insufficient ability to extract highly effective features. To address this limitation, this paper introduces an Improved Dual-stream Shallow Convolutional Neural Network based on an Extreme Gradient Boosting algorithm (IDSSCNN-XgBoost) for ME recognition. The proposed method uses a dual-stream architecture in which motion vectors (temporal features) are extracted using TV-L1 optical flow, and subtle changes (spatial features) are amplified via Eulerian Video Magnification (EVM). These features are processed by the IDSSCNN, with an attention mechanism applied to refine the extracted features. The outputs are then fused, concatenated, and classified using the XgBoost algorithm. This approach significantly improves recognition accuracy by leveraging the strengths of both temporal and spatial information, supported by the robust classification power of XgBoost. The proposed method is evaluated on three publicly available ME databases: the Chinese Academy of Sciences Micro-expression Database (CASME II), the Spontaneous Micro-expression Database (SMIC-HS), and Spontaneous Actions and Micro-Movements (SAMM). Experimental results indicate that the proposed model achieves outstanding results compared with recent models: accuracies of 79.01%, 69.22%, and 68.99% on CASME II, SMIC-HS, and SAMM, and F1-scores of 75.47%, 68.91%, and 63.84%, respectively. The proposed method also offers operational efficiency and reduced computational time.
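The final fusion-and-classification stage described above can be sketched as follows; the dual-stream CNN features are replaced with random placeholders, and the feature dimensions and XGBoost hyperparameters are illustrative assumptions.

```python
# Sketch of fusing temporal and spatial feature vectors and classifying them
# with XGBoost. Real features would come from the two CNN streams.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_samples, dim = 200, 128
temporal_feats = rng.normal(size=(n_samples, dim))   # stand-in for optical-flow stream
spatial_feats = rng.normal(size=(n_samples, dim))    # stand-in for EVM stream
fused = np.concatenate([temporal_feats, spatial_feats], axis=1)
labels = rng.integers(0, 3, size=n_samples)          # e.g., 3 micro-expression classes

clf = XGBClassifier(n_estimators=100, max_depth=4, learning_rate=0.1)
clf.fit(fused, labels)
preds = clf.predict(fused)
```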
Abstract: Pointer instruments are widely used in the nuclear power industry. To address the low accuracy and slow detection speed of pointer meter reading recognition under varying instrument types and distances, this paper proposes a recognition method based on YOLOv8 and DeepLabv3+. To improve the quality of the image input to the DeepLabv3+ model, the YOLOv8 detector is used to quickly locate the instrument region, which is then cropped as the input image for recognition. To enhance the accuracy and speed of pointer recognition, the backbone network of DeepLabv3+ was replaced with MobileNetv3, and an ECA+ module was designed to replace its SE module, reducing model parameters while improving recognition precision. The decoder's fourfold upsampling was replaced with two twofold upsamplings, and shallow feature maps were fused with encoder features of the corresponding size. The CBAM module was introduced to improve the segmentation accuracy of the pointer. Experiments were conducted using a self-made dataset of pointer-style instruments from nuclear power plants. Results showed that this method achieved a recognition accuracy of 94.5% at a precision level of 2.5, with an average error of 1.522% and an average total processing time of 0.56 seconds, demonstrating strong performance.
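The reading-computation step implied by this pipeline (turning the segmented pointer into a value) can be sketched in NumPy; the dial center, angle convention, and scale limits below are illustrative assumptions, not the paper's calibration procedure.

```python
# Simplified sketch: estimate the pointer angle from a binary segmentation mask
# and map it linearly onto the scale range. Geometry values are assumptions.
import numpy as np

def reading_from_mask(mask, center, angle_min, angle_max, val_min, val_max):
    """mask: binary (H, W) pointer segmentation; center: (row, col) of the dial."""
    ys, xs = np.nonzero(mask)
    # Treat the pointer pixel farthest from the dial center as the tip.
    dists = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    tip = (ys[dists.argmax()], xs[dists.argmax()])
    angle = np.degrees(np.arctan2(center[0] - tip[0], tip[1] - center[1]))
    # Linear mapping from the clockwise angle sweep to the scale value.
    frac = (angle_min - angle) / (angle_min - angle_max)
    return val_min + frac * (val_max - val_min)

# Usage with a toy 5x5 mask whose pointer tip sits up and to the right of center.
mask = np.zeros((5, 5), dtype=np.uint8)
mask[2, 2] = mask[1, 3] = mask[0, 4] = 1
print(reading_from_mask(mask, center=(2, 2), angle_min=225, angle_max=-45,
                        val_min=0.0, val_max=1.6))
```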