The advancement of wearable sensing technologies demands multifunctional materials that integrate high sensitivity, environmental resilience, and intelligent signal processing. In this work, a flexible hydrophobic conductive yarn (FCB@SY) featuring a controllable microcrack structure is developed via a synergistic approach combining ultrasonic swelling and non-solvent induced phase separation (NIPS). By embedding a robust conductive network and engineering the microcrack morphology, the resulting sensor achieves an ultrahigh gauge factor (GF ≈ 12,670), an ultrabroad working range (0%-547%), a low detection limit (0.5%), rapid response/recovery times (140 ms/140 ms), and outstanding durability over 10,000 cycles. Furthermore, the hydrophobic surface endowed by the conductive coatings imparts exceptional chemical stability against acidic and alkaline environments, as well as reliable waterproof performance. This enables consistent functionality under harsh conditions, including underwater operation. Integrated with machine learning algorithms, the FCB@SY-based intelligent sensing system demonstrates dual-mode capabilities in human motion tracking and gesture recognition, offering significant potential for applications in wearable electronics, human-machine interfaces, and soft robotics.
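For readers unfamiliar with the figure of merit, the gauge factor is the relative resistance change divided by the applied strain. The minimal sketch below recomputes a value of that order from hypothetical resistance readings; the numbers are illustrative, not measured data from the paper.

```python
# Gauge factor GF = (ΔR/R0) / ε for a strain sensor.
def gauge_factor(r0, r, strain):
    """Relative resistance change per unit strain."""
    return ((r - r0) / r0) / strain

# Hypothetical readings: a crack-opening event multiplies resistance ~6400x at 50% strain.
r0, r, strain = 1.0e3, 6.4e6, 0.50            # ohms, ohms, dimensionless strain
print(f"GF ≈ {gauge_factor(r0, r, strain):,.0f}")  # ≈ 12,798, the same order as the reported ~12,670
```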
Photoresponsive memristors (i.e., photomemristors) have recently been highly regarded as a way to tackle the data-latency and energy-consumption challenges of conventional von Neumann architecture-based image recognition systems. However, their efficacy in recognizing low-contrast images is quite limited, and while preprocessing algorithms are usually employed to enhance these images, they naturally introduce delays that hinder real-time recognition in complex conditions. To address this challenge, we present a self-driven, polarization-sensitive ferroelectric photomemristor inspired by advanced biological systems. The proposed prototype device is engineered to extract image polarization information, enabling real-time, in-situ enhanced image recognition and classification. By combining the anisotropic optical response of the two-dimensional material ReSe₂ with the ferroelectric polarization of a single-crystalline diisopropylammonium bromide (DIPAB) thin film, tunable and self-driven polarized responsiveness with intelligence was achieved. With the remarkable optoelectronic synaptic characteristics of the fabricated device, a significant enhancement in recognition probability was demonstrated, averaging an impressive 85.9% for low-contrast scenarios, in contrast to the mere 47.5% exhibited by traditional photomemristors. This holds substantial implications for detecting and recognizing subtle information in diverse scenes such as autonomous driving, medical imaging, and astronomical observation.
An image processing and deep learning method for identifying different types of rock images is proposed. Preprocessing, including rock image acquisition, grayscaling, Gaussian blurring, and feature dimensionality reduction, was conducted to extract useful feature information, and the rock images were recognized and classified using a TensorFlow-based convolutional neural network (CNN) and PyQt5. A rock image dataset was established and separated into training, validation, and test sets. The framework was subsequently compiled and trained. The classification approach was evaluated on image data from the validation and test sets, and key metrics such as accuracy, precision, and recall were analyzed. Finally, the classification model conducted a probabilistic analysis of the measured data to determine the equivalent lithological type for each image. The experimental results indicate that the method combining deep learning, a TensorFlow-based CNN, and PyQt5 recognizes and classifies rock images with an accuracy of up to 98.8% and can be successfully utilized for rock image recognition. The system can be extended to geological exploration, mine engineering, and other rock and mineral resource development to recognize rock samples more efficiently and accurately. Moreover, it can be matched with an intelligent support design system to effectively improve the reliability and economy of support schemes, and it can serve as a reference for the support design of other mining and underground space projects.
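As a rough illustration of the kind of TensorFlow-based CNN classifier the abstract describes, the sketch below builds a small Keras network for grayscale rock images; the layer sizes, input resolution, and class count are assumptions, not the authors' exact architecture.

```python
# A minimal Keras CNN for grayscale rock-image classification (illustrative only;
# Gaussian-blur preprocessing is assumed to happen upstream).
import tensorflow as tf

NUM_CLASSES = 7          # assumed number of lithology types
IMG_SIZE = (128, 128)    # assumed input resolution after preprocessing

model = tf.keras.Sequential([
    tf.keras.Input(shape=(*IMG_SIZE, 1)),                  # single grayscale channel
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # per-class probabilities
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```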
In the field of intelligent air combat, real-time and accurate recognition of within-visual-range (WVR) maneuver actions is the foundational cornerstone for constructing autonomous decision-making systems. However, existing methods face two major challenges: traditional feature engineering suffers from insufficient effective dimensionality in the feature space due to kinematic coupling, making it difficult to distinguish the essential differences between maneuvers, while end-to-end deep learning models lack controllability in implicit feature learning and fail to model high-order, long-range temporal dependencies. This paper proposes a trajectory feature pre-extraction method based on a Long-range Masked Autoencoder (LMAE), incorporating three key innovations: (1) Random Fragment High-ratio Masking (RFH-Mask), which forces the model to learn long-range temporal correlations by masking 80% of the trajectory data while retaining continuous fragments; (2) a Kalman Filter-Guided Objective Function (KFG-OF), which integrates trajectory continuity constraints to align the feature space with kinematic principles; and (3) a two-stage decoupled architecture, enabling efficient and controllable feature learning through unsupervised pre-training and frozen-feature transfer. Experimental results demonstrate that LMAE significantly improves the average recognition accuracy for 20-class maneuvers compared with traditional end-to-end models while significantly accelerating convergence. The contributions of this work lie in introducing high-masking-rate autoencoders into low-information-density trajectory analysis, proposing a feature engineering framework with enhanced controllability and efficiency, and providing a novel technical pathway for intelligent air combat decision-making systems.
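A minimal NumPy sketch of the RFH-Mask idea as stated above: mask about 80% of a trajectory while keeping the visible portion as contiguous fragments. The fragment count and sequence length are illustrative assumptions, not the paper's settings.

```python
# Random Fragment High-ratio Masking (sketch): True = masked timestep.
import numpy as np

def rfh_mask(seq_len, mask_ratio=0.8, n_fragments=4, rng=None):
    """Boolean mask keeping the visible ~(1 - mask_ratio) as contiguous fragments."""
    rng = rng or np.random.default_rng()
    n_visible = int(round(seq_len * (1 - mask_ratio)))
    frag_len = max(1, n_visible // n_fragments)
    mask = np.ones(seq_len, dtype=bool)
    starts = rng.choice(seq_len - frag_len + 1, size=n_fragments, replace=False)
    for s in starts:
        mask[s:s + frag_len] = False          # visible fragment
    return mask

mask = rfh_mask(seq_len=200)
print(f"masked fraction ≈ {mask.mean():.2f}")  # close to 0.8 (slightly higher if fragments overlap)
```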
In the era of artificial intelligence (AI), healthcare and the medical sciences are inseparable from AI technologies [1]. ChatGPT once shocked the medical field, but the latest AI model, DeepSeek, has recently taken the lead [2]. PubMed-indexed publications on DeepSeek are emerging [3] but remain limited to editorials and news articles. In this Letter, we explore the use of DeepSeek in early symptom recognition for stroke care. To the best of our knowledge, this is the first DeepSeek-related writing on stroke.
A two-stage deep learning algorithm for detecting and recognizing can-bottom spray codes is proposed to address the problems of small character areas and fast production line speeds in can-bottom spray code number recognition. In the code number detection stage, a Differentiable Binarization Network is used as the backbone, combined with an Attention and Dilation Convolutions Path Aggregation Network feature fusion structure to enhance detection performance. For text recognition, training the Scene Visual Text Recognition code number recognition network end-to-end alleviates recognition errors caused by image color distortion due to variations in lighting and background noise. In addition, model pruning and quantization are used to reduce the number of model parameters to meet deployment requirements in resource-constrained environments. A comparative experiment was conducted on a dataset of can-bottom spray code numbers collected on-site, and a transfer experiment was conducted on a dataset of packaging box production dates. The experimental results show that the proposed algorithm can effectively locate the codes of cans at different positions on the roller conveyor and can accurately identify the code numbers at high production line speeds. The Hmean of code number detection is 97.32%, and the accuracy of code number recognition is 98.21%, verifying that the proposed algorithm achieves high accuracy in code number detection and recognition.
Automated behavior monitoring of macaques offers transformative potential for advancing biomedical research and animal welfare. However, reliably identifying individual macaques in group environments remains a significant challenge. This study introduces ACE-YOLOX, a lightweight facial recognition model tailored for captive macaques. ACE-YOLOX incorporates Efficient Channel Attention (ECA), Complete Intersection over Union loss (CIoU), and Adaptive Spatial Feature Fusion (ASFF) into the YOLOX framework, enhancing prediction accuracy while reducing computational complexity. These integrated approaches enable effective multiscale feature extraction. Using a dataset comprising 179,400 labeled facial images from 1,196 macaques, ACE-YOLOX surpassed the performance of classical object detection models, demonstrating superior accuracy and real-time processing capabilities. An Android application was also developed to deploy ACE-YOLOX on smartphones, enabling on-device, real-time macaque recognition. Our experimental results highlight the potential of ACE-YOLOX as a non-invasive identification tool, offering an important foundation for future studies in macaque facial expression recognition, cognitive psychology, and social behavior.
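Of the three components named above, Efficient Channel Attention is the simplest to illustrate. The PyTorch sketch below is a generic ECA block with a fixed 1D-convolution kernel size, not the exact module integrated into ACE-YOLOX.

```python
# Generic Efficient Channel Attention (ECA) block: channel reweighting via a
# lightweight 1D convolution over globally pooled channel descriptors.
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, k_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                        # squeeze H, W
        self.conv = nn.Conv1d(1, 1, k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                                          # x: (B, C, H, W)
        w = self.pool(x)                                           # (B, C, 1, 1)
        w = self.conv(w.squeeze(-1).transpose(1, 2))               # 1D conv across channels
        w = self.sigmoid(w.transpose(1, 2).unsqueeze(-1))          # (B, C, 1, 1)
        return x * w                                               # channel-wise reweighting

feat = torch.randn(2, 64, 40, 40)
print(ECA()(feat).shape)                                           # torch.Size([2, 64, 40, 40])
```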
Pill image recognition is an important field in computer vision. It has become a vital technology in healthcare and pharmaceuticals due to the necessity for precise medication identification to prevent errors and ensure patient safety. This survey examines the current state of pill image recognition, focusing on advancements, methodologies, and the challenges that remain unresolved. It provides a comprehensive overview of traditional image processing-based, machine learning-based, deep learning-based, and hybrid methods, and explores the ongoing difficulties in the field. We summarize and classify the methods used in each article, compare the strengths and weaknesses of these four families of methods, and review benchmark datasets for pill image recognition. Additionally, we compare the performance of proposed methods on popular benchmark datasets. The survey draws on recent advancements, such as Transformer models, and cutting-edge technologies, such as Augmented Reality (AR), to discuss potential research directions and conclude the review. By offering a holistic perspective, this paper aims to serve as a valuable resource for researchers and practitioners striving to advance the field of pill image recognition.
The aerial deployment method enables Unmanned Aerial Vehicles (UAVs) to be positioned directly at the altitude required for their mission. This method typically employs folding technology to improve loading efficiency, with applications such as the gravity-only aerial deployment of high-aspect-ratio solar-powered UAVs and the aerial takeoff of fixed-wing drones in Mars research. However, the significant morphological changes during deployment are accompanied by strongly nonlinear, dynamic aerodynamic forces, which result in many degrees of freedom and unstable behavior. This hinders the description and analysis of unknown dynamic behaviors and further complicates the design of deployment strategies and flight control. To address this issue, this paper proposes an analysis method for dynamic behaviors during aerial deployment based on the Variational Autoencoder (VAE). Focusing on the gravity-only deployment problem of high-aspect-ratio foldable-wing UAVs, the method encodes the multi-degree-of-freedom unstable motion signals into a low-dimensional feature space through a data-driven approach. By clustering in the feature space, several dynamic behaviors during aerial deployment are identified and studied. This research offers a new method and perspective for feature extraction and analysis of complex, difficult-to-describe extreme flight dynamics, guiding research on the design and control strategies of aerially deployed drones.
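A schematic Python sketch of the described pipeline: a VAE encoder compresses multi-degree-of-freedom motion signals into a low-dimensional latent space, and clustering on the latent codes separates candidate dynamic behaviors. The dimensions, the untrained encoder, and the random input batch are stand-ins for illustration only.

```python
# VAE-encoder + clustering sketch for grouping deployment dynamics (illustrative).
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class Encoder(nn.Module):
    def __init__(self, in_dim=120, latent_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.mu = nn.Linear(64, latent_dim)
        self.logvar = nn.Linear(64, latent_dim)

    def forward(self, x):
        h = self.net(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return z, mu, logvar

# Hypothetical batch of flattened motion-signal windows; after training, clustering
# would normally run on the posterior means of real deployment trajectories.
signals = torch.randn(256, 120)
_, mu, _ = Encoder()(signals)
labels = KMeans(n_clusters=4, n_init=10).fit_predict(mu.detach().numpy())
print(labels[:10])
```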
Human activity recognition is a significant area of research in artificial intelligence for surveillance, healthcare, sports, and human-computer interaction applications. This article benchmarks the performance of a You Only Look Once version 11-based (YOLOv11-based) architecture for multi-class human activity recognition. The dataset consists of 14,186 images across 19 activity classes, from dynamic activities such as running and swimming to static activities such as sitting and sleeping. Preprocessing included resizing all images to 512×512 pixels, annotating them in YOLO's bounding box format, and applying data augmentation methods such as flipping, rotation, and cropping to enhance model generalization. The proposed model was trained for 100 epochs with adaptive learning rate methods and hyperparameter optimization, achieving a mAP@0.5 of 74.93% and a mAP@0.5-0.95 of 64.11%, outperforming previous versions of YOLO (v10, v9, and v8) and general-purpose architectures such as ResNet50 and EfficientNet. It exhibited improved precision and recall across all activity classes, with high precision values of 0.76 for running, 0.79 for swimming, 0.80 for sitting, and 0.81 for sleeping, and was tested for real-time deployment with an inference time of 8.9 ms per image, making it computationally light. YOLOv11's improvements are attributed to architectural advancements such as a more sophisticated feature extraction process, better attention modules, and an anchor-free detection mechanism. While YOLOv10 was extremely stable in static activity recognition, YOLOv9 performed well in dynamic environments but suffered from overfitting, and YOLOv8, while a decent baseline, failed to differentiate between overlapping static activities. The experimental results identify YOLOv11 as the most appropriate model, providing an ideal balance between accuracy, computational efficiency, and robustness for real-world deployment. Nevertheless, certain issues remain to be addressed, particularly in discriminating between visually similar activities and in relying on publicly available datasets. Future research will incorporate 3D data and multimodal sensor inputs, such as depth and motion information, to enhance recognition accuracy and generalizability to challenging real-world environments.
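To make the reported mAP@0.5 concrete, the sketch below shows the underlying IoU test: a predicted bounding box counts as a true positive only if its overlap with a same-class ground-truth box is at least 0.5. The box coordinates are hypothetical.

```python
# Intersection-over-Union (IoU) check behind the mAP@0.5 metric.
import numpy as np

def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

pred = np.array([60, 40, 220, 400])      # hypothetical "running" prediction
gt   = np.array([50, 50, 210, 410])      # hypothetical ground-truth box
print(f"IoU = {iou(pred, gt):.2f}, true positive at threshold 0.5: {iou(pred, gt) >= 0.5}")
```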
There are all kinds of unknown and known signals in the actual electromagnetic environment, which hinders the development of practical cognitive radio applications. However, most existing signal recognition models struggle to discover unknown signals while recognizing known ones. In this paper, a compact manifold mixup feature-based open-set recognition approach (OR-CMMF) is proposed to address this problem. First, the approach uses the center loss to constrain decision boundaries so that it obtains compact latent signal feature representations and extends the low-confidence feature space. Second, the latent signal feature representations are used to construct synthetic representations as substitutes for unknown categories of signals; these constructed representations occupy the extended low-confidence space. Finally, the approach applies a distillation loss to adjust the decision boundaries between known-category signals and the constructed unknown-category substitutes so that it accurately discovers unknown signals. The OR-CMMF approach outperformed other state-of-the-art open-set recognition methods in comprehensive recognition performance and running time, as demonstrated by simulation experiments on the two public datasets RML2016.10a and ORACLE.
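A compact PyTorch sketch of two ingredients named above: a center loss that compacts per-class latent features, and a mixup-style interpolation that manufactures substitute "unknown" representations. The feature dimension, class count, and random features are illustrative assumptions, not the paper's configuration.

```python
# Center loss plus feature-level mixup for open-set substitutes (illustrative sketch).
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # Pull each latent feature toward its class center to tighten decision regions.
        return ((features - self.centers[labels]) ** 2).sum(dim=1).mean()

feats = torch.randn(32, 128)                 # latent signal features (assumed 128-dim)
labels = torch.randint(0, 11, (32,))         # e.g., 11 known signal classes
loss = CenterLoss(num_classes=11, feat_dim=128)(feats, labels)

# Mixup-style substitutes for "unknown" signals: interpolate features from two
# halves of the batch (a simplified reading of the manifold-mixup idea).
lam = 0.5
synthetic_unknown = lam * feats[:16] + (1 - lam) * feats[16:]
print(loss.item(), synthetic_unknown.shape)
```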
Molecular recognition by bioreceptors and enzymes relies on orthogonal interactions with small molecules within their cavities. To date, Chinese scientists have developed three types of strategies for introducing active sites inside the cavity of macrocyclic arenes to better mimic the molecular recognition of bioreceptors and enzymes. This editorial aims to enlighten scientists in this field as they develop novel macrocycles for molecular recognition, supramolecular assembly, and applications.
To address the issue of low recognition accuracy for eight types of behaviors, including standing, walking, drinking, lying, eating, mounting, fighting, and limping, in complex multi-cow farm environments, a multi-target cow behavior recognition method based on an improved YOLOv11n algorithm is proposed. The detection capability for small targets in images was enhanced by incorporating a DASI module into the backbone network and an MDCR module into the neck network of YOLOv11. The improved YOLOv11 algorithm increased the mean average precision from 89.5% to 93%, with particularly notable improvements of 8.7% and 6.3% in the average precision for recognizing drinking and walking behaviors, respectively. These results demonstrate that the proposed method enhances the model's ability to recognize cow behaviors.
Video action recognition (VAR) aims to analyze dynamic behaviors in videos and achieve semantic understanding. VAR faces challenges such as temporal dynamics, action-scene coupling, and the complexity of human interactions. Existing methods can be categorized into motion-level, event-level, and story-level ones based on spatiotemporal granularity. However, single-modal approaches struggle to capture complex behavioral semantics and human factors. Therefore, in recent years, vision-language models (VLMs) have been introduced into this field, providing new research perspectives for VAR. In this paper, we systematically review spatiotemporal hierarchical methods in VAR and explore how the introduction of large models has advanced the field. Additionally, we propose the concept of “Factor” to identify and integrate key information from both visual and textual modalities, enhancing multimodal alignment. We also summarize various multimodal alignment methods and provide in-depth analysis and insights into future research directions.
Accessible communication based on sign language recognition (SLR) is key to emergency medical assistance for the hearing-impaired community. Balancing the capture of both local and global information in SLR for emergency medicine poses a significant challenge. To address this, we propose a novel approach based on the inter-learning of visual features between global and local information. Specifically, our method enhances the perception capabilities of the visual feature extractor by strategically leveraging the strengths of convolutional neural networks (CNNs), which are adept at capturing local features, and vision transformers, which perform well at perceiving global features. Furthermore, to mitigate the overfitting caused by the limited availability of sign language data for emergency medical applications, we introduce an enhanced short temporal module for data augmentation through additional subsequences. Experimental results on three publicly available sign language datasets demonstrate the efficacy of the proposed approach.
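A simplified PyTorch sketch of the general idea of combining a CNN branch for local cues with a transformer branch for global context; the patch embedding, fusion by concatenation, and all sizes are assumptions rather than the authors' inter-learning design.

```python
# Dual-branch local/global feature fusion for a single frame (illustrative sketch).
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    def __init__(self, dim=128, num_classes=50):
        super().__init__()
        self.cnn = nn.Sequential(                         # local feature branch
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.proj = nn.Conv2d(3, dim, 16, stride=16)      # patch embedding for the global branch
        self.transformer = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, x):                                 # x: (B, 3, 224, 224)
        local_feat = self.cnn(x).flatten(1)               # (B, dim)
        tokens = self.proj(x).flatten(2).transpose(1, 2)  # (B, 196, dim)
        global_feat = self.transformer(tokens).mean(dim=1)
        return self.head(torch.cat([local_feat, global_feat], dim=1))

print(LocalGlobalFusion()(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 50])
```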
Face recognition has emerged as one of the most prominent applications of image analysis and understanding, gaining considerable attention in recent years. This growing interest is driven by two key factors: its extensive applications in law enforcement and the commercial domain, and the rapid advancement of practical technologies. Despite significant advancements, modern recognition algorithms still struggle in real-world conditions such as varying lighting, occlusion, and diverse facial postures. In such scenarios, human perception is still well above the capabilities of present technology. Using a systematic mapping study, this paper presents an in-depth review of face detection and face recognition algorithms, providing a detailed survey of advancements made between 2015 and 2024. We analyze key methodologies, highlighting their strengths and restrictions in the application context. Additionally, we examine various datasets used for face detection and recognition, focusing on task-specific applications, size, diversity, and complexity. By analyzing these algorithms and datasets, this survey serves as a valuable resource for researchers, identifying the research gaps in face detection and recognition and outlining potential directions for future research.
Convolutional neural networks (CNNs) exhibit superior performance in image feature extraction, making them extensively used in traffic sign recognition. However, the design of existing traffic sign recognition algorithms often relies on expert knowledge to enhance the image feature extraction networks, necessitating image preprocessing and model parameter tuning, which increases the complexity of the model design process. This study introduces an evolutionary neural architecture search (ENAS) algorithm for the automatic design of neural network models tailored for traffic sign recognition. By integrating the construction parameters of the residual network (ResNet) into evolutionary algorithms (EAs), we automatically generate lightweight networks for traffic sign recognition, using blocks as the fundamental building units. Experimental evaluations on the German traffic sign recognition benchmark (GTSRB) dataset reveal that the algorithm attains a recognition accuracy of 99.32% with a mere 2.8×10⁶ parameters. Comparisons with other traffic sign recognition algorithms demonstrate that the method discovers neural network architectures more efficiently, significantly reducing the number of network parameters while maintaining recognition accuracy.
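A toy Python sketch of the evolutionary search loop described above: architectures are encoded as lists of (blocks per stage, channel width) genes and evolved by selection and mutation. The fitness function here is a placeholder; in the paper it would be validation accuracy on GTSRB traded against parameter count.

```python
# Toy evolutionary architecture search over ResNet-style block configurations.
import random

def random_genome():
    return [(random.randint(1, 4), random.choice([16, 32, 64, 128])) for _ in range(3)]

def mutate(genome):
    g = list(genome)
    i = random.randrange(len(g))
    g[i] = (random.randint(1, 4), random.choice([16, 32, 64, 128]))
    return g

def fitness(genome):
    # Placeholder objective: penalize a rough proxy for parameter count.
    params = sum(blocks * ch * ch for blocks, ch in genome)
    return -params

population = [random_genome() for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                  # keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]

print("best genome:", max(population, key=fitness))
```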
Artificial intelligence, such as deep learning technology, has advanced the study of facial expression recognition, since facial expressions carry rich emotional information and are significant in many naturalistic situations. To pursue high facial expression recognition accuracy, deep learning network models are generally designed to be very deep, while their real-time performance is typically constrained. Starting from MobileNetV3, a lightweight model with good accuracy, a further study is conducted by adding a basic ResNet module to each of its existing modules and an SSH (Single Stage Headless Face Detector) context module to expand the model's perceptual field. The enhanced model, named Res-MobileNetV3, alleviates the subpar real-time performance and compresses the size of large network models, processing information at a rate of up to 33 frames per second. Although the improved model is slightly inferior to current state-of-the-art methods in terms of accuracy on publicly available facial expression datasets, it offers a good balance among accuracy, real-time performance, model size, and model complexity in practical applications.
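The core modification, wrapping an existing module with an identity shortcut, can be sketched generically in PyTorch as below; the wrapped block is a stand-in convolution, not the actual MobileNetV3 bneck or SSH context module.

```python
# Generic residual wrapper: add an identity (or projected) shortcut around a block.
import torch
import torch.nn as nn

class ResidualWrapper(nn.Module):
    def __init__(self, block: nn.Module, in_ch: int, out_ch: int):
        super().__init__()
        self.block = block
        self.shortcut = (nn.Identity() if in_ch == out_ch
                         else nn.Conv2d(in_ch, out_ch, kernel_size=1))  # match channels

    def forward(self, x):
        return self.block(x) + self.shortcut(x)   # identity path eases optimization

block = nn.Sequential(nn.Conv2d(16, 24, 3, padding=1), nn.ReLU())  # stand-in block
wrapped = ResidualWrapper(block, in_ch=16, out_ch=24)
print(wrapped(torch.randn(1, 16, 56, 56)).shape)   # torch.Size([1, 24, 56, 56])
```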
Bird vocalizations are pivotal for ecological monitoring, providing insights into biodiversity and ecosystem health. Traditional recognition methods often neglect phase information, resulting in incomplete feature representation. In this paper, we introduce a novel approach to bird vocalization recognition (BVR) that integrates both amplitude and phase information, leading to enhanced species identification. We propose MHAResNet, a deep learning (DL) model that employs residual blocks and a multi-head attention mechanism to capture salient features from the logarithmic power (POW), instantaneous frequency (IF), and group delay (GD) extracted from bird vocalizations. Experiments on three bird vocalization datasets demonstrate our method's superior performance, achieving accuracy rates of 94%, 98.9%, and 87.1%, respectively. These results indicate that our approach provides a more effective representation of bird vocalizations, outperforming existing methods. This integration of phase information in BVR is innovative and significantly advances automatic bird monitoring technology, offering valuable tools for ecological research and conservation efforts.
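A NumPy/SciPy sketch of the phase-derived features named above: instantaneous frequency as the time derivative of the unwrapped STFT phase, and group delay as its frequency derivative, computed here on a synthetic chirp rather than a real bird call. The STFT parameters are illustrative.

```python
# POW, IF, and GD features from an STFT (illustrative sketch on a synthetic signal).
import numpy as np
from scipy.signal import stft

fs = 22050
t = np.arange(0, 1.0, 1 / fs)
call = np.sin(2 * np.pi * (2000 + 1500 * t) * t)          # synthetic rising "call"

_, _, Z = stft(call, fs=fs, nperseg=512, noverlap=384)
power = np.log(np.abs(Z) ** 2 + 1e-10)                    # POW: logarithmic power
phase = np.angle(Z)
inst_freq = np.diff(np.unwrap(phase, axis=1), axis=1)     # IF: phase change over time
group_delay = -np.diff(np.unwrap(phase, axis=0), axis=0)  # GD: phase change over frequency

print(power.shape, inst_freq.shape, group_delay.shape)
```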