Because the hydraulic directional valve usually works in a harsh environment and is disturbed by multi-factor noise, traditional single-sensor monitoring technology is difficult to use for an accurate diagnosis of its faults. Therefore, a fault diagnosis method based on multi-sensor information fusion is proposed in this paper to reduce the inaccuracy and uncertainty of traditional single-sensor diagnosis and to realize accurate monitoring for the location and diagnosis of early faults in such valves in noisy environments. Firstly, the statistical features of the signals collected by the multiple sensors are extracted, and depth features are obtained by a convolutional neural network (CNN) to form a complete and stable multi-dimensional feature set. Secondly, to obtain a weighted multi-dimensional feature set, the multi-dimensional feature sets of similar sensors are combined, and the entropy weight method is used to weight these features to reduce the interference of insensitive features. Finally, an attention mechanism is introduced to improve the dual-channel CNN, which adaptively fuses the weighted multi-dimensional feature sets of heterogeneous sensors and flexibly selects heterogeneous sensor information so as to achieve an accurate diagnosis. Experimental results show that the weighted multi-dimensional feature set obtained by the proposed method has a high fault-representation ability and low information redundancy. The method can simultaneously diagnose internal wear faults of the hydraulic directional valve and electromagnetic faults of actuators that are difficult to diagnose by traditional methods, and it achieves high fault-diagnosis accuracy under severe working conditions.
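The entropy weight method mentioned above can be sketched in a few lines: a feature whose values are nearly uniform across samples carries little discriminative information (high entropy) and is down-weighted, while a feature that varies sharply across samples is up-weighted. The sketch below is illustrative only; the toy feature matrix and normalization details are assumptions, not the paper's exact implementation.

```python
import numpy as np

def entropy_weights(features):
    """Entropy weight method: 'features' is an (n_samples, n_features)
    matrix of non-negative values. Columns whose values are spread
    unevenly across samples (low entropy) are judged more informative
    and receive larger weights."""
    X = np.asarray(features, dtype=float)
    # Normalize each column to a probability-like ratio.
    P = X / X.sum(axis=0, keepdims=True)
    n = X.shape[0]
    # Shannon entropy per column, scaled to [0, 1] by 1/ln(n); 0*log(0) := 0.
    logs = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(n)
    d = 1.0 - E                 # degree of divergence
    return d / d.sum()          # weights sum to 1

# Toy feature matrix: the second feature varies far more across samples,
# so it should receive the larger weight.
X = np.array([[1.0, 0.1],
              [1.1, 5.0],
              [0.9, 0.2],
              [1.0, 9.0]])
w = entropy_weights(X)
```

The resulting weight vector can then scale the feature columns before they enter the fusion network, suppressing insensitive features as described above.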
In the article “A Lightweight Approach for Skin Lesion Detection through Optimal Features Fusion” by Khadija Manzoor, Fiaz Majeed, Ansar Siddique, Talha Meraj, Hafiz Tayyab Rauf, Mohammed A. El-Meligy, Mohamed Sharaf, and Abd Elatty E. Abd Elgawad, Computers, Materials & Continua, 2022, Vol. 70, No. 1, pp. 1617–1630, DOI: 10.32604/cmc.2022.018621, URL: https://www.techscience.com/cmc/v70n1/44361, there was an error regarding the affiliation of the author Hafiz Tayyab Rauf. Instead of “Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, UK”, the affiliation should be “Independent Researcher, Bradford, BD80HS, UK”.
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which extract low-frequency and high-frequency information from the image, respectively. Because this extraction may leave some information uncaptured, a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
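The base/detail split described above is learned by encoders in the paper; for intuition, a classical stand-in separates an image into a low-frequency base layer (smoothing) and a high-frequency detail layer (the residual). The box filter and kernel size below are illustrative assumptions, not the paper's learned decomposition.

```python
import numpy as np

def base_detail_split(img, k=5):
    """Split an image into a low-frequency base layer (box-filter
    smoothing) and a high-frequency detail layer (the residual).
    A classical stand-in for the learned base/detail encoders."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    base = np.zeros_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    detail = img - base
    return base, detail

# A smooth gradient image: most energy should land in the base layer,
# and base + detail must reconstruct the input exactly.
img = np.outer(np.linspace(0.0, 1.0, 16), np.linspace(0.0, 1.0, 16))
base, detail = base_detail_split(img)
```

Note that the residual formulation makes the decomposition lossless by construction, which is why a learned version needs a compensation branch only when base and detail are produced by independent encoders rather than as a residual pair.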
Biometric characteristics have played a vital role in security for the last few years. Human gait classification in video sequences is an important biometric attribute used for security purposes. A new framework for human gait classification in video sequences is proposed, using deep learning (DL) fusion and posterior probability-based moth flame optimization (MFO). In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2. Both models are selected based on their top-5 accuracy and small number of parameters. Later, both models are trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability-based MFO feature selection algorithm to select the best features. The selected features are classified using several supervised learning methods. The publicly available CASIA-B dataset was employed for the experimental process. On this dataset, the authors selected six angles (0°, 18°, 90°, 108°, 162°, and 180°) and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. Compared with recent state-of-the-art techniques, the results demonstrate improved accuracy and significantly reduced computational time.
Image classification based on bag-of-words (BOW) has broad application prospects in the pattern recognition field, but shortcomings such as reliance on a single feature and low classification accuracy are apparent. To deal with this problem, this paper proposes to combine two ingredients: (i) three mutually complementary features are adopted to describe the images, namely the pyramid histogram of words (PHOW), pyramid histogram of color (PHOC), and pyramid histogram of orientated gradients (PHOG); (ii) an adaptive feature-weight-adjusted image categorization algorithm based on the SVM and decision-level fusion of multiple features is employed. Experiments carried out on the Caltech101 database confirm the validity of the proposed approach. The experimental results show that the classification accuracy of the proposed method is 7%-14% higher than that of traditional BOW methods. With full utilization of global, local, and spatial information, the algorithm describes the feature information of the image much more completely and flexibly through multi-feature fusion and the pyramid structure formed by spatial multi-resolution decomposition of the image. Significant improvements in classification accuracy are achieved as a result.
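Decision-level fusion, as used above, can be illustrated by combining per-feature classifier outputs with weights. The sketch below assumes each channel (PHOW, PHOC, PHOG) yields a class-probability vector; the fixed weights and toy scores are hypothetical, whereas the paper adjusts the weights adaptively.

```python
import numpy as np

def fuse_decisions(prob_list, weights):
    """Decision-level fusion: combine class-probability vectors from
    several per-feature classifiers by a weighted sum, then pick the
    argmax class."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalize so fused probs sum to 1
    fused = sum(w * np.asarray(p, dtype=float)
                for w, p in zip(weights, prob_list))
    return fused, int(np.argmax(fused))

# Hypothetical per-channel scores over three classes.
phow = [0.6, 0.3, 0.1]
phoc = [0.2, 0.5, 0.3]
phog = [0.3, 0.4, 0.3]
fused, label = fuse_decisions([phow, phoc, phog], weights=[0.5, 0.25, 0.25])
```

Here the PHOW channel's higher weight tips the fused decision toward class 0 even though two of the three channels individually prefer class 1, which is exactly the behavior an adaptive weighting scheme exploits.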
Human Action Recognition (HAR) has been an active research topic in machine learning for the last few decades. Visual surveillance, robotics, and pedestrian detection are the main applications of action recognition. Computer vision researchers have introduced many HAR techniques, but these still face challenges such as redundant features and the cost of computing. In this article, we propose a new deep learning method for HAR. In the proposed method, video frames are initially pre-processed using a global contrast approach and later used to train a deep learning model via domain transfer learning. The pre-trained ResNet-50 model is used as the deep learning model in this work. Features are extracted from two layers: the Global Average Pool (GAP) layer and the Fully Connected (FC) layer. The features of both layers are fused by Canonical Correlation Analysis (CCA). Features are then selected using a Shannon entropy-based threshold function. The selected features are finally passed to multiple classifiers for final classification. Experiments are conducted on five publicly available datasets: IXMAS, UCF Sports, YouTube, UT-Interaction, and KTH. The accuracies on these datasets were 89.6%, 99.7%, 100%, 96.7%, and 96.6%, respectively. Comparison with existing techniques shows that the proposed method provides improved accuracy for HAR, and it is also computationally fast in terms of execution time.
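CCA-based fusion, mentioned above, projects two feature sets onto directions of maximal cross-correlation before combining them. A minimal NumPy sketch follows (whiten each set via Cholesky factors, take the SVD of the whitened cross-covariance, concatenate the projections). The random feature matrices, regularization, and serial concatenation are assumptions for illustration, not the paper's exact CCA pipeline.

```python
import numpy as np

def cca_fuse(X, Y, dim):
    """Minimal CCA-style fusion: project two zero-mean feature sets onto
    their top canonical directions, then concatenate the projections."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]
    eps = 1e-8  # small ridge so the covariance factors are invertible
    Cxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # Whitening transforms from Cholesky factors.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    # SVD of the whitened cross-covariance yields canonical directions.
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    A = Wx @ U[:, :dim]
    B = Wy @ Vt.T[:, :dim]
    return np.hstack([Xc @ A, Yc @ B])   # serial (concatenated) fusion

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))   # stand-in for GAP-layer features
Y = rng.normal(size=(50, 4))   # stand-in for FC-layer features
fused = cca_fuse(X, Y, dim=3)
```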
An improved model based on you only look once version 8 (YOLOv8) is proposed to solve the problem of low detection accuracy caused by the diversity of object sizes in optical remote sensing images. Firstly, the feature pyramid network (FPN) structure of the original YOLOv8 model is replaced by the generalized FPN (GFPN) structure from GiraffeDet to realize "cross-layer" and "cross-scale" adaptive feature fusion, enriching the semantic and spatial information on the feature map and improving the target detection ability of the model. Secondly, a pyramid pooling module with multiple atrous spatial pyramid pooling (MASPP) is designed, using the ideas of atrous convolution and the feature pyramid structure, to extract multi-scale features and improve the model's handling of multi-scale objects. The experimental results show that the detection accuracy of the improved YOLOv8 model on the DIOR dataset is 92% and the mean average precision (mAP) is 87.9%, respectively 3.5% and 1.7% higher than those of the original model. This proves that the detection and classification ability of the proposed model on multi-dimensional optical remote sensing targets has been improved.
Solar cell defect detection is crucial for quality inspection in photovoltaic power generation modules. In the production process, defect samples occur infrequently and exhibit random shapes and sizes, which makes it challenging to collect defective samples. Additionally, the complex surface background of polysilicon cell wafers complicates the accurate identification and localization of defective regions. This paper proposes a novel Lightweight Multiscale Feature Fusion network (LMFF) to address these challenges. The network comprises a feature extraction network, a multi-scale feature fusion module (MFF), and a segmentation network. Specifically, the feature extraction network produces multi-scale feature outputs, and the MFF module fuses the multi-scale feature information effectively. To capture finer-grained multi-scale information from the fused features, we propose a multi-scale attention module (MSA) in the segmentation network to enhance the network's ability to detect small targets. Moreover, depthwise separable convolutions are introduced to construct depthwise separable residual blocks (DSR) that reduce the model's parameter count. Finally, to validate the proposed method's defect segmentation and localization performance, we constructed three solar cell defect detection datasets: SolarCells, SolarCells-S, and PVEL-S. SolarCells and SolarCells-S are monocrystalline silicon datasets, and PVEL-S is a polycrystalline silicon dataset. Experimental results show that the IoU of our method on these three datasets reaches 68.5%, 51.0%, and 92.7%, respectively, and the F1-score reaches 81.3%, 67.5%, and 96.2%, respectively, surpassing other commonly used methods and verifying the effectiveness of our LMFF network.
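The parameter savings from depthwise separable convolutions can be checked with simple arithmetic: a standard convolution costs C_in x C_out x k^2 weights, while the depthwise-plus-pointwise factorization costs C_in x k^2 + C_in x C_out. The layer sizes below are hypothetical examples, not taken from the LMFF network.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a standard k x k convolution (bias ignored)."""
    return c_in * c_out * k * k

def dsconv_params(c_in, c_out, k):
    """Depthwise separable convolution: one k x k depthwise filter per
    input channel, followed by a 1 x 1 pointwise convolution."""
    return c_in * k * k + c_in * c_out

# Example layer: 64 -> 128 channels with 3 x 3 kernels.
standard = conv_params(64, 128, 3)
separable = dsconv_params(64, 128, 3)
ratio = standard / separable
```

For this layer the factorization cuts the weight count from 73,728 to 8,768, roughly an 8x reduction, which is why it is a common choice for lightweight segmentation backbones.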
This paper presents a robust face recognition algorithm using transform domain-based multiple feature fusion and linear regression. Transform domain-based feature fusion can provide comprehensive face information for recognition and decrease the effect of variations in illumination and pose. The holistic feature and the local feature are extracted by the discrete cosine transform and the Gabor wavelet transform, respectively. The extracted holistic and local features are then fused by weighted sum. The fused feature values are finally sent to a linear regression classifier for recognition. The algorithm is evaluated on the AR, ORL, and Yale B face databases. Experimental results show that the proposed algorithm is more robust than single-feature-based algorithms under pose and expression variations.
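The weighted-sum fusion of a holistic and a local feature vector can be sketched directly: normalize each vector so neither scale dominates, then blend with a weight. The z-score normalization and the balance weight alpha below are illustrative assumptions; the abstract does not specify these exact choices.

```python
import numpy as np

def weighted_sum_fuse(f_holistic, f_local, alpha=0.5):
    """Fuse a holistic (e.g. DCT) and a local (e.g. Gabor) feature
    vector by weighted sum after z-score normalization. 'alpha' is a
    hypothetical balance weight between the two features."""
    def zscore(v):
        v = np.asarray(v, dtype=float)
        return (v - v.mean()) / (v.std() + 1e-12)
    return alpha * zscore(f_holistic) + (1 - alpha) * zscore(f_local)

# Toy vectors on very different scales: without normalization the
# local feature's magnitude would swamp the holistic one.
fused = weighted_sum_fuse([1.0, 2.0, 3.0], [10.0, 10.0, 40.0], alpha=0.6)
```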
Designing detection algorithms with high efficiency for Synthetic Aperture Radar (SAR) imagery is essential for an operational SAR Automatic Target Recognition (ATR) system. This work abandons the strategy of visiting every pixel in the SAR imagery, as done in many traditional detection algorithms, and introduces a gridding-and-fusion idea over different texture features to realize fast target detection. It first grids the original SAR imagery, yielding a set of grids to be classified into clutter grids and target grids, and then calculates the texture features in each grid. By fusing the calculation results, the target grids containing potential maneuvering targets are determined. A dual-threshold segmentation technique is then applied to the target grids to obtain the regions of interest. The fused texture features, including local statistical features and the Gray-Level Co-occurrence Matrix (GLCM), are investigated. The efficiency and superiority of the proposed algorithm were tested and verified by comparison with existing fast detection algorithms on real SAR data. The experimental results indicate the promising practical application value of this study.
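A Gray-Level Co-occurrence Matrix counts how often pairs of gray levels co-occur at a fixed displacement; texture statistics such as contrast are then read off the normalized matrix. A plain NumPy sketch follows, using a horizontal displacement and a small textbook-style example image (both choices are for illustration only).

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one displacement (dx, dy):
    M[i, j] counts how often gray level i has gray level j at offset
    (dy, dx)."""
    img = np.asarray(img)
    h, w = img.shape
    M = np.zeros((levels, levels), dtype=int)
    for y in range(h - dy):
        for x in range(w - dx):
            M[img[y, x], img[y + dy, x + dx]] += 1
    return M

def glcm_contrast(M):
    """A common GLCM texture feature: contrast = sum p(i,j) * (i-j)^2."""
    P = M / M.sum()
    i, j = np.indices(P.shape)
    return float((P * (i - j) ** 2).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
M = glcm(img, levels=4)
contrast = glcm_contrast(M)
```

In a gridding scheme like the one above, such statistics would be computed once per grid cell instead of per pixel, which is where the speed-up comes from.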
The rapid growth of multimedia content necessitates powerful technologies to filter, classify, index, and retrieve video documents more efficiently. However, the essential bottleneck of image and video analysis is the semantic gap: low-level features extracted by computers often fail to coincide with the high-level concepts interpreted by humans. In this paper, we present a generic scheme for detecting video semantic concepts based on machine learning over multiple visual features. Various global and local low-level visual features are systematically investigated, and a kernel-based learning method equips the concept detection system to explore the potential of these features. We then combine the different features and sub-systems with both classifier-level and kernel-level fusion, which contributes to a more robust system. The proposed system is tested on the TRECVID dataset. The resulting Mean Average Precision (MAP) score is much better than the benchmark performance, which proves that our concept detection engine develops a generic model and performs well on both object-type and scene-type concepts.
Real-time detection of driver fatigue status is of great significance for road traffic safety. In this paper, a novel driver fatigue detection method is proposed that can detect the driver's fatigue status around the clock. The driver's face images are captured by a camera with a colored lens and an infrared lens mounted above the dashboard. The landmarks of the driver's face are labeled and the eye area is segmented. By calculating the aspect ratios of the eyes, the duration of eye closure, the frequency of blinks, and PERCLOS for both the colored and infrared channels, fatigue can be detected. Based on the change in light intensity detected by a photosensitive device, the weight matrix of the colored and infrared features is adjusted adaptively to reduce the impact of lighting on fatigue detection. Video samples of the driver's face were recorded in the test vehicle. After training the classification model, the results showed that the method achieves high accuracy in driver fatigue detection in both daytime and nighttime.
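The eye-closure cues above are commonly computed from six eye landmarks via the eye aspect ratio (EAR), with PERCLOS as the fraction of frames judged closed. The landmark coordinates and the 0.2 closure threshold below are illustrative assumptions, not values from the paper.

```python
import math

def eye_aspect_ratio(eye):
    """Eye aspect ratio over six landmarks p1..p6: the two vertical
    distances divided by twice the horizontal distance. The ratio
    drops toward zero as the eye closes."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def perclos(ear_series, closed_thresh=0.2):
    """PERCLOS: fraction of frames in which the eye is judged closed
    (EAR below a hypothetical threshold)."""
    closed = sum(1 for e in ear_series if e < closed_thresh)
    return closed / len(ear_series)

# Hypothetical landmarks for an open eye (p1..p6, clockwise).
open_eye = [(0, 3), (2, 5), (4, 5), (6, 3), (4, 1), (2, 1)]
ear_open = eye_aspect_ratio(open_eye)
```

The same EAR series, computed separately on the colored and infrared channels, could then be blended with the adaptive lighting weights described above.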
Fabric retrieval is very challenging because problems such as viewpoint variations, illumination changes, blots, and poor image quality are usually encountered in fabric images. In this work, a novel deep feature nonlinear fusion network (DFNFN) is proposed to nonlinearly fuse features learned from RGB and texture images to improve fabric retrieval. The texture images are obtained by using local binary pattern (LBP) texture features to describe the RGB fabric images. The DFNFN first applies two feature learning branches to deal with the RGB images and the corresponding LBP-texture images simultaneously. Each branch contains the same convolutional neural network (CNN) architecture but learns its parameters independently. Then, a nonlinear fusion module (NFM) is designed to concatenate the features produced by the two branches and nonlinearly fuse the concatenated features via a convolutional layer followed by a rectified linear unit (ReLU). The NFM is flexible in that it can be embedded at different depths of the DFNFN to find the best fusion position. Consequently, the DFNFN can optimally fuse features learned from RGB and LBP-texture images to boost retrieval accuracy. Extensive experiments on the Fabric 1.0 dataset show that the proposed method is superior to many state-of-the-art methods.
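The LBP-texture images above are built from local binary pattern codes: each pixel's eight neighbours are thresholded against the centre pixel and read as an 8-bit code. A basic 3x3 sketch follows; the clockwise bit ordering is one common convention and may differ from the paper's LBP variant.

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 local binary pattern: threshold the 8 neighbours
    against the centre pixel and read them clockwise from the
    top-left corner as the bits of an 8-bit code."""
    c = patch[1, 1]
    # Clockwise neighbour order starting at the top-left corner.
    order = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    code = 0
    for bit, (i, j) in enumerate(order):
        if patch[i, j] >= c:
            code |= 1 << (7 - bit)
    return code

patch = np.array([[9, 1, 8],
                  [2, 5, 7],
                  [3, 6, 4]])
code = lbp_code(patch)
```

Applying this per pixel turns an RGB image's luminance into the LBP-texture image that feeds the second branch of a two-branch network like the DFNFN.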
Regular inspection of bridge cracks is crucial to bridge maintenance and repair. Traditional manual crack detection methods are time-consuming, dangerous, and subjective. At the same time, for existing mainstream vision-based automatic crack detection algorithms, it is challenging to detect fine cracks and to balance detection accuracy and speed. Therefore, this paper proposes a new bridge crack segmentation method based on a parallel attention mechanism and multi-scale feature fusion on top of the DeepLabV3+ network framework. First, the improved lightweight MobileNetV2 network and dilated separable convolutions are integrated into the original DeepLabV3+ network to improve the original Xception backbone and the atrous spatial pyramid pooling (ASPP) module, respectively, dramatically reducing the number of parameters in the network and accelerating the training and prediction speed of the model. Moreover, we introduce the parallel attention mechanism into the encoding and decoding stages, enhancing attention to the crack regions in both the channel and spatial dimensions and significantly suppressing the interference of various noises. Finally, we further improve the detection performance of the model for fine cracks by introducing a multi-scale feature fusion module. Our results are validated on a self-made dataset. The experiments show that our method is more accurate than other methods: its intersection over union (IoU) and F1-score (F1) are increased to 77.96% and 87.57%, respectively. In addition, the number of parameters is only 4.10 M, much smaller than that of the original network, and the frame rate is increased to 15 frames/s. The results prove that the proposed method meets the requirements of rapid and accurate detection of bridge cracks and is superior to other methods.
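The IoU and F1 scores reported for crack segmentation are pixel-level metrics over binary masks. A minimal sketch of how they are computed on a toy prediction/ground-truth pair:

```python
import numpy as np

def mask_iou_f1(pred, gt):
    """Pixel-level IoU and F1 for binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return float(iou), float(f1)

# Toy 3x3 masks: 2 true positives, 1 false positive, 1 false negative.
pred = [[1, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
gt   = [[1, 0, 0],
        [0, 1, 1],
        [0, 0, 0]]
iou, f1 = mask_iou_f1(pred, gt)
```

Note that F1 (the Dice score) is always at least as large as IoU for the same masks, consistent with the 87.57% F1 vs. 77.96% IoU pairing above.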
Collectively improving breast cancer image pattern recognition to an acceptable or desirable accuracy level remains challenging: despite combining multiple schemes to achieve superior ultrasound image pattern recognition by reducing speckle noise, an enhanced technique has not been achieved. The purpose of this study is to introduce a feature-based fusion scheme built on enhanced uniform local binary patterns (LBP) and filtered noise reduction. To surmount the above limitations and achieve the aim of the study, a new descriptor that enhances the LBP features based on a new threshold is proposed. This paper proposes a multi-level fusion scheme for the automatic classification of static ultrasound images of breast cancer, attained in two stages. First, several images were generated from a single image using the pre-processing method: the median and Wiener filters were utilized to lessen the speckle noise and enhance the ultrasound image texture. This strategy allowed the extraction of a powerful feature by reducing the overlap between the benign and malignant image classes. Second, the fusion mechanism allowed the production of diverse features from the different filtered images. The feasibility of using the LBP-based texture feature to categorize the ultrasound images was demonstrated. The effectiveness of the proposed scheme was tested on 250 ultrasound images comprising 100 benign and 150 malignant images. The proposed method achieved very high accuracy (98%), sensitivity (98%), and specificity (99%). As a result, the fusion process, which helps reach a powerful decision based on different features produced from different filtered images, improved the results of the new LBP descriptor in terms of accuracy, sensitivity, and specificity.
Feature fusion is an important technique in medical image classification that can improve diagnostic accuracy by integrating complementary information from multiple sources. Recently, deep learning (DL) has been widely used in pulmonary disease diagnosis, such as for pneumonia and tuberculosis. However, traditional feature fusion methods often suffer from feature disparity, information loss, redundancy, and increased complexity, hindering the further extension of DL algorithms. To solve this problem, we propose a Graph-Convolution Fusion Network with Self-Supervised Feature Alignment (Self-FAGCFN), which addresses the limitations of traditional feature fusion in deep learning-based medical image classification for respiratory diseases such as pneumonia and tuberculosis. The network integrates convolutional neural networks (CNNs) for robust feature extraction from two-dimensional grid structures and graph convolutional networks (GCNs) within a graph neural network branch to capture graph-structured features, focusing on significant node representations. Additionally, an attention-embedding ensemble block is included to capture critical features from the GCN outputs. To ensure effective feature alignment between the pre- and post-fusion stages, we introduce a feature alignment loss that minimizes disparities. Moreover, to address limitations such as inappropriate centroid discrepancies during feature alignment and class imbalance in the dataset, we develop a Feature-Centroid Fusion (FCF) strategy and a Multi-Level Feature-Centroid Update (MLFCU) algorithm, respectively. Extensive experiments on the public LungVision and Chest-Xray datasets demonstrate that the Self-FAGCFN model significantly outperforms existing methods in diagnosing pneumonia and tuberculosis, highlighting its potential for practical medical applications.
Globally, diabetic retinopathy (DR) is the primary cause of blindness, affecting millions of people worldwide. This widespread impact underscores the critical need for reliable and precise diagnostic techniques to ensure prompt diagnosis and effective treatment. Deep learning-based automated diagnosis of diabetic retinopathy can facilitate early detection and treatment. However, traditional deep learning models that focus on local views often learn feature representations that are less discriminative at the semantic level, while models that focus on global semantic-level information may overlook critical, subtle local pathological features. To address this issue, we propose an adaptive multi-scale feature fusion network (AMSFuse) that can adaptively combine multi-scale global and local features without compromising their individual representations. Specifically, our model incorporates global features for extracting high-level contextual information from retinal images. Concurrently, local features capture fine-grained details, such as microaneurysms, hemorrhages, and exudates, which are critical for DR diagnosis. These global and local features are adaptively fused using a fusion block, followed by an integrated attention mechanism (IAM) that refines the fused features by emphasizing relevant regions, thereby enhancing classification accuracy for DR. Our model achieves 86.3% accuracy on the APTOS dataset and 96.6% on the RFMiD dataset, both comparable to state-of-the-art methods.
To address the challenges encountered in visual navigation for asteroid landing with traditional point features, such as significant recognition and extraction errors, low computational efficiency, and limited navigation accuracy, a novel multi-type fusion visual navigation approach is proposed. This method aims to overcome the limitations of single-type features and enhance navigation accuracy. Analytical criteria for selecting multi-type features are introduced, which simultaneously improve computational efficiency and system navigation accuracy. For pose estimation, both absolute and relative pose estimation methods based on multi-type feature fusion are proposed, and multi-type feature normalization is established, which significantly improves system navigation accuracy and lays the groundwork for flexible application of joint absolute-relative estimation. The feasibility and effectiveness of the proposed method are validated through simulation experiments on asteroid 4769 Castalia.
Multi-Object Tracking (MOT) is a fundamental but computationally demanding task in computer vision, with particular challenges arising in occluded and densely populated environments. While contemporary tracking systems have demonstrated considerable progress, persistent limitations, notably frequent occlusion-induced identity switches and tracking inaccuracies, continue to impede reliable real-world deployment. This work introduces an advanced tracking framework that enhances association robustness through a two-stage matching paradigm combining spatial and appearance features. The proposed framework employs: (1) a Height-Modulated and Scale-Adaptive Spatial Intersection-over-Union (HMSIoU) metric for improved spatial correspondence estimation across variable object scales and partial occlusions; (2) a feature extraction module generating discriminative appearance descriptors for identity maintenance; and (3) a recovery association mechanism for refining matches between unassociated tracks and detections. Comprehensive evaluation on the standard MOT17 and MOT20 benchmarks demonstrates significant improvements in tracking consistency, with state-of-the-art performance across key metrics including HOTA (64), MOTA (80.7), IDF1 (79.8), and IDs (1379). These results substantiate the efficacy of our Cue-Tracker framework in complex real-world scenarios characterized by occlusions and crowd interactions.
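The HMSIoU metric above extends the standard box IoU used for spatial association in tracking. For reference, the base IoU between two axis-aligned boxes in (x1, y1, x2, y2) form is computed as below; the example boxes are arbitrary.

```python
def box_iou(a, b):
    """Intersection-over-Union of two axis-aligned boxes given as
    (x1, y1, x2, y2), the base metric that variants such as HMSIoU
    build upon."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two partially overlapping unit-overlap boxes.
iou = box_iou((0, 0, 2, 2), (1, 1, 3, 3))
```

In a two-stage matcher, this spatial score would form one half of the cost matrix, with an appearance-similarity term supplying the other.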
Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific, precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on a single-scale deep feature, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific, precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific, precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of the proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Funding: Supported by the National Natural Science Foundation of China (Nos. 51805376 and U1709208) and the Zhejiang Provincial Natural Science Foundation of China (Nos. LY20E050028 and LD21E050001).
Abstract: In the article "A Lightweight Approach for Skin Lesion Detection through Optimal Features Fusion" by Khadija Manzoor, Fiaz Majeed, Ansar Siddique, Talha Meraj, Hafiz Tayyab Rauf, Mohammed A. El-Meligy, Mohamed Sharaf, and Abd Elatty E. Abd Elgawad (Computers, Materials & Continua, 2022, Vol. 70, No. 1, pp. 1617–1630. DOI: 10.32604/cmc.2022.018621, URL: https://www.techscience.com/cmc/v70n1/44361), there was an error in the affiliation of the author Hafiz Tayyab Rauf. Instead of "Centre for Smart Systems, AI and Cybersecurity, Staffordshire University, Stoke-on-Trent, UK", the affiliation should be "Independent Researcher, Bradford, BD80HS, UK".
Funding: Supported by the Henan Province Key Research and Development Project (231111211300), the Central Government of Henan Province Guides Local Science and Technology Development Funds (Z20231811005), the Henan Province Key Research and Development Project (231111110100), the Henan Provincial Outstanding Foreign Scientist Studio (GZS2024006), and the Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan (Application and Overcoming Technical Barriers) (242103810028).
Abstract: The fusion of infrared and visible images should emphasize the salient targets of the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which extract low-frequency and high-frequency information from the image, respectively. Since this extraction may leave some information uncaptured, a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
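As a loose, purely illustrative analogue of the base/detail split (the paper's encoders are learned networks, not fixed filters), a moving-average decomposition of a 1-D signal shows the idea; all names here are assumptions:

```python
def base_detail_split(signal, radius=1):
    """Split a 1-D signal into a smooth base layer (moving average)
    and a detail layer (residual), a hand-crafted stand-in for the
    low-/high-frequency split described above."""
    n = len(signal)
    base = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        base.append(sum(signal[lo:hi]) / (hi - lo))
    # The detail layer is whatever the smoothing removed.
    detail = [s - b for s, b in zip(signal, base)]
    return base, detail
```

By construction, base plus detail reconstructs the input exactly, mirroring why a compensation path is needed only when the decomposition is lossy, as it is with learned encoders.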
Funding: King Saud University, Grant/Award Number: RSP2024R157.
Abstract: Biometric characteristics have played a vital role in security for the last few years. Human gait classification in video sequences is an important biometric attribute used for security purposes. A new framework for human gait classification in video sequences is proposed, using deep learning (DL) fusion assisted by posterior probability-based moth flame optimization (MFO). In the first step, the video frames are resized and fine-tuned by two pre-trained lightweight DL models, EfficientNetB0 and MobileNetV2, selected for their top-5 accuracy and small number of parameters. Both models are then trained through deep transfer learning, and the extracted deep features are fused using a voting scheme. In the last step, the authors develop a posterior probability-based MFO feature selection algorithm to select the best features, which are classified using several supervised learning methods. The publicly available CASIA-B dataset was employed for the experiments. On this dataset, the authors selected six angles (0°, 18°, 90°, 108°, 162°, and 180°) and obtained average accuracies of 96.9%, 95.7%, 86.8%, 90.0%, 95.1%, and 99.7%, respectively. The results demonstrate comparable improvement in accuracy and significantly reduced computational time with respect to recent state-of-the-art techniques.
Funding: Supported by the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (61321002), the Projects of Major International (Regional) Joint Research Program of the NSFC (61120106010), the Beijing Education Committee Cooperation Building Foundation Project, and the Program for Changjiang Scholars and Innovative Research Team in University (IRT1208).
Abstract: Image classification based on bag-of-words (BOW) has broad application prospects in the pattern recognition field, but shortcomings such as reliance on a single feature and low classification accuracy are apparent. To deal with this problem, this paper combines two ingredients: (i) three mutually complementary features are adopted to describe the images, namely the pyramid histogram of words (PHOW), the pyramid histogram of color (PHOC), and the pyramid histogram of orientated gradients (PHOG); (ii) an adaptive feature-weight-adjusted image categorization algorithm based on the SVM and decision-level fusion of the multiple features is employed. Experiments carried out on the Caltech101 database confirm the validity of the proposed approach: its classification accuracy is 7%–14% higher than that of traditional BOW methods. By fully utilizing global, local, and spatial information through multi-feature fusion and the pyramid structure formed by spatial multi-resolution decomposition of the image, the algorithm describes the feature information of the image much more completely and flexibly, yielding significant improvements in classification accuracy.
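Decision-level fusion of per-feature classifier outputs can be illustrated as a weighted score sum. This is a hedged sketch, not the paper's SVM pipeline; the function name, score values, and weights are invented for illustration:

```python
def fuse_decisions(score_lists, weights):
    """Decision-level fusion: combine per-feature-channel class scores
    (e.g. outputs of PHOW/PHOC/PHOG classifiers) by a weighted sum and
    return the winning class index together with the fused scores."""
    n_classes = len(score_lists[0])
    fused = [0.0] * n_classes
    for scores, w in zip(score_lists, weights):
        for c, s in enumerate(scores):
            fused[c] += w * s
    return max(range(n_classes), key=fused.__getitem__), fused
```

Adapting the weights per image or per class (rather than fixing them) is what makes the fusion "adaptive" in the sense described above.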
Funding: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and by the Soonchunhyang University Research Fund.
Abstract: Human Action Recognition (HAR) has been an active research topic in machine learning for the last few decades. Visual surveillance, robotics, and pedestrian detection are its main applications. Computer vision researchers have introduced many HAR techniques, but challenges such as redundant features and computational cost remain. In this article, we propose a new deep-learning method for HAR. Video frames are first pre-processed using a global contrast approach and then used to train a deep learning model via domain transfer learning, with the pre-trained ResNet-50 serving as the deep model. Features are extracted from two layers, the Global Average Pool (GAP) and the Fully Connected (FC) layer, and fused by Canonical Correlation Analysis (CCA). Features are then selected using a Shannon-entropy-based threshold function, and the selected features are finally passed to multiple classifiers for final classification. Experiments conducted on five publicly available datasets, IXMAS, UCF Sports, YouTube, UT-Interaction, and KTH, achieved accuracies of 89.6%, 99.7%, 100%, 96.7%, and 96.6%, respectively. Comparison with existing techniques shows that the proposed method provides improved accuracy for HAR and is computationally fast in terms of execution time.
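The Shannon-entropy threshold selection step might look like the following stdlib-Python sketch. The histogram binning, the threshold value, and the names are assumptions, not the authors' exact formulation:

```python
import math
from collections import Counter

def select_by_entropy(features, bins=4, threshold=0.5):
    """Shannon-entropy threshold selection: keep feature columns whose
    binned value distribution carries enough information (entropy in
    bits at or above `threshold`)."""
    n = len(features)
    keep = []
    for j in range(len(features[0])):
        col = [features[i][j] for i in range(n)]
        lo, hi = min(col), max(col)
        width = (hi - lo) / bins or 1.0  # constant column -> single bin
        counts = Counter(min(int((v - lo) / width), bins - 1) for v in col)
        h = -sum((c / n) * math.log2(c / n) for c in counts.values())
        if h >= threshold:
            keep.append(j)
    return keep
```

A constant (hence uninformative) feature has zero entropy and is dropped, which is the intuition behind entropy-thresholded selection of fused features.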
Funding: Supported by the National Natural Science Foundation of China (No. 62241109) and the Tianjin Science and Technology Commissioner Project (No. 20YDTPJC01110).
Abstract: An improved model based on You Only Look Once version 8 (YOLOv8) is proposed to solve the problem of low detection accuracy caused by the diversity of object sizes in optical remote sensing images. Firstly, the feature pyramid network (FPN) structure of the original YOLOv8 model is replaced by the generalized FPN (GFPN) structure from GiraffeDet to realize cross-layer and cross-scale adaptive feature fusion, enriching the semantic and spatial information of the feature map and improving the target detection ability of the model. Secondly, a pyramid pooling module, multi-atrous spatial pyramid pooling (MASPP), is designed using the ideas of atrous convolution and feature pyramid structure to extract multi-scale features, improving the model's handling of multi-scale objects. The experimental results show that the detection accuracy and mean average precision (mAP) of the improved YOLOv8 model on the DIOR dataset reach 92% and 87.9%, respectively 3.5% and 1.7% higher than those of the original model, demonstrating improved detection and classification of multi-scale optical remote sensing targets.
基金supported in part by the National Natural Science Foundation of China under Grants 62463002,62062021 and 62473033in part by the Guiyang Scientific Plan Project[2023]48–11,in part by QKHZYD[2023]010 Guizhou Province Science and Technology Innovation Base Construction Project“Key Laboratory Construction of Intelligent Mountain Agricultural Equipment”.
Abstract: Solar cell defect detection is crucial for quality inspection of photovoltaic power generation modules. In production, defect samples occur infrequently and exhibit random shapes and sizes, which makes collecting defective samples challenging. Additionally, the complex surface background of polysilicon cell wafers complicates the accurate identification and localization of defective regions. This paper proposes a novel Lightweight Multiscale Feature Fusion network (LMFF) to address these challenges. The network comprises a feature extraction network, a multi-scale feature fusion module (MFF), and a segmentation network. Specifically, the feature extraction network produces multi-scale feature outputs, and the MFF module fuses the multi-scale feature information effectively. To capture finer-grained multi-scale information from the fused features, we propose a multi-scale attention module (MSA) in the segmentation network to enhance the network's ability to detect small targets. Moreover, depthwise separable convolutions are introduced to construct depthwise separable residual blocks (DSR), reducing the model's parameter count. Finally, to validate the proposed method's defect segmentation and localization performance, we constructed three solar cell defect detection datasets: SolarCells, SolarCells-S, and PVEL-S. SolarCells and SolarCells-S are monocrystalline silicon datasets, and PVEL-S is a polycrystalline silicon dataset. Experimental results show that the IoU of our method on these three datasets reaches 68.5%, 51.0%, and 92.7%, and the F1-score reaches 81.3%, 67.5%, and 96.2%, respectively, surpassing other commonly used methods and verifying the effectiveness of our LMFF network.
Funding: Supported by the National Science Foundation of China (60972081), the Hubei Natural Science Foundation of China (2013CFC118, 2009CDA139), the Special Funds for Shenzhen Strategic New Industry Development (JCYJ20120616135936123), the Special Project on the Integration of Industry, Education and Research of the Ministry of Education of Guangdong Province (2011B090400477), the Special Project on the Integration of Industry, Education and Research of Zhuhai City (2011A050101005, 2012D0501990016), and the Zhuhai Key Laboratory Program for Science and Technique (2012D0501990026).
Abstract: This paper presents a robust face recognition algorithm using transform-domain multiple-feature fusion and linear regression. Transform-domain feature fusion provides comprehensive face information for recognition and decreases the effect of variations in illumination and pose. The holistic feature and the local feature are extracted by the discrete cosine transform and the Gabor wavelet transform, respectively, and then fused by weighted sum. The fused feature values are finally sent to a linear regression classifier for recognition. The algorithm is evaluated on the AR, ORL, and Yale B face databases. Experimental results show that the proposed algorithm is more robust than single-feature-based algorithms under pose and expression variations.
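The weighted-sum fusion of holistic and local feature vectors admits a very small sketch. This is illustrative only; the per-vector L2 normalisation and the weight `alpha` are assumptions, not values from the paper:

```python
def weighted_fuse(holistic, local, alpha=0.6):
    """Weighted-sum fusion of a holistic feature vector (e.g. from a
    DCT) and a local one (e.g. from Gabor wavelets), after L2
    normalisation so neither feature dominates by scale alone."""
    def norm(v):
        s = sum(x * x for x in v) ** 0.5 or 1.0
        return [x / s for x in v]
    h, l = norm(holistic), norm(local)
    return [alpha * a + (1 - alpha) * b for a, b in zip(h, l)]
```

The fused vector would then be passed to the classifier in place of either feature alone.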
Funding: Supported by the National Natural Science Foundation of China (Nos. 61032001 and 61002045).
Abstract: Designing high-efficiency detection algorithms for Synthetic Aperture Radar (SAR) imagery is essential for an operational SAR Automatic Target Recognition (ATR) system. This work abandons the strategy of visiting every pixel of the SAR imagery, as done in many traditional detection algorithms, and introduces a gridding-and-fusion idea based on different texture features to realize fast target detection. It first grids the original SAR imagery, yielding a set of grids to be classified into clutter grids and target grids, and then calculates the texture features within each grid. By fusing the calculated features, the target grids containing potential maneuvering targets are determined, and a dual-threshold segmentation technique is applied to the target grids to obtain the regions of interest. The fused texture features, including local statistical features and the Gray-Level Co-occurrence Matrix (GLCM), are investigated. The efficiency and superiority of the proposed algorithm were tested and verified on real SAR data by comparison with existing fast detection algorithms, and the results indicate its promising practical application value.
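A per-grid GLCM texture statistic can be computed as follows. This minimal sketch considers only horizontally adjacent pixel pairs and the contrast statistic; that restriction, and all names, are assumptions rather than the paper's full feature set:

```python
def glcm_contrast(grid, levels=4):
    """Grey-Level Co-occurrence Matrix over horizontally adjacent
    pixels in one grid cell, plus its contrast statistic. `grid` is a
    list of rows of integer grey levels in [0, levels)."""
    glcm = [[0] * levels for _ in range(levels)]
    for row in grid:
        for a, b in zip(row, row[1:]):
            glcm[a][b] += 1
    total = sum(map(sum, glcm)) or 1
    # Contrast weights co-occurrences by squared grey-level difference.
    contrast = sum(((i - j) ** 2) * glcm[i][j] / total
                   for i in range(levels) for j in range(levels))
    return glcm, contrast
```

A homogeneous clutter grid yields near-zero contrast, while a grid straddling a bright target edge yields a large value, which is the basis for classifying grids before any fine segmentation.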
Funding: This paper was supported by the Collaborative Research Project SEV under Grant No. 01100474 between Beijing University of Posts and Telecommunications and France Telecom R&D Beijing, the National Natural Science Foundation of China under Grant No. 90920001, and the Graduate Innovation Fund of SICE, BUPT, 2011.
Abstract: The rapid growth of multimedia content necessitates powerful technologies to filter, classify, index, and retrieve video documents more efficiently. However, the essential bottleneck of image and video analysis is the semantic gap: low-level features extracted by computers often fail to coincide with the high-level concepts interpreted by humans. In this paper, we present a generic scheme for detecting video semantic concepts based on machine learning over multiple visual features. Various global and local low-level visual features are systematically investigated, and a kernel-based learning method equips the concept detection system to explore the potential of these features. We then combine the different features and sub-systems through both classifier-level and kernel-level fusion, which contributes to a more robust system. The proposed system is tested on the TRECVID dataset, and the resulting Mean Average Precision (MAP) score is much better than the benchmark performance, showing that our concept detection engine provides a generic model and performs well on both object- and scene-type concepts.
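Kernel-level fusion rests on the fact that a convex combination of kernel (Gram) matrices is itself a valid kernel, so the combined matrix can be fed straight to a kernel learner. A toy sketch, with invented names and weights:

```python
def fuse_kernels(kernels, weights):
    """Kernel-level fusion: weighted sum of precomputed Gram matrices
    (one per feature type). With non-negative weights the result is
    again a positive semi-definite kernel matrix."""
    n = len(kernels[0])
    fused = [[0.0] * n for _ in range(n)]
    for K, w in zip(kernels, weights):
        for i in range(n):
            for j in range(n):
                fused[i][j] += w * K[i][j]
    return fused
```

Classifier-level fusion, by contrast, would combine the decision scores of separately trained kernel machines rather than their kernels.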
Funding: This work was supported by the National Natural Science Foundation of China under grant number 61572038, received by J.Z. in 2015. URL: https://isisn.nsfc.gov.cn/egrantindex/funcindex/prjsearch-list.
Abstract: Real-time detection of driver fatigue is of great significance for road traffic safety. In this paper, a novel driver fatigue detection method is proposed that can detect the driver's fatigue status around the clock. The driver's face images were captured by a camera with a colored lens and an infrared lens mounted above the dashboard. The landmarks of the driver's face were labeled and the eye area was segmented. Fatigue can then be detected by calculating the aspect ratios of the eyes, the duration of eye closure, the frequency of blinks, and the PERCLOS of both the colored and infrared channels. Based on the change of light intensity detected by a photosensitive device, the weight matrix of the colored and infrared features was adjusted adaptively to reduce the impact of lighting on fatigue detection. Video samples of the driver's face were recorded in the test vehicle. After training the classification model, the results showed that our method achieves high accuracy in driver fatigue detection in both daytime and nighttime.
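The eye-aspect-ratio and PERCLOS computations are standard and can be sketched directly. The six-landmark layout follows the common EAR convention, which is an assumption about the paper's exact landmark scheme; the closure threshold is likewise illustrative:

```python
def eye_aspect_ratio(eye):
    """EAR from six eye landmarks (p1..p6): the two vertical gaps over
    twice the horizontal span. Small values indicate a closed eye."""
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def perclos(ear_series, closed_thresh=0.2):
    """PERCLOS: fraction of frames in which the eye counts as closed."""
    closed = sum(1 for e in ear_series if e < closed_thresh)
    return closed / len(ear_series)
```

A sliding window of EAR values per channel (colored and infrared) would feed the adaptively weighted fusion described above.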
Funding: Supported by the National Natural Science Foundation of China (Nos. 61976098, 61871434, 61802136, 61876178), the Open Foundation of the Key Laboratory of Security Prevention Technology and Risk Assessment, People's Public Security University of China (No. 18AFKF11), the Science and Technology Bureau of Quanzhou (No. 2018C115R), and the Subsidized Project for Postgraduates' Innovative Fund in Scientific Research of Huaqiao University (No. 18014084008).
Abstract: Fabric retrieval is very challenging, since problems such as viewpoint variations, illumination changes, blots, and poor image quality are usually encountered in fabric images. In this work, a novel deep feature nonlinear fusion network (DFNFN) is proposed to nonlinearly fuse features learned from RGB and texture images to improve fabric retrieval. The texture images are obtained by describing the RGB fabric images with local binary pattern texture (LBP-Texture) features. The DFNFN first applies two feature-learning branches to deal with the RGB images and the corresponding LBP-Texture images simultaneously; each branch contains the same convolutional neural network (CNN) architecture but learns its parameters independently. A nonlinear fusion module (NFM) then concatenates the features produced by the two branches and nonlinearly fuses the concatenated features via a convolutional layer followed by a rectified linear unit (ReLU). The NFM is flexible, since it can be embedded at different depths of the DFNFN to find the best fusion position. Consequently, the DFNFN can optimally fuse features learned from RGB and LBP-Texture images to boost retrieval accuracy. Extensive experiments on the Fabric 1.0 dataset show that the proposed method is superior to many state-of-the-art methods.
Funding: This work was supported by the High-Tech Industry Science and Technology Innovation Leading Plan Project of Hunan Province under Grant 2020GK2026 (author B.Y., http://kjt.hunan.gov.cn/).
Abstract: Regular inspection of bridge cracks is crucial to bridge maintenance and repair, but traditional manual crack detection methods are time-consuming, dangerous, and subjective. At the same time, existing mainstream vision-based automatic crack detection algorithms struggle to detect fine cracks and to balance detection accuracy and speed. Therefore, this paper proposes a new bridge crack segmentation method based on a parallel attention mechanism and multi-scale feature fusion on top of the DeepLabV3+ framework. First, the improved lightweight MobileNetV2 network and dilated separable convolutions are integrated into the original DeepLabV3+ network to improve the original backbone network Xception and the atrous spatial pyramid pooling (ASPP) module, respectively, dramatically reducing the number of parameters in the network and accelerating the training and prediction speed of the model. Moreover, we introduce the parallel attention mechanism into the encoding and decoding stages, which enhances attention to the crack regions in both the channel and spatial dimensions and significantly suppresses the interference of various noises. Finally, we further improve the detection of fine cracks by introducing a multi-scale feature fusion module. Our results are validated on a self-made dataset, and the experiments show that our method is more accurate than other methods: its intersection over union (IoU) and F1-score (F1) increase to 77.96% and 87.57%, respectively. In addition, the number of parameters is only 4.10M, much smaller than in the original network, and the frame rate increases to 15 frames/s. The results prove that the proposed method meets the requirements for rapid and accurate detection of bridge cracks and is superior to other methods.
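The reported IoU and F1 metrics for binary segmentation reduce to simple confusion counts over the mask pixels. A minimal sketch, assuming flat 0/1 masks (names invented):

```python
def segmentation_scores(pred, truth):
    """Per-image IoU and F1 for binary segmentation masks given as
    flat 0/1 lists of equal length."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    denom = tp + fp + fn
    # Empty prediction and empty truth count as a perfect match.
    iou = tp / denom if denom else 1.0
    f1 = 2 * tp / (2 * tp + fp + fn) if denom else 1.0
    return iou, f1
```

Note that F1 (the Dice score) is always at least the IoU for the same counts, consistent with the 77.96% IoU vs. 87.57% F1 reported above.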
Funding: This research received funding from Duhok Polytechnic University.
Abstract: Collectively improving the accuracy of breast cancer image pattern recognition to an acceptable or desirable level remains challenging. Despite combining multiple schemes to achieve superior ultrasound image pattern recognition by reducing speckle noise, an enhanced technique has not been achieved. The purpose of this study is to introduce a feature-based fusion scheme based on enhanced uniform Local Binary Pattern (LBP) features and filtered noise reduction. To surmount the above limitations and achieve the aim of the study, a new descriptor that enhances the LBP features based on a new threshold is proposed. This paper proposes a multi-level fusion scheme for the automatic classification of static ultrasound images of breast cancer, attained in two stages. First, several images were generated from a single image using the pre-processing method: the median and Wiener filters were utilized to lessen the speckle noise and enhance the ultrasound image texture. This strategy allows the extraction of a powerful feature by reducing the overlap between the benign and malignant image classes. Second, the fusion mechanism produces diverse features from the different filtered images. The feasibility of using the LBP-based texture feature to categorize ultrasound images was demonstrated. The effectiveness of the proposed scheme was tested on 250 ultrasound images comprising 100 benign and 150 malignant images. The proposed method achieved very high accuracy (98%), sensitivity (98%), and specificity (99%). As a result, the fusion process, which supports a powerful decision based on different features produced from different filtered images, improved the results of the new LBP descriptor in terms of accuracy, sensitivity, and specificity.
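The basic LBP code and the "uniform" pattern test underlying such descriptors can be sketched as follows. This is the textbook form; the enhanced threshold proposed in the paper is not reproduced here, and all names are assumptions:

```python
def lbp_code(center, neighbors):
    """Basic LBP code for one pixel: threshold the 8 neighbours at the
    centre value and read the results as an 8-bit binary number."""
    return sum(1 << i for i, n in enumerate(neighbors) if n >= center)

def is_uniform(code):
    """A pattern is 'uniform' if its circular 8-bit string has at most
    two 0/1 transitions; uniform patterns capture edges and corners."""
    bits = [(code >> i) & 1 for i in range(8)]
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    return transitions <= 2
```

A uniform-LBP histogram over each filtered image (median, Wiener) would then supply the per-filter features that the fusion stage combines.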
Funding: Supported by the National Natural Science Foundation of China (62276092, 62303167), the Postdoctoral Fellowship Program (Grade C) of the China Postdoctoral Science Foundation (GZC20230707), the Key Science and Technology Program of Henan Province, China (242102211051, 242102211042, 212102310084), the Key Scientific Research Projects of Colleges and Universities in Henan Province, China (25A520009), the China Postdoctoral Science Foundation (2024M760808), and the Henan Province Medical Science and Technology Research Plan Joint Construction Project (LHGJ2024069).
Abstract: Feature fusion is an important technique in medical image classification that can improve diagnostic accuracy by integrating complementary information from multiple sources. Recently, Deep Learning (DL) has been widely used in pulmonary disease diagnosis, for instance of pneumonia and tuberculosis. However, traditional feature fusion methods often suffer from feature disparity, information loss, redundancy, and increased complexity, hindering the further extension of DL algorithms. To solve this problem, we propose a Graph-Convolution Fusion Network with Self-Supervised Feature Alignment (Self-FAGCFN) that addresses the limitations of traditional feature fusion in deep-learning-based medical image classification for respiratory diseases such as pneumonia and tuberculosis. The network integrates Convolutional Neural Networks (CNNs) for robust feature extraction from two-dimensional grid structures and Graph Convolutional Networks (GCNs), within a Graph Neural Network branch, to capture graph-structured features focused on significant node representations. An Attention-Embedding Ensemble Block is also included to capture critical features from the GCN outputs. To ensure effective feature alignment between the pre- and post-fusion stages, we introduce a feature alignment loss that minimizes disparities. Moreover, to address remaining limitations, namely inappropriate centroid discrepancies during feature alignment and class imbalance in the dataset, we develop a Feature-Centroid Fusion (FCF) strategy and a Multi-Level Feature-Centroid Update (MLFCU) algorithm, respectively. Extensive experiments on the public LungVision and Chest-Xray datasets demonstrate that the Self-FAGCFN model significantly outperforms existing methods in diagnosing pneumonia and tuberculosis, highlighting its potential for practical medical applications.
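A single graph-convolution step of a GCN branch can be illustrated with plain lists. The row-normalised propagation rule with self-loops used here is the common GCN form and an assumption about the paper's exact variant:

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution step: X' = ReLU(A_hat @ X @ W), where
    A_hat is the row-normalised adjacency with self-loops added."""
    n = len(adj)
    a_hat = [[adj[i][j] + (i == j) for j in range(n)] for i in range(n)]
    for row in a_hat:
        s = sum(row)
        for j in range(n):
            row[j] /= s
    def matmul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]
    out = matmul(matmul(a_hat, feats), weight)
    return [[max(0.0, v) for v in row] for row in out]
```

Each node's output mixes its own features with its neighbours', which is how graph-structured relations between image regions enter the fused representation.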
Funding: Supported by the National Natural Science Foundation of China (No. 62376287), the International Science and Technology Innovation Joint Base of Machine Vision and Medical Image Processing in Hunan Province (2021CB1013), and the Natural Science Foundation of Hunan Province (Nos. 2022JJ30762, 2023JJ70016).
Abstract: Globally, diabetic retinopathy (DR) is the primary cause of blindness, affecting millions of people worldwide. This widespread impact underscores the critical need for reliable and precise diagnostic techniques that ensure prompt diagnosis and effective treatment. Deep-learning-based automated diagnosis of diabetic retinopathy can facilitate early detection and treatment. However, traditional deep learning models that focus on local views often learn feature representations that are less discriminative at the semantic level, while models that focus on global semantic-level information might overlook critical, subtle local pathological features. To address this issue, we propose an adaptive multi-scale feature fusion network, AMSFuse, which can adaptively combine multi-scale global and local features without compromising their individual representation. Specifically, our model incorporates global features for extracting high-level contextual information from retinal images, while local features concurrently capture fine-grained details such as microaneurysms, hemorrhages, and exudates, which are critical for DR diagnosis. These global and local features are adaptively fused by a fusion block, followed by an Integrated Attention Mechanism (IAM) that refines the fused features by emphasizing relevant regions, thereby enhancing DR classification accuracy. Our model achieves 86.3% accuracy on the APTOS dataset and 96.6% on RFMiD, both comparable to state-of-the-art methods.
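One simple form such an adaptive fusion block can take is a softmax gate over the two feature streams. This is a hedged sketch of the general technique, not the AMSFuse fusion block itself; names and scores are invented:

```python
import math

def attention_fuse(global_feat, local_feat, scores):
    """Adaptive fusion: a softmax over two learned scalar scores gates
    a convex combination of the global and local feature vectors."""
    e = [math.exp(s) for s in scores]
    z = sum(e)
    wg, wl = e[0] / z, e[1] / z
    return [wg * g + wl * l for g, l in zip(global_feat, local_feat)]
```

In a trained network the scores would come from a small learned sub-module, so the gate can shift toward local detail for images dominated by fine lesions and toward global context otherwise.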
Funding: Supported by the National Natural Science Foundation of China (No. U2037602).
Abstract: To address the challenges encountered in visual navigation for asteroid landing using traditional point features, such as significant recognition and extraction errors, low computational efficiency, and limited navigation accuracy, a novel multi-type fusion visual navigation approach is proposed. This method aims to overcome the limitations of single-type features and enhance navigation accuracy. Analytical criteria for selecting multi-type features are introduced, which simultaneously improve computational efficiency and system navigation accuracy. Concerning pose estimation, both absolute and relative pose estimation methods based on multi-type feature fusion are proposed, and multi-type feature normalization is established, significantly improving system navigation accuracy and laying the groundwork for flexible application of joint absolute-relative estimation. The feasibility and effectiveness of the proposed method are validated through simulation experiments on asteroid 4769 Castalia.
Abstract: Multi-Object Tracking (MOT) is a fundamental but computationally demanding task in computer vision, with particular challenges arising in occluded and densely populated environments. While contemporary tracking systems have demonstrated considerable progress, persistent limitations, notably frequent occlusion-induced identity switches and tracking inaccuracies, continue to impede reliable real-world deployment. This work introduces an advanced tracking framework that enhances association robustness through a two-stage matching paradigm combining spatial and appearance features. The proposed framework employs: (1) a Height-Modulated and Scale-Adaptive Spatial Intersection-over-Union (HMSIoU) metric for improved spatial correspondence estimation across variable object scales and partial occlusions; (2) a feature extraction module generating discriminative appearance descriptors for identity maintenance; and (3) a recovery association mechanism for refining matches between unassociated tracks and detections. Comprehensive evaluation on the standard MOT17 and MOT20 benchmarks demonstrates significant improvements in tracking consistency, with state-of-the-art performance across key metrics including HOTA (64), MOTA (80.7), IDF1 (79.8), and IDs (1379). These results substantiate the efficacy of our Cue-Tracker framework in complex real-world scenarios characterized by occlusions and crowd interactions.
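The HMSIoU metric modulates a plain box IoU by height and scale terms; only the underlying plain IoU is sketched here, since the modulation terms are defined by the paper itself. The function name and box format are assumptions:

```python
def box_iou(a, b):
    """Plain Intersection-over-Union of two axis-aligned boxes given
    as (x1, y1, x2, y2) with x1 < x2 and y1 < y2."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

In a two-stage matcher, such a spatial score is typically combined with an appearance-descriptor similarity before running the assignment between tracks and detections.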
Funding: Supported by the National Natural Science Foundation of China (62302167, 62477013), the Natural Science Foundation of Shanghai (No. 24ZR1456100), the Science and Technology Commission of Shanghai Municipality (No. 24DZ2305900), and the Shanghai Municipal Special Fund for Promoting High-Quality Development of Industries (2211106).
Abstract: Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on single-scale deep features, neglecting shallow- and deeper-layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method: our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.