The initial noise present in depth images obtained with RGB-D sensors is a combination of hardware limitations and environmental factors; the limited capabilities of the sensors in turn degrade computer vision results. Common image denoising techniques tend to remove significant image detail along with the noise, because they rely on spatial- and frequency-domain filtering. The framework presented in this paper is a novel denoising model that combines Boruta-driven feature selection with a Long Short-Term Memory Autoencoder (LSTMAE). The Boruta algorithm identifies the most useful depth features, which are used to preserve spatial structure and reduce redundancy. An LSTMAE then processes these selected features, modeling depth pixel sequences to generate robust, noise-resistant representations. The encoder compresses the input into a latent space, which is then decoded to recover the clean image. Experiments on a benchmark dataset show that the proposed technique attains a PSNR of 45 dB and an SSIM of 0.90, which is 10 dB higher than the performance of conventional convolutional autoencoders and 15 times higher than that of the wavelet-based models. Moreover, the feature selection step decreases input dimensionality by 40%, resulting in a 37.5% reduction in training time and a real-time inference rate of 200 FPS. The Boruta-LSTMAE framework therefore offers a highly efficient and scalable system for depth image denoising, with strong potential for close-range 3D applications such as robotic manipulation and gesture-based interfaces.
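The abstract above quantifies denoising quality with PSNR. As a minimal, hedged illustration of the metric itself (not the paper's pipeline), PSNR in dB follows directly from the mean squared error between a clean and a denoised image; the `max_val=255` dynamic range is an assumption for 8-bit images:

```python
import numpy as np

def psnr(clean, denoised, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.zeros((4, 4))
noisy = clean + 10.0              # constant error of 10 -> MSE = 100
print(round(psnr(clean, noisy), 2))   # → 28.13
```

A 10 dB gain, as reported, corresponds to a tenfold reduction in MSE.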
Machine learning (ML) is increasingly applied to medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and increase computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method that selects the most relevant image features. This process trains the learning paradigm on similarity- and correlation-based features over different textural intensities and pixel distributions. Similarity between pixels across the various distribution patterns with high indexes is recommended for disease diagnosis. The correlation based on intensity and distribution is then analyzed to improve feature selection congruency. The most congruent pixels are sorted in descending order of selection, which identifies better regions than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection, improving the probability of feature selection regardless of the textures and patterns in the medical images. This process enhances the performance of ML applications across different medical image processing tasks. The proposed method improves accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. Mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
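The abstract above ranks features by correlation and sorts the most congruent ones in descending order. A minimal sketch of that idea, under the assumption (mine, not the paper's) that "correlation-based" means absolute Pearson correlation with the diagnostic label:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                       # 5 candidate image features
y = X[:, 2] * 3.0 + rng.normal(scale=0.1, size=100)  # feature 2 drives the label

# Absolute Pearson correlation of each feature with the label.
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
order = np.argsort(corr)[::-1]   # descending: most congruent feature first
print(order[0])                  # → 2
```

The paper's actual method works on textural intensities and pixel distributions; this only illustrates the ranking step.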
The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible image. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which extract low-frequency and high-frequency information from the image. Because this extraction may miss some information, a compensation encoder is proposed to supplement it. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines the low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
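The base/detail split described above can be sketched in its simplest classical form: a low-pass filter gives the base (low-frequency) layer and the residual gives the detail (high-frequency) layer. This is a hedged illustration of the decomposition idea only; the paper's encoders are learned, not a fixed 3x3 box filter:

```python
import numpy as np

def base_detail(img):
    """Split an image into a low-frequency base and a high-frequency detail part."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    h, w = img.shape
    # 3x3 box filter as the base (low-pass) layer
    base = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    detail = img - base               # residual keeps edges and texture
    return base, detail

img = np.zeros((6, 6)); img[:, 3:] = 9.0   # vertical step edge
base, detail = base_detail(img)
assert np.allclose(base + detail, img)     # lossless split, nothing "missed"
```

In a learned decomposition the split is not guaranteed lossless, which is exactly what the compensation encoder in the abstract is meant to address.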
The self-attention mechanism of Transformers, which captures long-range contextual information, has demonstrated significant potential in image segmentation. However, their ability to learn local, contextual relationships between pixels requires further improvement. Previous methods face challenges in efficiently managing multi-scale features of different granularities from the encoder backbone, leaving room for improvement in their global representation and feature extraction capabilities. To address these challenges, we propose a novel Decoder with Multi-Head Feature Receptors (DMHFR), which receives multi-scale features from the encoder backbone and organizes them into three feature groups of different granularities: coarse, fine-grained, and the full set. These groups are subsequently processed by Multi-Head Feature Receptors (MHFRs) after feature capture and modeling operations. The MHFRs comprise two Three-Head Feature Receptors (THFRs) and one Four-Head Feature Receptor (FHFR). Each group of features passes through these MHFRs and is then fed into axial transformers, which help the model capture long-range dependencies within the features. The three MHFRs produce three distinct feature outputs. The output of the FHFR serves as auxiliary features in the prediction head, and the prediction outputs and their losses are ultimately aggregated. Experimental results show that the Transformer using DMHFR outperforms 15 state-of-the-art (SOTA) methods on five public datasets. Specifically, it achieved significant improvements in mean DICE scores over the classic Parallel Reverse Attention Network (PraNet) method, with gains of 4.1%, 2.2%, 1.4%, 8.9%, and 16.3% on the CVC-ClinicDB, Kvasir-SEG, CVC-T, CVC-ColonDB, and ETIS-LaribPolypDB datasets, respectively.
Due to the limitations of existing imaging hardware, obtaining high-resolution hyperspectral images is challenging. Hyperspectral image super-resolution (HSI SR) has been a very attractive research topic in computer vision, attracting the attention of many researchers. However, most HSI SR methods focus on the tradeoff between spatial resolution and spectral information, and cannot guarantee efficient extraction of image information. In this paper, a multidimensional features network (MFNet) for HSI SR is proposed, which simultaneously learns and fuses the spatial, spectral, and frequency features of HSI. Spatial features contain rich local details, spectral features contain the information in and correlation between spectral bands, and frequency features reflect the global information of the image and can be used to obtain its global context. Fusing the three features better guides super-resolution, yielding higher-quality high-resolution hyperspectral images. In MFNet, a frequency feature extraction module (FFEM) extracts the frequency feature. On this basis, a multidimensional features extraction module (MFEM) is designed to learn and fuse multidimensional features. Experimental results on two public datasets demonstrate that MFNet achieves state-of-the-art performance.
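The claim above that frequency features capture global image information rests on a basic Fourier property: every spectral coefficient depends on every pixel. A minimal sketch of a frequency feature (a log-magnitude spectrum; the paper's FFEM is a learned module, so this is illustrative only):

```python
import numpy as np

def frequency_feature(img):
    """Log-magnitude spectrum: each coefficient is a global descriptor of the image."""
    spec = np.fft.fftshift(np.fft.fft2(img))   # DC component moved to the center
    return np.log1p(np.abs(spec))

img = np.ones((8, 8))
feat = frequency_feature(img)
# A constant image has all its energy in the DC bin (center after fftshift).
print(np.unravel_index(np.argmax(feat), feat.shape))
```

This is why a single frequency coefficient can encode context that a local convolution with a small receptive field cannot.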
Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on a single-scale deep feature, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Lunar Laser Ranging places extremely high requirements on the pointing accuracy of the telescopes used. To improve pointing accuracy and address the insufficiently accurate telescope pointing correction achieved by tracking stars over the all-sky region, we propose a processing scheme that selects larger lunar craters near the Lunar Corner Cube Retroreflector as reference features for computing telescope pointing bias. Accurately determining the position of the craters in the images is crucial for calculating the pointing bias; therefore, we propose a method for accurately calculating crater position based on lunar surface feature matching. This method uses matched feature points obtained from image feature matching, with a deep learning method to solve the image transformation matrix. The known position of a crater in a reference image is mapped through this matrix to calculate the crater position in the target image. We validate this method using craters near the Lunar Corner Cube Retroreflectors of Apollo 15 and Luna 17 and find that the calculated position of a crater on the target image falls on the center of the crater, even for image features with large distortion near the lunar limb. The maximum image matching error is approximately 1″, and the minimum is only 0.47″, which meets the pointing requirements of Lunar Laser Ranging. This method provides a new technical means for high-precision pointing bias calculation in the Lunar Laser Ranging system.
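The mapping step described above — pushing a known crater position through a solved image transformation matrix — reduces to a homogeneous-coordinate multiply. A hedged sketch with an illustrative 3x3 projective matrix `H` (the numbers are made up; the paper solves its matrix from matched feature points via deep learning):

```python
import numpy as np

# Hypothetical 3x3 projective transform, standing in for the solved matrix.
H = np.array([[1.02,  0.01,  5.0],
              [-0.01, 0.98, -3.0],
              [1e-5,  0.0,   1.0]])

def map_point(H, xy):
    """Map a known crater position from the reference image to the target image."""
    v = H @ np.array([xy[0], xy[1], 1.0])   # lift to homogeneous coordinates
    return v[:2] / v[2]                     # divide out the projective scale

print(map_point(H, (100.0, 200.0)))
```

The division by `v[2]` is what lets a single matrix model the perspective distortion that grows near the lunar limb.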
Recent advances in convolutional neural networks (CNNs) have fostered progress in object recognition and semantic segmentation, which in turn has improved the performance of hyperspectral image (HSI) classification. Nevertheless, the difficulty of high-dimensional feature extraction and the shortage of small training samples seriously hinder further development of HSI classification. In this paper, we propose a novel algorithm for HSI classification based on a three-dimensional (3D) CNN and a feature pyramid network (FPN), called 3D-FPN. The framework contains a principal component analysis, a feature extraction structure, and a logistic regression. Specifically, the FPN built with 3D convolutions not only retains the advantages of 3D convolution for fully extracting spectral-spatial feature maps, but also concentrates on more detailed information and performs multi-scale feature fusion. This method avoids excessive model complexity and is suitable for small-sample hyperspectral classification with varying categories and spatial resolutions. To test the performance of the proposed 3D-FPN method, rigorous experimental analysis was performed on three public hyperspectral data sets and hyperspectral data from the GF-5 satellite. Quantitative and qualitative results indicate that the proposed method attains the best performance among current state-of-the-art end-to-end deep learning-based methods.
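The PCA stage named above addresses the high-dimensional-feature problem by shrinking the spectral axis before the 3D CNN sees the cube. A minimal sketch of that reduction via SVD (the cube shape and component count are illustrative, not the paper's settings):

```python
import numpy as np

def pca_reduce(cube, k):
    """Reduce the spectral dimension of an (H, W, B) hyperspectral cube to k components."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)   # one row per pixel spectrum
    X -= X.mean(axis=0)                          # center each band
    # Principal spectral directions from the SVD of the pixel-by-band matrix.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return (X @ Vt[:k].T).reshape(H, W, k)

cube = np.random.default_rng(1).normal(size=(5, 5, 30))
reduced = pca_reduce(cube, 10)
print(reduced.shape)   # (5, 5, 10)
```

Spatial structure is untouched; only the band axis is compressed, which is what keeps the subsequent 3D convolutions tractable on small training sets.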
Hematoxylin and Eosin (H&E) images, popularly used in the field of digital pathology, often pose challenges due to their limited color richness, hindering the differentiation of subtle cell features crucial for accurate classification. Enhancing the visibility of these elusive cell features helps train robust deep-learning models. However, the selection and application of image processing techniques for such enhancement have not been systematically explored in the research community. To address this challenge, we introduce Salient Features Guided Augmentation (SFGA), an approach that strategically integrates machine learning and image processing. SFGA utilizes machine learning algorithms to identify crucial features within cell images, subsequently mapping these features to appropriate image processing techniques to enhance training images. By emphasizing salient features and aligning them with corresponding image processing methods, SFGA is designed to enhance the discriminating power of deep learning models in cell classification tasks. Our research undertakes a series of experiments, each exploring the performance of different datasets and data enhancement techniques in classifying cell types, highlighting the significance of data quality and enhancement in mitigating overfitting and distinguishing cell characteristics. Specifically, SFGA focuses on identifying tumor cells in tissue for extranodal extension detection, with the SFGA-enhanced dataset showing notable advantages in accuracy. We conducted a preliminary study of five experiments, in which the accuracy of the pleomorphism experiment improved significantly from 50.81% to 95.15%. The accuracy of the other four experiments also increased, with improvements ranging from 3 to 43 percentage points. Our preliminary study shows the potential to enhance the diagnostic accuracy of deep learning models and proposes a systematic approach that could improve cancer diagnosis, contributing a first step toward using SFGA in medical image enhancement.
The application of transformer networks and feature fusion models in medical image segmentation has attracted considerable attention within the academic community. Nevertheless, two main obstacles persist: (1) the limitations of the Transformer network in handling locally detailed features, and (2) the considerable loss of feature information in current feature fusion modules. To solve these issues, this study first presents a refined feature extraction approach, employing a double-branch feature extraction network to capture complex multi-scale local and global information from images. Subsequently, we propose a low-loss feature fusion method, the Multi-branch Feature Fusion Enhancement Module (MFFEM), which realizes effective feature fusion with minimal loss. Simultaneously, a cross-layer cross-attention fusion module (CLCA) is adopted to further achieve adequate feature fusion by enhancing the interaction between encoders and decoders of various scales. Finally, the feasibility of our method was verified on the Synapse and ACDC datasets, demonstrating its competitiveness. The average DSC (%) was 83.62 and 91.99, respectively, and the average HD95 (mm) was reduced to 19.55 and 1.15, respectively.
A wavelet-based local and global feature fusion network (LAGN) is proposed for low-light image enhancement, aiming to enhance image details and restore colors in dark areas. This study addresses three key issues in low-light image enhancement: enhancing low-light images with LAGN to preserve image details and colors; extracting image edge information via the wavelet transform to enhance image details; and extracting local and global image features through convolutional neural networks and a Transformer to improve image contrast. Comparisons with state-of-the-art methods on two datasets verify that LAGN achieves the best performance in terms of detail, brightness, and contrast.
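The wavelet step above separates edge information from smooth content. A minimal sketch of how that works, using a one-level 2D Haar transform (the simplest wavelet; the paper does not state which wavelet family it uses, so this is purely illustrative):

```python
import numpy as np

def haar_1level(img):
    """One-level 2D Haar split: approximation + horizontal/vertical/diagonal detail."""
    tl, tr = img[0::2, 0::2], img[0::2, 1::2]
    bl, br = img[1::2, 0::2], img[1::2, 1::2]
    a = (tl + tr + bl + br) / 4   # low-pass approximation
    h = (tl + tr - bl - br) / 4   # horizontal-edge detail
    v = (tl - tr + bl - br) / 4   # vertical-edge detail
    d = (tl - tr - bl + br) / 4   # diagonal detail
    return a, h, v, d

img = np.zeros((4, 4)); img[:, 1:] = 8.0   # vertical edge
a, h, v, d = haar_1level(img)
# The edge shows up only in the vertical-detail subband.
```

Enhancing the detail subbands and inverting the transform is one standard way such a network can sharpen edges without amplifying flat regions.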
When detecting objects in images taken by Unmanned Aerial Vehicles (UAVs), the large number of objects and high proportion of small objects pose huge challenges for detection algorithms based on the You Only Look Once (YOLO) framework, making tasks that demand high precision difficult. To address these problems, this paper proposes a high-precision object detection algorithm based on YOLOv10s. First, a Multi-branch Enhancement Coordinate Attention (MECA) module is proposed to enhance feature extraction capability. Second, a Multilayer Feature Reconstruction (MFR) mechanism is designed to fully exploit multilayer features, enriching object information while removing redundant information. Finally, an MFR Path Aggregation Network (MFR-Neck) is constructed, which integrates multi-scale features to improve the network's ability to perceive objects of varying sizes. Experimental results demonstrate that the proposed algorithm increases average detection accuracy by 14.15% on the VisDrone dataset compared to YOLOv10s, effectively enhancing object detection precision in UAV-taken images.
Generative image steganography is a technique that directly generates stego images from secret information. Unlike traditional methods, it theoretically resists steganalysis because there is no cover image. Existing generative image steganography methods generally perform well, but there is still room to improve both the quality of stego images and the accuracy of secret information extraction. Therefore, this paper proposes a generative image steganography algorithm based on attribute feature transformation and an invertible mapping rule. First, the reference image is disentangled by a content encoder and an attribute encoder to obtain content features and attribute features, respectively. Then, a mean mapping rule is introduced to map the binary secret information into a noise vector conforming to the distribution of the attribute features. This noise vector is input into the generator to produce an attribute-transformed stego image that retains the content features of the reference image. Additionally, we design an adversarial loss, a reconstruction loss, and an image diversity loss to train the proposed model. Experimental results demonstrate that the stego images generated by the proposed method are of high quality, with an average extraction accuracy of 99.4% for the hidden information. Furthermore, since the stego image has a distribution similar to that of an attribute-transformed image without secret information, it effectively resists both subjective and objective steganalysis.
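The core idea above — an invertible mapping from secret bits to a noise vector that still matches the expected feature distribution — can be sketched as follows. This is NOT the paper's mean mapping rule, whose details are not given in the abstract; it is a simple sign-based stand-in for the concept of an invertible bits-to-noise map:

```python
import numpy as np

rng = np.random.default_rng(7)
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])

# Each bit fixes the sign of a half-normal sample, so the resulting vector
# is still marginally standard Gaussian — indistinguishable noise.
noise = np.abs(rng.normal(size=bits.size)) * np.where(bits == 1, 1.0, -1.0)

recovered = (noise > 0).astype(int)   # extraction: invert the map by reading signs
assert np.array_equal(recovered, bits)
```

The invertibility is what makes extraction exact; the distribution-matching is what makes the vector plausible as ordinary attribute noise.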
Nonlinear transforms have significantly advanced learned image compression (LIC), particularly those using residual blocks. Such a transform enhances nonlinear expressive ability and obtains compact feature representations by enlarging the receptive field, which determines how the convolution process extracts features in a high-dimensional feature space. However, its functionality is restricted to the spatial dimension and network depth, limiting further improvements in network performance due to insufficient information interaction and representation. Crucially, the potential of the high-dimensional feature space in the channel dimension and the exploration of network width/resolution remain largely untapped. In this paper, we consider nonlinear transforms from the perspective of feature space, defining high-dimensional feature spaces along different dimensions and investigating their specific effects. First, we introduce dimension-increasing and dimension-decreasing transforms in both the channel and spatial dimensions to obtain a high-dimensional feature space and achieve better feature extraction. Second, we design a channel-spatial fusion residual transform (CSR), which incorporates multi-dimensional transforms for a more effective representation. Furthermore, we simplify the proposed fusion transform to obtain a slim architecture (CSR-sm), balancing network complexity and compression performance. Finally, we build the overall network with stacked CSR transforms to achieve better compression and reconstruction. Experimental results demonstrate that the proposed method achieves superior rate-distortion performance compared to existing LIC methods and traditional codecs. Specifically, our proposed method achieves a 9.38% BD-rate reduction over VVC on the Kodak dataset.
This paper proposes a novel method for the automatic diagnosis of keratitis using feature vector quantization and self-attention mechanisms (ADK_FVQSAM). First, high-level features are extracted using the DenseNet121 backbone network, followed by adaptive average pooling to scale the features to a fixed length. Subsequently, product quantization with residuals (PQR) is applied to convert continuous feature vectors into discrete feature representations, preserving essential information insensitive to image quality variations. The quantized and original features are concatenated and fed into a self-attention mechanism to capture keratitis-related features. Finally, these enhanced features are classified through a fully connected layer. Experiments on clinical low-quality (LQ) images show that ADK_FVQSAM achieves accuracies of 87.7%, 81.9%, and 89.3% for keratitis, other corneal abnormalities, and normal corneas, respectively. Compared to DenseNet121, Swin Transformer, and InceptionResNet, ADK_FVQSAM improves average accuracy by 3.1%, 11.3%, and 15.3%, respectively. These results demonstrate that ADK_FVQSAM significantly enhances keratitis recognition performance on LQ slit-lamp images, offering a practical approach for clinical application.
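The quantization-with-residuals step above can be illustrated at toy scale: quantize a vector against a first codebook, then quantize the leftover residual against a second one, so the reconstruction error shrinks stage by stage. The two-entry codebooks below are invented for illustration; the paper's PQR operates on pooled DenseNet121 features with learned codebooks:

```python
import numpy as np

# Toy two-stage residual quantization with hypothetical 2-entry codebooks.
cb1 = np.array([[0.0, 0.0], [1.0, 1.0]])    # first-stage codebook
cb2 = np.array([[0.0, 0.0], [0.1, -0.1]])   # residual codebook

def rq_encode(x):
    i = np.argmin(((cb1 - x) ** 2).sum(axis=1))   # nearest first-stage code
    r = x - cb1[i]                                # residual left over
    j = np.argmin(((cb2 - r) ** 2).sum(axis=1))   # nearest residual code
    return i, j

def rq_decode(i, j):
    return cb1[i] + cb2[j]    # reconstruction: sum of the two codewords

x = np.array([1.08, 0.92])
i, j = rq_encode(x)
print(rq_decode(i, j))
```

Small perturbations of `x` (e.g. from image-quality noise) that stay within a cell map to the same discrete codes, which is the insensitivity the abstract relies on.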
Imaging hyperspectral technology has the distinctive advantages of non-destructive, non-contact measurement and the integration of spectral and spatial data. These characteristics enable new methodologies for intelligent geological sensing in tunnels and other underground engineering projects. However, the in situ acquisition and rapid classification of hyperspectral images underground still face great challenges, including the difficulty of obtaining uniform hyperspectral images and the complexity of deploying sophisticated models on mobile platforms. This study proposes an intelligent lithology identification method based on partition feature extraction of hyperspectral images. First, pixel-level hyperspectral information from representative lithological regions is extracted and fused to obtain partition features of rock hyperspectral images. Subsequently, an SG-SNV-PCA-DNN (SSPD) model is integrated, specifically designed to optimize rock hyperspectral data, perform spectral dimensionality reduction, and identify lithology. In an experimental study involving 3420 hyperspectral images, the SSPD identification model achieved the highest accuracy on the testing set, reaching 98.77%. Moreover, the SSPD model was 18.5% faster than the unprocessed model, with an accuracy improvement of 5.22%. In contrast, the ResNet-101 model, used for point-by-point identification based on non-partitioned features, achieved a maximum accuracy of 97.86% on the testing set. In addition, the partition feature extraction methods significantly reduce computational complexity. An objective evaluation of various models demonstrated that the SSPD model exhibited superior performance, achieving a precision (P) of 99.46%, a recall (R) of 99.44%, and an F1 score (F1) of 99.45%. Additionally, pioneering in situ detection work was carried out in a tunnel using underground hyperspectral imaging technology.
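The SNV stage in the SG-SNV-PCA-DNN pipeline above is a standard spectral pretreatment: each spectrum is normalized to zero mean and unit variance, suppressing the scatter and illumination offsets that make underground hyperspectral images non-uniform. A minimal sketch (the example spectra are invented):

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: per-spectrum zero mean, unit variance."""
    mu = spectra.mean(axis=1, keepdims=True)
    sd = spectra.std(axis=1, keepdims=True)
    return (spectra - mu) / sd

spectra = np.array([[2.0, 4.0, 6.0],
                    [10.0, 20.0, 30.0]])   # same shape, different scale and offset
out = snv(spectra)
# Multiplicative/additive scatter differences vanish: both rows normalize alike.
assert np.allclose(out[0], out[1])
```

The Savitzky-Golay (SG) smoothing and PCA stages would typically precede and follow this step, respectively, before the DNN classifier.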
To realize high-precision automatic measurement of two-dimensional geometric features on parts, a cooperative measurement system based on machine vision is constructed. Its hardware structure, functional composition, and working principle are introduced. The mapping relationship between the feature image coordinates and the measuring space coordinates is established. A method of measuring-path planning for small field of view (FOV) images is proposed. With the cooperation of a panoramic image of the object to be measured, small FOV images with high object-plane resolution are acquired automatically. Then, the auxiliary measuring characteristics are constructed and the parameters of the features to be measured are automatically extracted. Experimental results show that the absolute value of relative error is less than 0.03% when applying the cooperative measurement system to gauge a hole distance of 100 mm nominal size. When the object-plane resolving power of the small FOV images is 16 times that of the large FOV image, the measurement accuracy of the small FOV images is improved by a factor of 14 compared with the large FOV image. The system is suitable for high-precision automatic measurement of two-dimensional complex geometric features distributed on large-scale parts.
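The mapping from image coordinates to measuring-space coordinates mentioned above is commonly established by fitting an affine transform to known calibration point pairs. A hedged sketch under an assumed scale of 0.05 mm/px and an invented offset (the paper does not give its calibration values):

```python
import numpy as np

# Known pixel positions of calibration marks and their measured mm positions.
px = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], dtype=float)
mm = px * 0.05 + np.array([10.0, 20.0])   # illustrative ground truth

# Solve the 2D affine map mm = [px, 1] @ coef by least squares.
A = np.hstack([px, np.ones((len(px), 1))])
coef, *_ = np.linalg.lstsq(A, mm, rcond=None)   # 3x2 affine parameters

pred = A @ coef
assert np.allclose(pred, mm)   # exact recovery for a noise-free affine relation
```

With real calibration data the residual of this fit bounds the coordinate-mapping error that propagates into every measured distance.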
An improved estimation of the motion vectors of feature points is proposed for tracking moving objects in a dynamic image sequence. Feature points are first extracted by the improved minimum intensity change (MIC) algorithm. The matching points of these feature points are then determined by adaptive rood pattern searching. Based on the random sample consensus (RANSAC) method, the background motion is finally compensated by the parameters of an affine transform of the background motion. With reasonable morphological filtering, the moving objects are completely extracted from the background and then tracked accurately. Experimental results show that the improved method succeeds at background motion compensation and offers great promise for tracking moving objects in dynamic image sequences.
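The RANSAC step above works because background feature points share one consistent motion while moving objects produce outlier vectors. A minimal sketch using a pure-translation model instead of the paper's affine transform (simpler, but the sample-score-keep-best loop is the same):

```python
import numpy as np

rng = np.random.default_rng(3)
true_shift = np.array([5.0, -2.0])
src = rng.uniform(0, 100, size=(30, 2))
dst = src + true_shift
dst[:5] += rng.uniform(20, 40, size=(5, 2))   # outliers: moving-object points

best_shift, best_inliers = None, -1
for _ in range(50):                            # RANSAC with a 1-point model
    k = rng.integers(len(src))
    shift = dst[k] - src[k]                    # candidate background motion
    inliers = np.sum(np.linalg.norm(dst - src - shift, axis=1) < 1.0)
    if inliers > best_inliers:                 # keep the best-supported candidate
        best_inliers, best_shift = inliers, shift
assert np.allclose(best_shift, true_shift)
```

Subtracting the recovered background motion leaves only the moving objects, which morphological filtering then cleans up.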
An approach to color image segmentation is proposed based on the contributions of color features to segmentation rather than the choice of a particular color space. The determination of effective color features depends on the analysis of various color features from each tested color image via the designed feature encoding. This differs from previous methods in that a self-organizing feature map (SOFM) is used to construct the feature encoding, so the encoding can self-organize the effective features for different color images. Fuzzy clustering is applied for the final segmentation once the well-suited color features and the initial parameters are available. The proposed method has been applied to segmenting different types of color images, and the experimental results show that it outperforms the classical clustering method. The study shows that the feature encoding approach offers great promise in automating and optimizing the segmentation of color images.
Roadbed disease detection is essential for maintaining road functionality. Ground penetrating radar (GPR) enables non-destructive detection without drilling. However, current identification often relies on manual inspection, which requires extensive experience, suffers from low efficiency, and is highly subjective. As the results are presented as radar images, image processing methods can be applied for fast and objective identification. Deep learning-based approaches now offer a robust solution for automated roadbed disease detection. This study proposes an enhanced Faster Region-based Convolutional Neural Network (R-CNN) framework integrating ResNet-50 as the backbone and two-dimensional discrete Fourier transform (2D-DFT) spectra for frequency-domain feature fusion. A dedicated GPR image dataset comprising 1650 annotated images was constructed and augmented to 6600 images via median filtering, histogram equalization, and binarization. The proposed model segments defect regions, applies binary masking, and fuses frequency-domain features to improve small-target detection against noisy backgrounds. Experimental results show that the improved Faster R-CNN achieves a mean Average Precision (mAP) of 0.92, a 0.22 increase over the baseline. Precision improved by 26% while recall remained stable at 87%. The model was further validated on real urban road data, demonstrating robust detection capability even under interference. These findings highlight the potential of combining GPR with deep learning for efficient, non-destructive roadbed health monitoring.
Abstract: The noise present in depth images obtained with RGB-D sensors stems from a combination of hardware limitations and environmental factors, and it degrades downstream computer vision results. Common image denoising techniques, being based on spatial- and frequency-domain filtering, tend to remove significant image details along with the noise. The framework presented in this paper is a novel denoising model that combines Boruta-driven feature selection with a Long Short-Term Memory Autoencoder (LSTMAE). The Boruta algorithm identifies the most useful depth features, which are used to preserve spatial structure and reduce redundancy. An LSTMAE then processes these selected features and models depth pixel sequences to generate robust, noise-resistant representations. The encoder compresses the input into a latent space, which is then decoded to retrieve the clean image. Experiments on a benchmark dataset show that the proposed technique attains a PSNR of 45 dB and an SSIM of 0.90, which is 10 dB higher than conventional convolutional autoencoders and 15 dB higher than wavelet-based models. Moreover, the feature selection step decreases the input dimensionality by 40%, resulting in a 37.5% reduction in training time and a real-time inference rate of 200 FPS. The Boruta-LSTMAE framework therefore offers an efficient and scalable system for depth image denoising, with strong potential for close-range 3D applications such as robotic manipulation and gesture-based interfaces.
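As an editorial aside, the two quality metrics quoted above (PSNR in dB and a single-window SSIM) can be computed in a few lines of NumPy. This is a generic sketch of the standard metric definitions, not the authors' evaluation code; `global_ssim` computes SSIM over the whole image rather than averaging local windows as the full metric does.

```python
import numpy as np

def psnr(ref, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=255.0):
    """Single-window (global) SSIM; the full metric averages this over local windows."""
    x, y = x.astype(np.float64), y.astype(np.float64)
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

For identical images PSNR is infinite and SSIM is exactly 1, which makes both functions easy to sanity-check.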
Funding: The Deanship of Scientific Research at King Khalid University funded this work through the Large Group Research Project under grant number RGP2/421/45; supported via funding from Prince Sattam bin Abdulaziz University, project number PSAU/2024/R/1446; supported by the Researchers Supporting Project Number (UM-DSR-IG-2023-07), Almaarefa University, Riyadh, Saudi Arabia; supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (No. 2021R1F1A1055408).
Abstract: Machine learning (ML) is increasingly applied for medical image processing with appropriate learning paradigms. These applications include analyzing images of various organs, such as the brain, lung, and eye, to identify specific flaws/diseases for diagnosis. The primary concern of ML applications is the precise selection of flexible image features for pattern detection and region classification. Most of the extracted image features are irrelevant and lead to an increase in computation time. Therefore, this article uses an analytical learning paradigm to design a Congruent Feature Selection Method to select the most relevant image features. This process trains the learning paradigm using similarity- and correlation-based features over different textural intensities and pixel distributions. The similarity between pixels over the various distribution patterns with high indexes is recommended for disease diagnosis. Later, the correlation based on intensity and distribution is analyzed to improve the feature selection congruency. The more congruent pixels are then sorted in descending order of selection, which identifies better regions than the distribution alone. The learning paradigm is then trained using intensity- and region-based similarity to maximize the chances of selection. Therefore, the probability of feature selection, regardless of textures and medical image patterns, is improved. This process enhances the performance of ML applications for different medical image processing tasks. The proposed method improves the accuracy, precision, and training rate by 13.19%, 10.69%, and 11.06%, respectively, compared to other models for the selected dataset. The mean error and selection time are also reduced by 12.56% and 13.56%, respectively, compared to the same models and dataset.
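The correlation analysis and descending-order sorting step described above can be illustrated with a generic feature-ranking routine. This is a minimal sketch using Pearson correlation against a target; the function name and interface are hypothetical, not the paper's implementation.

```python
import numpy as np

def rank_features_by_correlation(X, y):
    """Rank feature columns of X by |Pearson correlation| with target y,
    in descending order (most congruent feature first)."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc)
    r = (Xc.T @ yc) / np.where(denom == 0, 1.0, denom)  # guard constant columns
    return np.argsort(-np.abs(r))
```

A column that duplicates the target ranks first, since its absolute correlation is 1.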
Funding: Supported by the Henan Province Key Research and Development Project (231111211300); the Central Government of Henan Province Guides Local Science and Technology Development Funds (Z20231811005); the Henan Province Key Research and Development Project (231111110100); the Henan Provincial Outstanding Foreign Scientist Studio (GZS2024006); and the Henan Provincial Joint Fund for Scientific and Technological Research and Development Plan (Application and Overcoming Technical Barriers) (242103810028).
Abstract: The fusion of infrared and visible images should emphasize the salient targets in the infrared image while preserving the textural details of the visible images. To meet these requirements, an autoencoder-based method for infrared and visible image fusion is proposed. The encoder, designed according to the optimization objective, consists of a base encoder and a detail encoder, which are used to extract low-frequency and high-frequency information from the image. This extraction may leave some information uncaptured, so a compensation encoder is proposed to supplement the missing information. Multi-scale decomposition is also employed to extract image features more comprehensively. The decoder combines low-frequency, high-frequency, and supplementary information to obtain multi-scale features. Subsequently, an attention strategy and a fusion module are introduced to perform multi-scale fusion for image reconstruction. Experimental results on three datasets show that the fused images generated by this network effectively retain salient targets while being more consistent with human visual perception.
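The base/detail split above is performed by learned encoders; as a loose analogy only, the same low-frequency/high-frequency decomposition can be sketched with a classical box-blur split, where the blurred image is the "base" and the residual is the "detail". This is an illustrative stand-in, not the paper's learned encoders.

```python
import numpy as np

def base_detail_split(img, k=3):
    """Split an image into a low-frequency base (k x k box blur) and a
    high-frequency detail layer; base + detail reconstructs the input."""
    pad = k // 2
    padded = np.pad(np.asarray(img, dtype=np.float64), pad, mode="edge")
    h, w = img.shape
    base = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    detail = img - base
    return base, detail
```

By construction the two layers sum back to the input, and a constant image has zero detail.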
Funding: Supported by the Xiamen Medical and Health Guidance Project in 2021 (No. 3502Z20214ZD1070) and by a grant from the Guangxi Key Laboratory of Machine Vision and Intelligent Control, China (No. 2023B02).
Abstract: The self-attention mechanism of Transformers, which captures long-range contextual information, has demonstrated significant potential in image segmentation. However, their ability to learn local, contextual relationships between pixels requires further improvement. Previous methods face challenges in efficiently managing multi-scale features of different granularities from the encoder backbone, leaving room for improvement in their global representation and feature extraction capabilities. To address these challenges, we propose a novel Decoder with Multi-Head Feature Receptors (DMHFR), which receives multi-scale features from the encoder backbone and organizes them into three feature groups with different granularities: coarse, fine-grained, and full set. These groups are subsequently processed by Multi-Head Feature Receptors (MHFRs) after feature capture and modeling operations. MHFRs include two Three-Head Feature Receptors (THFRs) and one Four-Head Feature Receptor (FHFR). Each group of features is passed through these MHFRs and then fed into axial transformers, which help the model capture long-range dependencies within the features. The three MHFRs produce three distinct feature outputs. The output from the FHFR serves as auxiliary features in the prediction head, and the prediction outputs and their losses are eventually aggregated. Experimental results show that the Transformer using DMHFR outperforms 15 state-of-the-art (SOTA) methods on five public datasets. Specifically, it achieved significant improvements in mean DICE scores over the classic Parallel Reverse Attention Network (PraNet) method, with gains of 4.1%, 2.2%, 1.4%, 8.9%, and 16.3% on the CVC-ClinicDB, Kvasir-SEG, CVC-T, CVC-ColonDB, and ETIS-LaribPolypDB datasets, respectively.
Funding: Supported by the Fundamental Research Funds for the Provincial Universities of Zhejiang (No. GK249909299001-036); the National Key Research and Development Program of China (No. 2023YFB4502803); and the Zhejiang Provincial Natural Science Foundation of China (No. LDT23F01014F01).
Abstract: Due to the limitations of existing imaging hardware, obtaining high-resolution hyperspectral images is challenging. Hyperspectral image super-resolution (HSI SR) has been a very attractive research topic in computer vision, attracting the attention of many researchers. However, most HSI SR methods focus on the tradeoff between spatial resolution and spectral information, and cannot guarantee the efficient extraction of image information. In this paper, a multidimensional features network (MFNet) for HSI SR is proposed, which simultaneously learns and fuses the spatial, spectral, and frequency multidimensional features of HSI. Spatial features contain rich local details, spectral features contain the information and correlation between spectral bands, and frequency features reflect the global information of the image and can be used to obtain the global context of HSI. The fusion of the three features can better guide image super-resolution, yielding higher-quality high-resolution hyperspectral images. In MFNet, we use a frequency feature extraction module (FFEM) to extract the frequency features. On this basis, a multidimensional features extraction module (MFEM) is designed to learn and fuse multidimensional features. In addition, experimental results on two public datasets demonstrate that MFNet achieves state-of-the-art performance.
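The "frequency features reflecting global information" idea above rests on the 2D Fourier transform, where each coefficient depends on every pixel. A minimal sketch of a frequency-feature map (log-magnitude spectrum, centered with `fftshift`) follows; this is a generic illustration, not the FFEM module itself.

```python
import numpy as np

def frequency_feature(img):
    """Log-magnitude FFT spectrum, shifted so the DC (global mean) term
    sits at the center of the map."""
    spec = np.fft.fftshift(np.fft.fft2(np.asarray(img, dtype=np.float64)))
    return np.log1p(np.abs(spec))
```

For a constant image, only the centered DC coefficient is non-zero, which makes the global nature of the representation easy to see.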
Funding: Supported by the National Natural Science Foundation of China (62302167, 62477013); the Natural Science Foundation of Shanghai (No. 24ZR1456100); the Science and Technology Commission of Shanghai Municipality (No. 24DZ2305900); and the Shanghai Municipal Special Fund for Promoting High-Quality Development of Industries (2211106).
Abstract: Multi-label image classification is a challenging task due to the diverse sizes and complex backgrounds of objects in images. Obtaining class-specific precise representations at different scales is a key aspect of feature representation. However, existing methods often rely on single-scale deep features, neglecting shallow and deeper layer features, which poses challenges when predicting objects of varying scales within the same image. Although some studies have explored multi-scale features, they rarely address the flow of information between scales or efficiently obtain class-specific precise representations for features at different scales. To address these issues, we propose a two-stage, three-branch Transformer-based framework. The first stage incorporates multi-scale image feature extraction and hierarchical scale attention. This design enables the model to consider objects at various scales while enhancing the flow of information across different feature scales, improving the model's generalization to diverse object scales. The second stage includes a global feature enhancement module and a region selection module. The global feature enhancement module strengthens interconnections between different image regions, mitigating the issue of incomplete representations, while the region selection module models the cross-modal relationships between image features and labels. Together, these components enable the efficient acquisition of class-specific precise feature representations. Extensive experiments on public datasets, including COCO2014, VOC2007, and VOC2012, demonstrate the effectiveness of our proposed method. Our approach achieves consistent performance gains of 0.3%, 0.4%, and 0.2% over state-of-the-art methods on the three datasets, respectively. These results validate the reliability and superiority of our approach for multi-label image classification.
Funding: Funded by the Natural Science Foundation of Jilin Province (20220101125JC) and the National Natural Science Foundation of China (12273079).
Abstract: Lunar Laser Ranging has extremely high requirements for the pointing accuracy of the telescopes used. To improve its pointing accuracy and solve the problem of insufficiently accurate telescope pointing correction achieved by tracking stars in the all-sky region, we propose a processing scheme that selects larger-sized lunar craters near the Lunar Corner Cube Retroreflector as reference features for telescope pointing bias computation. Accurately determining the position of the craters in the images is crucial for calculating the pointing bias; therefore, we propose a method for accurately calculating the crater position based on lunar surface feature matching. This method uses matched feature points obtained from image feature matching, using a deep learning method to solve the image transformation matrix. The known position of a crater in a reference image is mapped using this matrix to calculate the crater position in the target image. We validate this method using craters near the Lunar Corner Cube Retroreflectors of Apollo 15 and Luna 17 and find that the calculated position of a crater on the target image falls on the center of the crater, even for image features with large distortion near the lunar limb. The maximum image matching error is approximately 1″, and the minimum is only 0.47″, which meets the pointing requirements of Lunar Laser Ranging. This method provides a new technical means for the high-precision pointing bias calculation of the Lunar Laser Ranging system.
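The mapping step above, carrying a known crater position from the reference image into the target image through the solved transformation matrix, amounts to applying a 3x3 projective transform to a point in homogeneous coordinates. A minimal sketch (generic geometry, not the paper's deep-learning matrix solver):

```python
import numpy as np

def map_point(H, pt):
    """Map a 2D image point through a 3x3 projective transformation matrix H."""
    v = H @ np.array([pt[0], pt[1], 1.0])  # lift to homogeneous coordinates
    return v[:2] / v[2]                    # divide out the projective scale
```

With a pure-translation matrix the result is simply the shifted point, which makes the routine easy to verify.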
Funding: Supported by the National Natural Science Foundation of China (No. 51975374).
Abstract: Recent advances in convolutional neural networks (CNN) have fostered progress in object recognition and semantic segmentation, which in turn has improved the performance of hyperspectral image (HSI) classification. Nevertheless, the difficulty of high-dimensional feature extraction and the shortage of small training samples seriously hinder the future development of HSI classification. In this paper, we propose a novel algorithm for HSI classification based on a three-dimensional (3D) CNN and a feature pyramid network (FPN), called 3D-FPN. The framework contains a principal component analysis, a feature extraction structure, and a logistic regression. Specifically, the FPN built with 3D convolutions not only retains the advantages of 3D convolution to fully extract the spectral-spatial feature maps, but also concentrates on more detailed information and performs multi-scale feature fusion. This method avoids excessive model complexity and is suitable for small-sample hyperspectral classification with varying categories and spatial resolutions. In order to test the performance of our proposed 3D-FPN method, rigorous experimental analysis was performed on three public hyperspectral data sets and hyperspectral data of the GF-5 satellite. Quantitative and qualitative results indicated that our proposed method attained the best performance among other current state-of-the-art end-to-end deep learning-based methods.
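The principal component analysis stage named above is commonly used to compress the hundreds of spectral bands before the 3D convolutions. A minimal SVD-based PCA projection sketch (standard technique, not the authors' code), treating each pixel's spectrum as a row vector:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project row vectors of X onto their top principal components."""
    Xc = X - X.mean(axis=0)                 # center each feature (band)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T         # coordinates in the PC basis
```

For points lying exactly on a line in 2D, a single component captures all the variance, so projected coordinates preserve distances along the line.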
Funding: Supported by grants from the North China University of Technology Research Start-Up Fund (11005136024XN147-14 and 110051360024XN151-97); the Guangzhou Development Zone Science and Technology Project (2023GH02); the National Key R&D Program of China (2021YFE0201100 and 2022YFA1103401 to Juntao Gao); the National Natural Science Foundation of China (981890991 to Juntao Gao); the Beijing Municipal Natural Science Foundation (Z200021 to Juntao Gao); the CAS Interdisciplinary Innovation Team (JCTD-2020-04 to Juntao Gao); and 0032/2022/A by Macao FDCT, and MYRG2022-00271-FST.
Abstract: Hematoxylin and Eosin (H&E) images, popularly used in the field of digital pathology, often pose challenges due to their limited color richness, hindering the differentiation of subtle cell features crucial for accurate classification. Enhancing the visibility of these elusive cell features helps train robust deep-learning models. However, the selection and application of image processing techniques for such enhancement have not been systematically explored in the research community. To address this challenge, we introduce Salient Features Guided Augmentation (SFGA), an approach that strategically integrates machine learning and image processing. SFGA utilizes machine learning algorithms to identify crucial features within cell images, subsequently mapping these features to appropriate image processing techniques to enhance training images. By emphasizing salient features and aligning them with corresponding image processing methods, SFGA is designed to enhance the discriminating power of deep learning models in cell classification tasks. Our research undertakes a series of experiments, each exploring the performance of different datasets and data enhancement techniques in classifying cell types, highlighting the significance of data quality and enhancement in mitigating overfitting and distinguishing cell characteristics. Specifically, SFGA focuses on identifying tumor cells from tissue for extranodal extension detection, with the SFGA-enhanced dataset showing notable advantages in accuracy. We conducted a preliminary study of five experiments, among which the accuracy of the pleomorphism experiment improved significantly from 50.81% to 95.15%. The accuracy of the other four experiments also increased, with improvements ranging from 3 to 43 percentage points. Our preliminary study shows the possibility of enhancing the diagnostic accuracy of deep learning models and proposes a systematic approach that could enhance cancer diagnosis, contributing a first step toward using SFGA in medical image enhancement.
Funding: Funded by the Henan Science and Technology Research Project (222103810042); supported by the Open Project of the Scientific Research Platform of the Grain Information Processing Center of Henan University of Technology (KFJJ-2021-108); the Innovative Funds Plan of Henan University of Technology (2021ZKCJ14); and the Henan University of Technology Youth Backbone Teacher Program.
Abstract: The application of Transformer networks and feature fusion models in medical image segmentation has aroused considerable attention within the academic circle. Nevertheless, two main obstacles persist: (1) the restrictions of the Transformer network in dealing with locally detailed features, and (2) the considerable loss of feature information in current feature fusion modules. To solve these issues, this study initially presents a refined feature extraction approach, employing a double-branch feature extraction network to capture complex multi-scale local and global information from images. Subsequently, we propose a low-loss feature fusion method, the Multi-branch Feature Fusion Enhancement Module (MFFEM), which realizes effective feature fusion with minimal loss. Simultaneously, the cross-layer cross-attention fusion module (CLCA) is adopted to further achieve adequate feature fusion by enhancing the interaction between encoders and decoders of various scales. Finally, the feasibility of our method was verified using the Synapse and ACDC datasets, demonstrating its competitiveness. The average DSC (%) was 83.62 and 91.99, respectively, and the average HD95 (mm) was reduced to 19.55 and 1.15, respectively.
Abstract: A wavelet-based local and global feature fusion network (LAGN) is proposed for low-light image enhancement, aiming to enhance image details and restore colors in dark areas. This study focuses on addressing three key issues in low-light image enhancement: enhancing low-light images using LAGN to preserve image details and colors; extracting image edge information via the wavelet transform to enhance image details; and extracting local and global features of images through convolutional neural networks and a Transformer to improve image contrast. Comparisons with state-of-the-art methods on two datasets verify that LAGN achieves the best performance in terms of details, brightness, and contrast.
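The wavelet step above extracts edge information because the detail subbands of a wavelet decomposition respond to intensity changes. A minimal one-level 2D Haar transform sketch (a generic wavelet, chosen here for brevity; the paper does not specify its wavelet basis):

```python
import numpy as np

def haar2d(img):
    """One-level 2D Haar transform: LL approximation plus LH, HL, HH
    detail subbands (edges appear in the detail subbands)."""
    a = np.asarray(img, dtype=np.float64)
    lo = (a[:, 0::2] + a[:, 1::2]) / 2.0   # row-wise averages
    hi = (a[:, 0::2] - a[:, 1::2]) / 2.0   # row-wise differences
    ll = (lo[0::2] + lo[1::2]) / 2.0
    lh = (lo[0::2] - lo[1::2]) / 2.0
    hl = (hi[0::2] + hi[1::2]) / 2.0
    hh = (hi[0::2] - hi[1::2]) / 2.0
    return ll, lh, hl, hh
```

A constant (edge-free) image produces all-zero detail subbands, confirming that the details encode only local variation.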
Funding: Co-supported by the National Natural Science Foundation of China (No. 62103190) and the Natural Science Foundation of Jiangsu Province, China (No. BK20230923).
Abstract: When detecting objects in images taken by Unmanned Aerial Vehicles (UAVs), the large number of objects and high proportion of small objects bring huge challenges for detection algorithms based on the You Only Look Once (YOLO) framework, rendering them ill-suited to tasks that demand high precision. To address these problems, this paper proposes a high-precision object detection algorithm based on YOLOv10s. Firstly, a Multi-branch Enhancement Coordinate Attention (MECA) module is proposed to enhance feature extraction capability. Secondly, a Multilayer Feature Reconstruction (MFR) mechanism is designed to fully exploit multilayer features, which can enrich object information as well as remove redundant information. Finally, an MFR Path Aggregation Network (MFR-Neck) is constructed, which integrates multi-scale features to improve the network's ability to perceive objects of varying sizes. The experimental results demonstrate that the proposed algorithm increases the average detection accuracy by 14.15% on the VisDrone dataset compared to YOLOv10s, effectively enhancing object detection precision in UAV-taken images.
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62202234, 62401270); the China Postdoctoral Science Foundation (No. 2023M741778); and the Natural Science Foundation of Jiangsu Province (Nos. BK20240706, BK20240694).
Abstract: Generative image steganography is a technique that directly generates stego images from secret information. Unlike traditional methods, it theoretically resists steganalysis because there is no cover image. Currently, existing generative image steganography methods generally have good steganography performance, but there is still room for enhancing both the quality of stego images and the accuracy of secret information extraction. Therefore, this paper proposes a generative image steganography algorithm based on attribute feature transformation and an invertible mapping rule. Firstly, the reference image is disentangled by a content encoder and an attribute encoder to obtain content features and attribute features, respectively. Then, a mean mapping rule is introduced to map the binary secret information into a noise vector conforming to the distribution of attribute features. This noise vector is input into the generator to produce the attribute-transformed stego image with the content features of the reference image. Additionally, we design an adversarial loss, a reconstruction loss, and an image diversity loss to train the proposed model. Experimental results demonstrate that the stego images generated by the proposed method are of high quality, with an average extraction accuracy of 99.4% for the hidden information. Furthermore, since the stego image has a distribution similar to that of an attribute-transformed image without secret information, it effectively resists both subjective and objective steganalysis.
Funding: Supported by the Key Program of the National Natural Science Foundation of China (Grant No. 62031013) and the Guangdong Province Key Construction Discipline Scientific Research Capacity Improvement Project (Grant No. 2022ZDJS117).
Abstract: Nonlinear transforms have significantly advanced learned image compression (LIC), particularly through residual blocks. This transform enhances the nonlinear expression ability and obtains a compact feature representation by enlarging the receptive field, which governs how the convolution process extracts features in a high-dimensional feature space. However, its functionality is restricted to the spatial dimension and network depth, limiting further improvements in network performance due to insufficient information interaction and representation. Crucially, the potential of the high-dimensional feature space in the channel dimension and the exploration of network width/resolution remain largely untapped. In this paper, we consider nonlinear transforms from the perspective of feature space, defining high-dimensional feature spaces in different dimensions and investigating their specific effects. Firstly, we introduce dimension-increasing and dimension-decreasing transforms in both channel and spatial dimensions to obtain a high-dimensional feature space and achieve better feature extraction. Secondly, we design a channel-spatial fusion residual transform (CSR), which incorporates multi-dimensional transforms for a more effective representation. Furthermore, we simplify the proposed fusion transform to obtain a slim architecture (CSR-sm), balancing network complexity and compression performance. Finally, we build the overall network with stacked CSR transforms to achieve better compression and reconstruction. Experimental results demonstrate that the proposed method achieves superior rate-distortion performance compared to existing LIC methods and traditional codecs. Specifically, our proposed method achieves a 9.38% BD-rate reduction over VVC on the Kodak dataset.
Funding: Supported by the National Natural Science Foundation of China (Nos. 62276210, 82201148 and 62376215); the Key Research and Development Project of Shaanxi Province (No. 2025CY-YBXM-044); the Natural Science Foundation of Zhejiang Province (No. LQ22H120002); the Medical Health Science and Technology Project of Zhejiang Province (Nos. 2022RC069 and 2023KY1140); the Natural Science Foundation of Ningbo (No. 2023J390); and the Ningbo Top Medical and Health Research Program (No. 2023030716).
Abstract: This paper proposes a novel method for the automatic diagnosis of keratitis using feature vector quantization and self-attention mechanisms (ADK_FVQSAM). First, high-level features are extracted using the DenseNet121 backbone network, followed by adaptive average pooling to scale the features to a fixed length. Subsequently, product quantization with residuals (PQR) is applied to convert continuous feature vectors into discrete feature representations, preserving essential information insensitive to image quality variations. The quantized and original features are concatenated and fed into a self-attention mechanism to capture keratitis-related features. Finally, these enhanced features are classified through a fully connected layer. Experiments on clinical low-quality (LQ) images show that ADK_FVQSAM achieves accuracies of 87.7%, 81.9%, and 89.3% for keratitis, other corneal abnormalities, and normal corneas, respectively. Compared to DenseNet121, Swin Transformer, and InceptionResNet, ADK_FVQSAM improves average accuracy by 3.1%, 11.3%, and 15.3%, respectively. These results demonstrate that ADK_FVQSAM significantly enhances the recognition performance of keratitis based on LQ slit-lamp images, offering a practical approach for clinical application.
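The core of the product quantization step above is splitting a feature vector into subvectors and replacing each with the nearest centroid from a per-subspace codebook. A minimal encode/decode sketch of plain product quantization (without the residual stage the paper adds; codebooks here are given, not learned):

```python
import numpy as np

def pq_encode(x, codebooks):
    """Encode a vector as one nearest-centroid index per subspace."""
    subs = np.split(np.asarray(x, dtype=np.float64), len(codebooks))
    return [int(np.argmin(np.linalg.norm(cb - s, axis=1)))
            for cb, s in zip(codebooks, subs)]

def pq_decode(codes, codebooks):
    """Reconstruct the quantized vector by concatenating the chosen centroids."""
    return np.concatenate([cb[c] for cb, c in zip(codebooks, codes)])
```

A vector that coincides with stored centroids round-trips exactly, which is the degenerate sanity case.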
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52379103, 52279103) and the Natural Science Foundation of Shandong Province (Grant No. ZR2023YQ049).
Abstract: Imaging hyperspectral technology has the distinctive advantages of non-destructive, non-contact measurement and the integration of spectral and spatial data. These characteristics present new methodologies for intelligent geological sensing in tunnels and other underground engineering projects. However, the in situ acquisition and rapid classification of hyperspectral images underground still face great challenges, including the difficulty of obtaining uniform hyperspectral images and the complexity of deploying sophisticated models on mobile platforms. This study proposes an intelligent lithology identification method based on partition feature extraction of hyperspectral images. Firstly, pixel-level hyperspectral information from representative lithological regions is extracted and fused to obtain rock hyperspectral image partition features. Subsequently, an SG-SNV-PCA-DNN (SSPD) model, specifically designed for optimizing rock hyperspectral data, performing spectral dimensionality reduction, and identifying lithology, is integrated. In an experimental study involving 3420 hyperspectral images, the SSPD identification model achieved the highest accuracy in the testing set, reaching 98.77%. Moreover, the SSPD model was found to be 18.5% faster than the unprocessed model, with an accuracy improvement of 5.22%. In contrast, the ResNet-101 model, used for point-by-point identification based on non-partitioned features, achieved a maximum accuracy of 97.86% in the testing set. In addition, the partition feature extraction methods significantly reduce computational complexity. An objective evaluation of various models demonstrated that the SSPD model exhibited superior performance, achieving a precision (P) of 99.46%, a recall (R) of 99.44%, and an F1 score (F1) of 99.45%. Additionally, pioneering in situ detection work was carried out in a tunnel using underground hyperspectral imaging technology.
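The SNV stage of the SG-SNV-PCA-DNN pipeline above is Standard Normal Variate scaling, a standard spectral preprocessing step that centers and scales each spectrum to suppress scatter and illumination effects. A minimal sketch of the standard transform (not the authors' implementation):

```python
import numpy as np

def snv(spectra):
    """Standard Normal Variate: scale each spectrum (row) to zero mean
    and unit standard deviation; constant rows are left centered at zero."""
    s = np.asarray(spectra, dtype=np.float64)
    mu = s.mean(axis=1, keepdims=True)
    sd = s.std(axis=1, keepdims=True)
    return (s - mu) / np.where(sd == 0, 1.0, sd)  # guard flat spectra
```

After SNV, every non-constant spectrum has mean 0 and standard deviation 1 regardless of its original offset or gain.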
Funding: Supported by the National Natural Science Foundation of China (No. 51175267); the Natural Science Foundation of Jiangsu Province (No. BK2010481); the Ph.D. Programs Foundation of the Ministry of Education of China (No. 20113219120004); the China Postdoctoral Science Foundation (No. 20100481148); and the Postdoctoral Science Foundation of Jiangsu Province (No. 1001004B).
Abstract: To realize high-precision automatic measurement of two-dimensional geometric features on parts, a cooperative measurement system based on machine vision is constructed. Its hardware structure, functional composition, and working principle are introduced. The mapping relationship between the feature image coordinates and the measuring space coordinates is established. A method of measuring-path planning for small field of view (FOV) images is proposed. With the cooperation of the panoramic image of the object to be measured, small FOV images with high object-plane resolution are acquired automatically. Then, the auxiliary measuring characteristics are constructed and the parameters of the features to be measured are automatically extracted. Experimental results show that the absolute value of the relative error is less than 0.03% when applying the cooperative measurement system to gauge a hole distance of 100 mm nominal size. When the object-plane resolving power of the small FOV images is 16 times that of the large FOV image, the measurement accuracy of the small FOV images is improved by 14 times compared with the large FOV image. The system is suitable for high-precision automatic measurement of two-dimensional complex geometric features distributed on large-scale parts.
Abstract: An improved estimation of the motion vectors of feature points is proposed for tracking moving objects in a dynamic image sequence. Feature points are first extracted by the improved minimum intensity change (MIC) algorithm. The matching points of these feature points are then determined by adaptive rood pattern searching. Based on the random sample consensus (RANSAC) method, the background motion is finally compensated using the parameters of an affine transform of the background motion. With reasonable morphological filtering, the moving objects are completely extracted from the background and then tracked accurately. Experimental results show that the improved method succeeds at motion background compensation and offers great promise in tracking moving objects in dynamic image sequences.
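The RANSAC step above fits a background motion model while ignoring matches that belong to the moving objects. As a reduced illustration of the same hypothesize-and-verify idea, the sketch below estimates a pure 2D translation (a one-point model) instead of the paper's full affine transform; all names are illustrative.

```python
import numpy as np

def ransac_translation(src, dst, tol=1.0, iters=100, seed=0):
    """Estimate the dominant translation between matched point sets,
    robust to outlier matches from moving objects."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = np.zeros(2), -1
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]  # hypothesize: one match fixes a translation
        inliers = np.sum(np.linalg.norm(src + t - dst, axis=1) < tol)
        if inliers > best_inliers:  # verify: keep the best-supported model
            best_inliers, best_t = inliers, t
    return best_t
```

An affine model would instead sample three matches per iteration and solve a small linear system, but the consensus-counting loop is identical.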
Abstract: An approach for color image segmentation is proposed based on the contributions of color features to segmentation rather than the choice of a particular color space. The determination of effective color features depends on the analysis of various color features from each tested color image via the designed feature encoding. It differs from previous methods in that a self-organized feature map (SOFM) is used for constructing the feature encoding, so that the encoding can self-organize the effective features for different color images. Fuzzy clustering is applied for the final segmentation once the well-suited color features and the initial parameters are available. The proposed method has been applied to segmenting different types of color images, and the experimental results show that it outperforms the classical clustering method. The study shows that the feature encoding approach offers great promise in automating and optimizing the segmentation of color images.
Funding: Supported by the Second Batch of Key Textbook Construction Projects of the "14th Five-Year Plan" of Zhejiang Vocational Colleges (SZDJC-2412).
Abstract: Roadbed disease detection is essential for maintaining road functionality. Ground penetrating radar (GPR) enables non-destructive detection without drilling. However, current identification often relies on manual inspection, which requires extensive experience, suffers from low efficiency, and is highly subjective. As the results are presented as radar images, image processing methods can be applied for fast and objective identification. Deep learning-based approaches now offer a robust solution for automated roadbed disease detection. This study proposes an enhanced Faster Region-based Convolutional Neural Network (R-CNN) framework integrating ResNet-50 as the backbone and two-dimensional discrete Fourier spectrum transformation (2D-DFT) for frequency-domain feature fusion. A dedicated GPR image dataset comprising 1650 annotated images was constructed and augmented to 6600 images via median filtering, histogram equalization, and binarization. The proposed model segments defect regions, applies binary masking, and fuses frequency-domain features to improve small-target detection under noisy backgrounds. Experimental results show that the improved Faster R-CNN achieves a mean Average Precision (mAP) of 0.92, representing a 0.22 increase over the baseline. Precision improved by 26% while recall remained stable at 87%. The model was further validated on real urban road data, demonstrating robust detection capability even under interference. These findings highlight the potential of combining GPR with deep learning for efficient, non-destructive roadbed health monitoring.
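Of the three augmentation operations named above, histogram equalization is the one with a compact closed form: remap each gray level through the normalized cumulative histogram so intensities spread over the full range. A minimal sketch for 8-bit grayscale images (standard algorithm, not the authors' pipeline):

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization for an 8-bit grayscale image: build the CDF
    of gray levels and use it as a lookup table stretching to [0, 255]."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                         # first occupied level
    lut = np.round((cdf - cdf_min) / max(cdf[-1] - cdf_min, 1) * 255.0)
    return lut.astype(np.uint8)[img]
```

The lowest occupied gray level maps to 0 and the highest to 255, so low-contrast GPR imagery gets stretched across the full dynamic range.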