Gamma-ray imaging systems are powerful tools in radiographic diagnosis. However, the recorded images suffer from degradations such as noise, blurring, and downsampling, and consequently fail to meet high-precision diagnostic requirements. In this paper, we propose a novel single-image super-resolution algorithm to enhance the spatial resolution of gamma-ray imaging systems. A mathematical model of the gamma-ray imaging system is established based on maximum a posteriori estimation. Within the plug-and-play framework, the half-quadratic splitting method is employed to decouple the data fidelity term and the regularization term. An image denoiser using convolutional neural networks is adopted as an implicit image prior, referred to as a deep denoiser prior, eliminating the need to explicitly design a regularization term. Furthermore, the impact of the image boundary condition on reconstruction results is considered, and a method for estimating image boundaries is introduced. The results show that the proposed algorithm effectively suppresses boundary artifacts. By increasing the pixel number of the reconstructed images, the proposed algorithm is capable of recovering more details. Notably, in both simulation and real experiments, the proposed algorithm is demonstrated to achieve subpixel resolution, surpassing the Nyquist sampling limit determined by the camera pixel size.
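The half-quadratic splitting loop described above alternates between a data-fidelity update and a denoising step. A minimal sketch of that plug-and-play iteration is given below; the forward operator, the stand-in denoiser, the step size, and the penalty weight mu are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def pnp_hqs(y, forward, adjoint, denoiser, mu=0.05, step=0.5, iters=30, inner=5):
        # Plug-and-play half-quadratic splitting: alternate a gradient-based
        # data-fidelity update on x with a denoising step on the auxiliary variable z.
        x = adjoint(y)
        z = x.copy()
        for _ in range(iters):
            for _ in range(inner):
                grad = adjoint(forward(x) - y) + mu * (x - z)
                x = x - step * grad
            z = denoiser(x)   # the denoiser stands in for the regularization term
        return x

    # Toy usage: Gaussian blur as the degradation and a mild Gaussian smoother as a
    # stand-in for the CNN denoiser prior (both are assumptions for illustration).
    blur = lambda img: gaussian_filter(img, sigma=2.0)
    clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0
    observed = blur(clean) + 0.01 * np.random.randn(64, 64)
    restored = pnp_hqs(observed, forward=blur, adjoint=blur,
                       denoiser=lambda img: gaussian_filter(img, sigma=0.8))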
A new first-order optimality condition for the basis pursuit denoising (BPDN) problem is derived. This condition provides a new approach to choosing the penalty parameters adaptively for a fixed-point iteration algorithm. Meanwhile, the result is extended to matrix completion, a new field that follows closely on the heels of compressed sensing. Numerical experiments on sparse vector recovery and low-rank matrix completion show the validity of the theoretical results.
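For context, the penalized form of BPDN is commonly solved by soft-thresholding (fixed-point) iterations, where the threshold is set by the penalty parameter the abstract refers to. Below is a minimal ISTA-style sketch with a hand-picked, non-adaptive penalty; it illustrates the role of that parameter rather than the paper's adaptive rule.

    import numpy as np

    def soft_threshold(v, t):
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def ista_bpdn(A, b, lam, n_iter=200):
        # Solves the penalized BPDN form: min_x 0.5*||Ax - b||^2 + lam*||x||_1
        L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            grad = A.T @ (A @ x - b)
            x = soft_threshold(x - grad / L, lam / L)
        return x

    # Toy usage on a random sparse recovery problem.
    A = np.random.randn(40, 100)
    x_true = np.zeros(100); x_true[[3, 17, 42]] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * np.random.randn(40)
    x_hat = ista_bpdn(A, b, lam=0.1)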
Existing deep learning-based point cloud denoising methods are generally trained in a supervised manner that requires clean data as ground-truth labels. However, in practice, it is not always feasible to obtain clean point clouds. In this paper, we introduce a novel unsupervised point cloud denoising method that eliminates the need to use clean point clouds as ground-truth labels during training. We demonstrate that it is feasible for neural networks to take only noisy point clouds as input and learn to approximate and restore their clean versions. In particular, we generate two noise levels for the original point clouds, requiring the second noise level to be twice the amount of the first. With this, we can deduce the relationship between the displacement information that recovers the clean surfaces across the two levels of noise, and thus learn the displacement of each noisy point in order to recover the corresponding clean point. Comprehensive experiments demonstrate that our method achieves outstanding denoising results across various datasets with synthetic and real-world noise, obtaining better performance than previous unsupervised methods and competitive performance with current supervised methods.
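A sketch of the data preparation implied by the two-level scheme is given below: each training cloud is corrupted at a base noise level and again at twice that level, and the network later learns the displacement relating the two. The base sigma and the Gaussian noise model are assumptions for illustration; the network and displacement loss are omitted.

    import numpy as np

    def two_level_noisy_pairs(points, sigma=0.01, rng=None):
        # points: (N, 3) array; returns two corrupted copies of the same cloud,
        # the second at twice the base noise level, as the abstract requires.
        rng = np.random.default_rng() if rng is None else rng
        level_1 = points + sigma * rng.standard_normal(points.shape)
        level_2 = points + 2.0 * sigma * rng.standard_normal(points.shape)
        return level_1, level_2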
Myocardial perfusion imaging (MPI), which uses single-photon emission computed tomography (SPECT), is a well-known estimation tool for medical diagnosis, employing the classification of images to reveal conditions of coronary artery disease (CAD). The automatic classification of SPECT images with different techniques has achieved near-optimal accuracy when using convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Image denoising is performed by a U-Net architecture that ensures effective noise removal. Attenuation correction is implemented by a convolutional neural network model that removes the attenuation affecting the feature extraction process of classification. Finally, a novel multi-scale diluted convolution (MSDC) network is proposed. It merges the features extracted at different scales and allows the model to learn the features more efficiently. Three scales of 3×3 filters are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture ensures a high-quality image with the highest peak signal-to-noise ratio (PSNR) value of 39.7. The proposed classification method is compared with five different CNN models and ensures better classification, with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
In the field of image processing, the analysis of Synthetic Aperture Radar (SAR) images is crucial due to its broad range of applications. However, SAR images are often affected by coherent speckle noise, which significantly degrades image quality. Traditional denoising methods, typically based on filtering techniques, often face challenges related to inefficiency and limited adaptability. To address these limitations, this study proposes a novel SAR image denoising algorithm based on an enhanced residual network architecture, with the objective of increasing the utility of SAR imagery in complex electromagnetic environments. The proposed algorithm integrates residual network modules that directly process the noisy input images to generate denoised outputs. This approach not only reduces computational complexity but also mitigates the difficulties associated with model training. By combining a Transformer module with the residual block, the algorithm enhances the network's ability to extract global features, offering superior feature extraction capabilities compared to CNN-based residual modules. Additionally, the algorithm employs the adaptive activation function Meta-ACON, which dynamically adjusts the activation patterns of neurons, thereby improving the network's feature extraction efficiency. The effectiveness of the proposed denoising method is empirically validated using real SAR images from the RSOD dataset. The proposed algorithm exhibits remarkable performance in terms of EPI, SSIM, and ENL, while achieving a substantial enhancement in PSNR compared to traditional and deep learning-based algorithms; the PSNR is improved more than twofold. Moreover, evaluation on the MSTAR SAR dataset substantiates the algorithm's robustness and applicability in SAR denoising tasks, attaining a PSNR of 25.2021. These findings underscore the efficacy of the proposed algorithm in mitigating speckle noise while preserving critical features in SAR imagery, thereby enhancing its quality and usability in practical scenarios.
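The global residual connection described above, where the network predicts the noise component and subtracts it from the input, can be sketched as follows in PyTorch. The layer count, feature width, and plain convolutional body are placeholder assumptions; the paper's Transformer-augmented blocks and Meta-ACON activation are not reproduced here.

    import torch
    import torch.nn as nn

    class ResidualDenoiser(nn.Module):
        # Minimal residual-learning sketch: the body estimates the speckle/noise
        # component, which is subtracted from the input image.
        def __init__(self, channels=1, features=64, depth=5):
            super().__init__()
            layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
            for _ in range(depth - 2):
                layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU(inplace=True)]
            layers += [nn.Conv2d(features, channels, 3, padding=1)]
            self.body = nn.Sequential(*layers)

        def forward(self, noisy):
            return noisy - self.body(noisy)   # global residual connection

    denoised = ResidualDenoiser()(torch.randn(1, 1, 128, 128))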
The visual noise of each light intensity area differs when an image is rendered by the Monte Carlo method. However, existing denoising algorithms have limited denoising performance under complex lighting conditions and tend to lose detailed information. Therefore, we propose a rendered image denoising method with filtering guided by lighting information. First, we design an image segmentation algorithm based on lighting information to segment the image into different illumination areas. Then, we establish a parameter prediction model guided by lighting information for filtering (PGLF) to predict the filtering parameters of the different illumination areas. For each illumination area, we use these filtering parameters to construct an area filter, and the filters are guided by the lighting information to perform sub-area filtering. Finally, the filtering results are fused with auxiliary features to output denoised images, improving the overall denoising effect. On physically based rendering tool (PBRT) scenes and the Tungsten dataset, the experimental results show that, compared with other guided filtering denoising methods, our method improves the peak signal-to-noise ratio (PSNR) by 4.2164 dB on average and the structural similarity index (SSIM) by 7.8% on average. This shows that our method can better reduce noise in complex lighting scenes and improve image quality.
The growing complexity of cyber threats requires innovative machine learning techniques, and image-based malware classification opens up new possibilities. Meanwhile, existing research has largely overlooked the impact of the noise and obfuscation techniques commonly employed by malware authors to evade detection, and there is a critical gap in using noise simulation to replicate real-world malware obfuscation techniques and in adopting a denoising framework to counteract these challenges. This study introduces an image denoising technique based on a U-Net combined with a GAN framework to address noise interference and obfuscation challenges in image-based malware analysis. The proposed methodology addresses existing classification limitations by introducing noise addition, which simulates obfuscated malware, and denoising strategies to restore robust image representations. To evaluate the approach, we used multiple CNN-based classifiers to assess noise resistance across architectures and datasets, measuring significant performance variation. Our denoising technique demonstrates remarkable performance improvements across two multi-class public datasets, MALIMG and BIG-15. For example, MALIMG classification accuracy improved from 23.73% to 88.84% when denoising was applied after Gaussian noise injection, demonstrating robustness. This approach contributes to improving malware detection by offering a robust framework for noise-resilient classification in noisy conditions.
To enable proper diagnosis of a patient, medical images must be free of noise and artifacts. The major hurdle lies in acquiring these images in such a manner that extraneous variables, which cause distortions in the form of noise and artifacts, are kept to a bare minimum. Unexpected changes during the acquisition process directly compromise image quality and indirectly undermine the effectiveness of the diagnostic process, so they must be addressed efficiently and with the relevant expertise. These challenges cannot be fully resolved at the acquisition stage, which is why image processing techniques must be adopted. The necessity of this mandatory image pre-processing step underpins the implementation of traditional and state-of-the-art methods to create functional and robust denoising or recovery tools. This article provides an extensive systematic review of such techniques, with the purpose of presenting a systematic evaluation of their effect on medical images under three different noise distributions, i.e., Gaussian, Poisson, and Rician. A thorough analysis of these methods is conducted using eight evaluation parameters to highlight the unique features of each method. The covered denoising methods are essential in real clinical scenarios where the preservation of anatomical details is crucial for accurate and safe diagnosis, such as tumor detection in MRI and vascular imaging in CT.
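Since the review's evaluation hinges on the three noise distributions named above, a minimal sketch of how such corruptions are typically simulated is given below. The parameter values and the simple Rician construction (magnitude of a complex Gaussian-perturbed signal) are illustrative assumptions, not the article's protocol.

    import numpy as np

    def add_noise(img, kind="gaussian", sigma=0.05, peak=255.0, rng=None):
        # img is assumed scaled to [0, 1]; sigma/peak are illustrative parameters.
        rng = np.random.default_rng() if rng is None else rng
        if kind == "gaussian":
            return img + rng.normal(0.0, sigma, img.shape)
        if kind == "poisson":
            return rng.poisson(img * peak) / peak          # photon-counting noise
        if kind == "rician":
            re = img + rng.normal(0.0, sigma, img.shape)   # real channel
            im = rng.normal(0.0, sigma, img.shape)         # imaginary channel
            return np.sqrt(re ** 2 + im ** 2)              # magnitude image
        raise ValueError(kind)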
Automatically recognizing radar emitters from complex electromagnetic environments is important but non-trivial. Moreover, the changing electromagnetic environment results in inconsistent signal distributions in the real world, which makes existing approaches perform poorly on recognition tasks in different scenes. In this paper, a domain generalization framework is proposed to improve the adaptability of radar emitter signal recognition in changing environments. Specifically, we propose an end-to-end denoising-based domain-invariant radar emitter recognition network (DDIRNet) consisting of a denoising model and a domain-invariant representation learning model (IRLM), which mutually benefit from each other. For the signal denoising model, a loss function is proposed to match the features of the radar signals and guarantee the effectiveness of the model. For the domain-invariant representation learning model, contrastive learning is introduced to learn cross-domain features by aligning the source and unseen domain distributions. Moreover, we design a data augmentation method that improves the diversity of the signal data used for training. Extensive classification experiments show that DDIRNet achieves up to a 6.4% improvement compared with state-of-the-art radar emitter recognition methods. The proposed method provides a promising direction for solving the radar emitter signal recognition problem.
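Contrastive alignment of the kind mentioned above is often realized with an InfoNCE-style objective, where two views of the same signal are pulled together in the embedding space and other signals in the batch are pushed apart. The sketch below shows that generic loss; the temperature and the pairing scheme are assumptions, not DDIRNet's exact formulation.

    import torch
    import torch.nn.functional as F

    def info_nce(z_a, z_b, temperature=0.1):
        # z_a, z_b: (batch, dim) embeddings of two views of the same signals.
        z_a = F.normalize(z_a, dim=1)
        z_b = F.normalize(z_b, dim=1)
        logits = z_a @ z_b.t() / temperature                      # pairwise similarities
        targets = torch.arange(z_a.size(0), device=z_a.device)    # positives on the diagonal
        return F.cross_entropy(logits, targets)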
In modern industrial design trends featuring integration, miniaturization, and versatility, there is a growing demand for microstructural array devices. The measurement of such microstructural array components, whether by contact or non-contact optical approaches, often encounters challenges due to their reduced scale and complex structures. Among these microstructural arrays, there are still no optical measurement methods for micro corner-cube reflector arrays. To solve this problem, this study introduces a method for effectively eliminating coherent noise and achieving surface profile reconstruction in interference measurements of microstructural arrays. The proposed denoising method allows the calibration and inverse solution of system errors in the frequency domain by employing standard components with known surface types. This enables effective compensation of the complex amplitude of non-sample coherent light within the interferometer optical path. The proposed surface reconstruction method enables profile calculation even when rays undergo complex multiple reflections as they propagate within microstructural arrays. Based on the measurement results, two novel metrics are defined to estimate diffraction errors at array junctions and comprehensive errors across multiple array elements, offering insights for other types of microstructure devices. This research not only addresses the challenges of coherent noise and multi-reflection, but also represents a breakthrough in quantitative optical interference measurement of microstructural array devices.
To enhance the denoising performance of event-based sensors, we introduce a clustering-based temporal deep neural network denoising method (CBTDNN). First, a combination of density-based spatial clustering of applications with noise (DBSCAN) and K-means++ is used to cluster the sensor output data and obtain the respective cluster centers. Subsequently, long short-term memory (LSTM) is employed to fit and yield optimized cluster centers with temporal information. Lastly, based on the new cluster centers and a denoising ratio, a radius threshold is set, and noise points beyond this threshold are removed. The comprehensive denoising metric, F1-score, of CBTDNN reaches 0.8931, 0.7735, and 0.9215 on the traffic sequences dataset, the pedestrian detection dataset, and the turntable dataset, respectively. These results represent improvements of 49.90%, 33.07%, 19.31%, and 22.97% over four comparison algorithms, namely nearest neighbor (NNb), nearest neighbor with polarity (NNp), Autoencoder, and the multilayer perceptron denoising filter (MLPF). These results demonstrate that the proposed method enhances the denoising performance of event-based sensors.
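The cluster-then-threshold idea in the pipeline above can be sketched with off-the-shelf clustering: DBSCAN proposes dense regions, K-means++ refines the centers, and events farther from their nearest center than a radius derived from the desired denoising ratio are discarded. The LSTM refinement stage is omitted, and the parameter values and cluster-count heuristic are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import DBSCAN, KMeans

    def cluster_threshold_denoise(events, eps=3.0, min_samples=10, keep_ratio=0.9):
        # events: (N, 2) array of (x, y) event coordinates.
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit(events).labels_
        core = events[labels >= 0]
        if len(core) == 0:
            core = events                                     # fall back if DBSCAN rejects all
        n_clusters = max(1, len(core) // 500)                 # assumed heuristic
        centers = KMeans(n_clusters=n_clusters, init="k-means++",
                         n_init=10).fit(core).cluster_centers_
        dist = np.min(np.linalg.norm(events[:, None, :] - centers[None, :, :], axis=2), axis=1)
        radius = np.quantile(dist, keep_ratio)                # radius set by the denoising ratio
        return events[dist <= radius]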
Terahertz imaging technology has great potential in applications such as remote sensing, navigation, and security checks. However, terahertz images usually suffer from heavy noise and low resolution. Previous terahertz image denoising methods are mainly based on traditional image processing and have limited effect on terahertz noise. Existing deep learning-based image denoising methods are mostly designed for natural images and easily cause a large amount of detail loss when denoising terahertz images. Here, a residual-learning-based multiscale hybrid-convolution residual network (MHRNet) is proposed for terahertz image denoising, which removes noise while preserving detail features in terahertz images. Specifically, a multiscale hybrid-convolution residual block (MHRB) is designed to extract rich detail features and locally predict the residual noise in terahertz images. The MHRB is a residual structure composed of a multiscale dilated convolution block, a bottleneck layer, and a multiscale convolution block. MHRNet uses the MHRB and global residual learning to achieve terahertz image denoising. Ablation studies are performed to validate the effectiveness of the MHRB. A series of experiments is conducted on public terahertz image datasets. The experimental results demonstrate that MHRNet has an excellent denoising effect on both synthetic and real noisy terahertz images. Compared with existing methods, MHRNet achieves comprehensively competitive results.
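A minimal PyTorch sketch of a multiscale dilated convolution block in the spirit of the MHRB is shown below: parallel branches with different dilation rates are concatenated and fused by a 1×1 bottleneck inside a residual connection. The branch count, dilation rates, and channel widths are assumptions, not the published architecture.

    import torch
    import torch.nn as nn

    class MultiScaleDilatedBlock(nn.Module):
        def __init__(self, channels=64, dilations=(1, 2, 4)):
            super().__init__()
            self.branches = nn.ModuleList(
                nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
            )
            self.bottleneck = nn.Conv2d(channels * len(dilations), channels, 1)  # fuse branches
            self.act = nn.ReLU(inplace=True)

        def forward(self, x):
            multi = torch.cat([self.act(branch(x)) for branch in self.branches], dim=1)
            return x + self.bottleneck(multi)       # residual connection

    features = MultiScaleDilatedBlock()(torch.randn(1, 64, 32, 32))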
Geochemical survey data are essential across Earth Science disciplines but are often affected by noise, which can obscure important geological signals and compromise subsequent prediction and interpretation. Quantifying prediction uncertainty is hence crucial for robust geoscientific decision-making. This study proposes a novel deep learning framework, the Spatially Constrained Variational Autoencoder (SC-VAE), for denoising geochemical survey data with integrated uncertainty quantification. The SC-VAE incorporates spatial regularization, which enforces spatial coherence by modeling inter-sample relationships directly within the latent space. The performance of the SC-VAE was systematically evaluated against a standard Variational Autoencoder (VAE) using geochemical data from the gold polymetallic district in the northwestern part of Sichuan Province, China. Both models were optimized using Bayesian optimization, with objective functions specifically designed to maintain essential geostatistical characteristics. Evaluation metrics include variogram analysis, quantitative measures of spatial interpolation accuracy, visual assessment of denoised maps, and statistical analysis of data distributions, as well as decomposition of uncertainties. Results show that the SC-VAE achieves superior noise suppression and better preservation of spatial structure compared to the standard VAE, as demonstrated by a significant reduction in the variogram nugget effect and an increased partial sill. The SC-VAE produces denoised maps with clearer anomaly delineation and more regularized data distributions, effectively mitigating outliers and reducing kurtosis. Additionally, it delivers improved interpolation accuracy and spatially explicit uncertainty estimates, facilitating more reliable and interpretable assessments of prediction confidence. The SC-VAE framework thus provides a robust, geostatistically informed solution for enhancing the quality and interpretability of geochemical data, with broad applicability in mineral exploration, environmental geochemistry, and other Earth Science domains.
Air target intent recognition holds significant importance in aiding commanders to assess battlefield situations and secure a competitive edge in decision-making. Progress in this domain has been hindered by challenges posed by imbalanced battlefield data and the limited robustness of traditional recognition models. Inspired by the success of diffusion models in addressing visual domain sample imbalances, this paper introduces a new approach that utilizes the Markov Transfer Field (MTF) method for time series data visualization. This visualization, when combined with the Denoising Diffusion Probabilistic Model (DDPM), effectively enhances sample data and mitigates noise within the original dataset. Additionally, a transformer-based model tailored for time series visualization and air target intent recognition is developed. Comprehensive experimental results, encompassing comparative, ablation, and denoising validations, reveal that the proposed method achieves a notable 98.86% accuracy in air target intent recognition while demonstrating exceptional robustness and generalization capabilities. This approach represents a promising avenue for advancing air target intent recognition.
Aberration-corrected annular dark-field scanning transmission electron microscopy (ADF-STEM) is a powerful tool for structural and chemical analysis of materials. Conventional analyses of ADF-STEM images rely on human labeling, making them labor-intensive and prone to subjective error. Here, we introduce a deep-learning-based workflow combining a pix2pix network for image denoising with either a mathematical algorithm, local intensity threshold segmentation (LITS), or another deep learning network, UNet, for chemical identification. After denoising, the processed images exhibit a five-fold improvement in signal-to-noise ratio and a 20% increase in the accuracy of atomic localization. We then take atomic-resolution images of Y–Ce dual-atom catalysts (DACs) and Fe-doped ReSe₂ nanosheets as examples to validate the performance. Pix2pix is applied to identify atomic sites in Y–Ce DACs with a location recall of 0.88 and a location precision of 0.99. LITS is used to further differentiate Y and Ce sites by the intensity of the atomic sites. Furthermore, the pix2pix and UNet workflow, which offers better automation, is applied to the identification of Fe-doped ReSe₂ nanosheets. Three types of atomic sites (Re, the substitution of Fe for Re, and the adatom of Fe on Re) are distinguished, with an identification recall of more than 0.90 and a precision higher than 0.93. These results suggest that this strategy facilitates high-quality and automated chemical identification in atomic-resolution images.
The increasingly complex and interconnected train control information network is vulnerable to a variety of malicious traffic attacks, and existing malicious traffic detection methods, which mainly rely on machine learning, suffer from poor robustness, weak generalization, and a limited ability to learn common features. Therefore, this paper proposes a malicious traffic identification method based on stacked sparse denoising autoencoders combined with a regularized extreme learning machine optimized by particle swarm optimization. First, a simulation environment of the Chinese train control system-3 was constructed for data acquisition. The Pearson coefficient and other methods are then used for pre-processing, a stacked sparse denoising autoencoder is used to achieve nonlinear dimensionality reduction of the features, and finally a regularized extreme learning machine optimized by particle swarm optimization performs the classification. Experimental data show that the proposed method has good training performance, with an average accuracy of 97.57% and a false negative rate of 2.43%, which is better than the alternative methods. In addition, ablation experiments were performed to evaluate the contribution of each component, and the results showed that the combination of methods was superior to the individual methods. To further evaluate the generalization ability of the model in different scenarios, publicly available datasets of industrial control system networks were used. The results show that the model has robust detection capability against various types of network attacks.
Seismic data denoising is a critical process applied at various stages of the seismic processing workflow, as our ability to mitigate noise in seismic data affects the quality of subsequent analyses. However, finding an optimal balance between preserving seismic signals and effectively reducing seismic noise presents a substantial challenge. In this study, we introduce a multi-stage deep learning model, trained in a self-supervised manner, designed specifically to suppress seismic noise while minimizing signal leakage. The model operates in a patch-based fashion, extracting overlapping patches from the noisy data and converting them into 1D vectors for input. It consists of two structurally identical sub-networks with different configurations. Inspired by the transformer architecture, each sub-network features an embedded block comprising two fully connected layers, which are used for feature extraction from the input patches. After reshaping, a multi-head attention module enhances the model's focus on significant features by assigning them higher attention weights. The key difference between the two sub-networks lies in the number of neurons in their fully connected layers. The first sub-network serves as a strong denoiser with a small number of neurons, effectively attenuating seismic noise; in contrast, the second sub-network functions as a signal-add-back model, using a larger number of neurons to retrieve some of the signal that was not preserved in the output of the first sub-network. The proposed model produces two outputs, one for each sub-network, and both sub-networks are optimized simultaneously using the noisy data as the label for both outputs. Evaluations conducted on both synthetic and field data demonstrate the model's effectiveness in suppressing seismic noise with minimal signal leakage, outperforming several benchmark methods.
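The patch-to-vector preprocessing described above is straightforward to sketch; the patch size and stride below are assumed values rather than the study's settings.

    import numpy as np

    def extract_patches(data, patch=32, stride=16):
        # Overlapping 2-D patches flattened to 1-D vectors, as the abstract describes.
        H, W = data.shape
        out = []
        for i in range(0, H - patch + 1, stride):
            for j in range(0, W - patch + 1, stride):
                out.append(data[i:i + patch, j:j + patch].reshape(-1))
        return np.stack(out)

    vectors = extract_patches(np.random.randn(256, 256))   # shape: (num_patches, 1024)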
Accurately identifying building distribution from remote sensing images with complex background information is challenging. The emergence of diffusion models has prompted the innovative idea of employing the reverse denoising process to distill building distribution from these complex backgrounds. Building on this concept, we propose a novel framework, the building extraction diffusion model (BEDiff), which meticulously refines the extraction of building footprints from remote sensing images in a stepwise fashion. Our approach begins with the design of booster guidance, a mechanism that extracts structural and semantic features from remote sensing images to serve as priors, thereby providing targeted guidance for the diffusion process. Additionally, we introduce a cross-feature fusion module (CFM) that bridges the semantic gap between different types of features, facilitating the integration of the attributes extracted by booster guidance into the diffusion process more effectively. The proposed BEDiff marks the first application of diffusion models to the task of building extraction. Empirical evidence from extensive experiments on the Beijing building dataset demonstrates the superior performance of BEDiff, affirming its effectiveness and potential for enhancing the accuracy of building extraction in complex urban landscapes.
A charge induction monitoring system for loaded coal is developed to effectively forecast the dynamic disasters caused by coal failure. Specifically, a digital finite impulse response (FIR) filter is designed to denoise and filter the signal, and the time-frequency domain evolution of induced charge signals is analyzed during coal failure experiments. The quantitative relationships between the induced electric charge and stress-strain energy, and ultimately between the induced electric charge and coal deformation/failure, are revealed. The electric charge sensor exhibits a high signal collection frequency and high sensitivity, and the FIR low-pass filter constructed in MATLAB effectively denoises and filters the induced charge signals. The main frequency range of the white noise is 50-500 Hz, while the main frequency of the charge signal induced by coal deformation and failure is concentrated in the range of 0-50 Hz. The optimal distances for monitoring cubic and cylindrical raw coal samples with this sensor are 9 mm and 11 mm, respectively. Notably, strain energy is released faster when it can dissipate more readily, and the induced charge pulses become denser when more intense signals produce large fluctuations. Based on these experimental results, a new method is proposed to identify coal deformation and failure from changes in the induced electric charge: the precursor to the moment of coal failure can be identified by monitoring the amplitude of the induced charge, the dynamic trend of its fluctuation, and the cumulative number of induced charge pulses during coal deformation. This study provides a new means of monitoring the early warning signs of dynamic coal mine disasters.
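The digital FIR low-pass filtering step can be sketched with standard signal processing tools; the abstract's frequency split (signal below 50 Hz, white noise at 50-500 Hz) motivates the cutoff, while the sampling rate and tap count below are assumed values. The original filter was built in MATLAB; this sketch uses SciPy for a comparable design.

    import numpy as np
    from scipy.signal import firwin, filtfilt

    def lowpass_charge_signal(signal, fs=1000.0, cutoff_hz=50.0, numtaps=201):
        # FIR low-pass design: keep the 0-50 Hz induced-charge band and
        # suppress the 50-500 Hz white noise; applied with zero-phase filtering.
        taps = firwin(numtaps, cutoff_hz, fs=fs)
        return filtfilt(taps, [1.0], signal)

    # Toy usage: a 10 Hz component buried in broadband noise at fs = 1000 Hz.
    t = np.arange(0, 2.0, 1.0 / 1000.0)
    noisy = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
    filtered = lowpass_charge_signal(noisy)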
Low-field nuclear magnetic resonance (NMR) has broad application prospects in the exploration and development of unconventional oil and gas reservoirs. However, NMR instruments tend to acquire echo signals with a relatively low signal-to-noise ratio (SNR), resulting in poor accuracy of T2 spectrum inversion. It is therefore crucial to preprocess low-SNR data with denoising methods before inversion. In this paper, a hybrid NMR data denoising method combining empirical mode decomposition and singular value decomposition (EMD-SVD) is proposed. First, the echo data are decomposed with the EMD method into low- and high-frequency intrinsic mode function (IMF) components as well as a residual. Next, the SVD method is employed to denoise the high-frequency IMF components. Finally, the low-frequency IMF components, the denoised high-frequency IMF components, and the residual are summed to form the denoised signal. To validate the effectiveness and feasibility of the EMD-SVD method, numerical simulations, experimental data, and NMR log data processing were conducted. The results indicate that NMR spectra inverted after EMD-SVD denoising exhibit higher quality than those obtained with the EMD method or the SVD method alone.
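The SVD stage applied to the high-frequency IMFs is commonly implemented as a Hankel-matrix rank truncation, sketched below in NumPy; the retained rank is an assumed choice, and the EMD decomposition itself (e.g., via the PyEMD package) is not shown.

    import numpy as np

    def svd_denoise(x, rank=3):
        # Hankel-matrix SVD truncation of a 1-D signal: keep the leading singular
        # components and average the anti-diagonals back into a signal.
        n = len(x)
        L = n // 2
        H = np.array([x[i:i + L] for i in range(n - L + 1)])   # Hankel matrix
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        s[rank:] = 0.0                                         # truncate small singular values
        Hd = (U * s) @ Vt
        out = np.zeros(n)
        cnt = np.zeros(n)
        for i in range(Hd.shape[0]):
            out[i:i + L] += Hd[i]
            cnt[i:i + L] += 1
        return out / cnt

    # Toy usage: a decaying echo train corrupted by noise.
    t = np.linspace(0, 1, 400)
    echoes = np.exp(-5 * t) + 0.05 * np.random.randn(t.size)
    denoised = svd_denoise(echoes, rank=2)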
基金supported by the National Natural Science Foundation of China(Grant No.12175183)。
文摘Gamma-ray imaging systems are powerful tools in radiographic diagnosis.However,the recorded images suffer from degradations such as noise,blurring,and downsampling,consequently failing to meet high-precision diagnostic requirements.In this paper,we propose a novel single-image super-resolution algorithm to enhance the spatial resolution of gamma-ray imaging systems.A mathematical model of the gamma-ray imaging system is established based on maximum a posteriori estimation.Within the plug-and-play framework,the half-quadratic splitting method is employed to decouple the data fidelit term and the regularization term.An image denoiser using convolutional neural networks is adopted as an implicit image prior,referred to as a deep denoiser prior,eliminating the need to explicitly design a regularization term.Furthermore,the impact of the image boundary condition on reconstruction results is considered,and a method for estimating image boundaries is introduced.The results show that the proposed algorithm can effectively addresses boundary artifacts.By increasing the pixel number of the reconstructed images,the proposed algorithm is capable of recovering more details.Notably,in both simulation and real experiments,the proposed algorithm is demonstrated to achieve subpixel resolution,surpassing the Nyquist sampling limit determined by the camera pixel size.
基金supported by the National Natural Science Foundation of China(No.61271014)the Specialized Research Fund for the Doctoral Program of Higher Education(No.20124301110003)the Graduated Students Innovation Fund of Hunan Province(No.CX2012B238)
文摘A new first-order optimality condition for the basis pursuit denoise (BPDN) problem is derived. This condition provides a new approach to choose the penalty param- eters adaptively for a fixed point iteration algorithm. Meanwhile, the result is extended to matrix completion which is a new field on the heel of the compressed sensing. The numerical experiments of sparse vector recovery and low-rank matrix completion show validity of the theoretic results.
文摘Existing deep learning-based point cloud denoising methods are generally trained in a supervised manner that requires clean data as ground-truth labels.However,in practice,it is not always feasible to obtain clean point clouds.In this paper,we introduce a novel unsupervised point cloud denoising method that eliminates the need to use clean point clouds as groundtruth labels during training.We demonstrate that it is feasible for neural networks to only take noisy point clouds as input,and learn to approximate and restore their clean versions.In particular,we generate two noise levels for the original point clouds,requiring the second noise level to be twice the amount of the first noise level.With this,we can deduce the relationship between the displacement information that recovers the clean surfaces across the two levels of noise,and thus learn the displacement of each noisy point in order to recover the corresponding clean point.Comprehensive experiments demonstrate that our method achieves outstanding denoising results across various datasets with synthetic and real-world noise,obtaining better performance than previous unsupervised methods and competitive performance to current supervised methods.
基金the Research Grant of Kwangwoon University in 2024.
文摘Myocardial perfusion imaging(MPI),which uses single-photon emission computed tomography(SPECT),is a well-known estimating tool for medical diagnosis,employing the classification of images to show situations in coronary artery disease(CAD).The automatic classification of SPECT images for different techniques has achieved near-optimal accuracy when using convolutional neural networks(CNNs).This paper uses a SPECT classification framework with three steps:1)Image denoising,2)Attenuation correction,and 3)Image classification.Image denoising is done by a U-Net architecture that ensures effective image denoising.Attenuation correction is implemented by a convolution neural network model that can remove the attenuation that affects the feature extraction process of classification.Finally,a novel multi-scale diluted convolution(MSDC)network is proposed.It merges the features extracted in different scales and makes the model learn the features more efficiently.Three scales of filters with size 3×3 are used to extract features.All three steps are compared with state-of-the-art methods.The proposed denoising architecture ensures a high-quality image with the highest peak signal-to-noise ratio(PSNR)value of 39.7.The proposed classification method is compared with the five different CNN models,and the proposed method ensures better classification with an accuracy of 96%,precision of 87%,sensitivity of 87%,specificity of 89%,and F1-score of 87%.To demonstrate the importance of preprocessing,the classification model was analyzed without denoising and attenuation correction.
文摘In the field of image processing,the analysis of Synthetic Aperture Radar(SAR)images is crucial due to its broad range of applications.However,SAR images are often affected by coherent speckle noise,which significantly degrades image quality.Traditional denoising methods,typically based on filter techniques,often face challenges related to inefficiency and limited adaptability.To address these limitations,this study proposes a novel SAR image denoising algorithm based on an enhanced residual network architecture,with the objective of enhancing the utility of SAR imagery in complex electromagnetic environments.The proposed algorithm integrates residual network modules,which directly process the noisy input images to generate denoised outputs.This approach not only reduces computational complexity but also mitigates the difficulties associated with model training.By combining the Transformer module with the residual block,the algorithm enhances the network's ability to extract global features,offering superior feature extraction capabilities compared to CNN-based residual modules.Additionally,the algorithm employs the adaptive activation function Meta-ACON,which dynamically adjusts the activation patterns of neurons,thereby improving the network's feature extraction efficiency.The effectiveness of the proposed denoising method is empirically validated using real SAR images from the RSOD dataset.The proposed algorithm exhibits remarkable performance in terms of EPI,SSIM,and ENL,while achieving a substantial enhancement in PSNR when compared to traditional and deep learning-based algorithms.The PSNR performance is enhanced by over twofold.Moreover,the evaluation of the MSTAR SAR dataset substantiates the algorithm's robustness and applicability in SAR denoising tasks,with a PSNR of 25.2021 being attained.These findings underscore the efficacy of the proposed algorithm in mitigating speckle noise while preserving critical features in SAR imagery,thereby enhancing its quality and usability in practical scenarios.
基金supported by the National Natural Science(No.U19A2063)the Jilin Provincial Development Program of Science and Technology (No.20230201080GX)the Jilin Province Education Department Scientific Research Project (No.JJKH20230851KJ)。
文摘The visual noise of each light intensity area is different when the image is drawn by Monte Carlo method.However,the existing denoising algorithms have limited denoising performance under complex lighting conditions and are easy to lose detailed information.So we propose a rendered image denoising method with filtering guided by lighting information.First,we design an image segmentation algorithm based on lighting information to segment the image into different illumination areas.Then,we establish the parameter prediction model guided by lighting information for filtering(PGLF)to predict the filtering parameters of different illumination areas.For different illumination areas,we use these filtering parameters to construct area filters,and the filters are guided by the lighting information to perform sub-area filtering.Finally,the filtering results are fused with auxiliary features to output denoised images for improving the overall denoising effect of the image.Under the physically based rendering tool(PBRT)scene and Tungsten dataset,the experimental results show that compared with other guided filtering denoising methods,our method improves the peak signal-to-noise ratio(PSNR)metrics by 4.2164 dB on average and the structural similarity index(SSIM)metrics by 7.8%on average.This shows that our method can better reduce the noise in complex lighting scenesand improvethe imagequality.
文摘The growing complexity of cyber threats requires innovative machine learning techniques,and image-based malware classification opens up new possibilities.Meanwhile,existing research has largely overlooked the impact of noise and obfuscation techniques commonly employed by malware authors to evade detection,and there is a critical gap in using noise simulation as a means of replicating real-world malware obfuscation techniques and adopting denoising framework to counteract these challenges.This study introduces an image denoising technique based on a U-Net combined with a GAN framework to address noise interference and obfuscation challenges in image-based malware analysis.The proposed methodology addresses existing classification limitations by introducing noise addition,which simulates obfuscated malware,and denoising strategies to restore robust image representations.To evaluate the approach,we used multiple CNN-based classifiers to assess noise resistance across architectures and datasets,measuring significant performance variation.Our denoising technique demonstrates remarkable performance improvements across two multi-class public datasets,MALIMG and BIG-15.For example,the MALIMG classification accuracy improved from 23.73%to 88.84%with denoising applied after Gaussian noise injection,demonstrating robustness.This approach contributes to improving malware detection by offering a robust framework for noise-resilient classification in noisy conditions.
文摘To enable proper diagnosis of a patient,medical images must demonstrate no presence of noise and artifacts.The major hurdle lies in acquiring these images in such a manner that extraneous variables,causing distortions in the form of noise and artifacts,are kept to a bare minimum.The unexpected change realized during the acquisition process specifically attacks the integrity of the image’s quality,while indirectly attacking the effectiveness of the diagnostic process.It is thus crucial that this is attended to with maximum efficiency at the level of pertinent expertise.The solution to these challenges presents a complex dilemma at the acquisition stage,where image processing techniques must be adopted.The necessity of this mandatory image pre-processing step underpins the implementation of traditional state-of-the-art methods to create functional and robust denoising or recovery devices.This article hereby provides an extensive systematic review of the above techniques,with the purpose of presenting a systematic evaluation of their effect on medical images under three different distributions of noise,i.e.,Gaussian,Poisson,and Rician.A thorough analysis of these methods is conducted using eight evaluation parameters to highlight the unique features of each method.The covered denoising methods are essential in actual clinical scenarios where the preservation of anatomical details is crucial for accurate and safe diagnosis,such as tumor detection in MRI and vascular imaging in CT.
基金supported by the National Natural Science Foundation of China(62101575)the Research Project of NUDT(ZK22-57)the Self-directed Project of State Key Laboratory of High Performance Computing(202101-16).
文摘Automatically recognizing radar emitters from com-plex electromagnetic environments is important but non-trivial.Moreover,the changing electromagnetic environment results in inconsistent signal distribution in the real world,which makes the existing approaches perform poorly for recognition tasks in different scenes.In this paper,we propose a domain generaliza-tion framework is proposed to improve the adaptability of radar emitter signal recognition in changing environments.Specifically,we propose an end-to-end denoising based domain-invariant radar emitter recognition network(DDIRNet)consisting of a denoising model and a domain invariant representation learning model(IRLM),which mutually benefit from each other.For the signal denoising model,a loss function is proposed to match the feature of the radar signals and guarantee the effectiveness of the model.For the domain invariant representation learning model,contrastive learning is introduced to learn the cross-domain feature by aligning the source and unseen domain distri-bution.Moreover,we design a data augmentation method that improves the diversity of signal data for training.Extensive experiments on classification have shown that DDIRNet achieves up to 6.4%improvement compared with the state-of-the-art radar emitter recognition methods.The proposed method pro-vides a promising direction to solve the radar emitter signal recognition problem.
基金Supported by National Natural Science Foundation of China(Grant Nos.52375414,52075100)Shanghai Science and Technology Committee Innovation Grant of China(Grant No.23ZR1404200).
文摘In modern industrial design trends featuring with integration,miniaturization,and versatility,there is a growing demand on the utilization of microstructural array devices.The measurement of such microstructural array components often encounters challenges due to the reduced scale and complex structures,either by contact or noncontact optical approaches.Among these microstructural arrays,there are still no optical measurement methods for micro corner-cube reflector arrays.To solve this problem,this study introduces a method for effectively eliminating coherent noise and achieving surface profile reconstruction in interference measurements of microstructural arrays.The proposed denoising method allows the calibration and inverse solving of system errors in the frequency domain by employing standard components with known surface types.This enables the effective compensation of the complex amplitude of non-sample coherent light within the interferometer optical path.The proposed surface reconstruction method enables the profile calculation within the situation that there is complex multi-reflection during the propagation of rays in microstructural arrays.Based on the measurement results,two novel metrics are defined to estimate diffraction errors at array junctions and comprehensive errors across multiple array elements,offering insights into other types of microstructure devices.This research not only addresses challenges of the coherent noise and multi-reflection,but also makes a breakthrough for quantitively optical interference measurement of microstructural array devices.
基金supported by the National Natural Science Foundation of China(No.62134004).
文摘To enhance the denoising performance of event-based sensors,we introduce a clustering-based temporal deep neural network denoising method(CBTDNN).Firstly,to cluster the sensor output data and obtain the respective cluster centers,a combination of density-based spatial clustering of applications with noise(DBSCAN)and Kmeans++is utilized.Subsequently,long short-term memory(LSTM)is employed to fit and yield optimized cluster centers with temporal information.Lastly,based on the new cluster centers and denoising ratio,a radius threshold is set,and noise points beyond this threshold are removed.The comprehensive denoising metrics F1_score of CBTDNN have achieved 0.8931,0.7735,and 0.9215 on the traffic sequences dataset,pedestrian detection dataset,and turntable dataset,respectively.And these metrics demonstrate improvements of 49.90%,33.07%,19.31%,and 22.97%compared to four contrastive algorithms,namely nearest neighbor(NNb),nearest neighbor with polarity(NNp),Autoencoder,and multilayer perceptron denoising filter(MLPF).These results demonstrate that the proposed method enhances the denoising performance of event-based sensors.
基金National Natural Science Foundation of China,Grant/Award Number:62173098,62104047Guangdong Provincial Key Laboratory of Cyber-Physical System,Grant/Award Number:2020B1212060069。
文摘Terahertz imaging technology has great potential applications in areas,such as remote sensing,navigation,security checks,and so on.However,terahertz images usually have the problems of heavy noises and low resolution.Previous terahertz image denoising methods are mainly based on traditional image processing methods,which have limited denoising effects on the terahertz noise.Existing deep learning-based image denoising methods are mostly used in natural images and easily cause a large amount of detail loss when denoising terahertz images.Here,a residual-learning-based multiscale hybridconvolution residual network(MHRNet)is proposed for terahertz image denoising,which can remove noises while preserving detail features in terahertz images.Specifically,a multiscale hybrid-convolution residual block(MHRB)is designed to extract rich detail features and local prediction residual noise from terahertz images.Specifically,MHRB is a residual structure composed of a multiscale dilated convolution block,a bottleneck layer,and a multiscale convolution block.MHRNet uses the MHRB and global residual learning to achieve terahertz image denoising.Ablation studies are performed to validate the effectiveness of MHRB.A series of experiments are conducted on the public terahertz image datasets.The experimental results demonstrate that MHRNet has an excellent denoising effect on synthetic and real noisy terahertz images.Compared with existing methods,MHRNet achieves comprehensive competitive results.
基金supported by the National Natural Science Foundation of China(Nos.42530801,42425208)the Natural Science Foundation of Hubei Province(China)(No.2023AFA001)+1 种基金the MOST Special Fund from State Key Laboratory of Geological Processes and Mineral Resources,China University of Geosciences(No.MSFGPMR2025-401)the China Scholarship Council(No.202306410181)。
文摘Geochemical survey data are essential across Earth Science disciplines but are often affected by noise,which can obscure important geological signals and compromise subsequent prediction and interpretation.Quantifying prediction uncertainty is hence crucial for robust geoscientific decision-making.This study proposes a novel deep learning framework,the Spatially Constrained Variational Autoencoder(SC-VAE),for denoising geochemical survey data with integrated uncertainty quantification.The SC-VAE incorporates spatial regularization,which enforces spatial coherence by modeling inter-sample relationships directly within the latent space.The performance of the SC-VAE was systematically evaluated against a standard Variational Autoencoder(VAE)using geochemical data from the gold polymetallic district in the northwestern part of Sichuan Province,China.Both models were optimized using Bayesian optimization,with objective functions specifically designed to maintain essential geostatistical characteristics.Evaluation metrics include variogram analysis,quantitative measures of spatial interpolation accuracy,visual assessment of denoised maps,and statistical analysis of data distributions,as well as decomposition of uncertainties.Results show that the SC-VAE achieves superior noise suppression and better preservation of spatial structure compared to the standard VAE,as demonstrated by a significant reduction in the variogram nugget effect and an increased partial sill.The SC-VAE produces denoised maps with clearer anomaly delineation and more regularized data distributions,effectively mitigating outliers and reducing kurtosis.Additionally,it delivers improved interpolation accuracy and spatially explicit uncertainty estimates,facilitating more reliable and interpretable assessments of prediction confidence.The SC-VAE framework thus provides a robust,geostatistically informed solution for enhancing the quality and interpretability of geochemical data,with broad applicability in mineral exploration,environmental geochemistry,and other Earth Science domains.
基金co-supported by the National Natural Science Foundation of China(Nos.61806219,61876189 and 61703426)the Young Talent Fund of University Association for Science and Technology in Shaanxi,China(Nos.20190108 and 20220106)the Innvation Talent Supporting Project of Shaanxi,China(No.2020KJXX-065)。
文摘Air target intent recognition holds significant importance in aiding commanders to assess battlefield situations and secure a competitive edge in decision-making.Progress in this domain has been hindered by challenges posed by imbalanced battlefield data and the limited robustness of traditional recognition models.Inspired by the success of diffusion models in addressing visual domain sample imbalances,this paper introduces a new approach that utilizes the Markov Transfer Field(MTF)method for time series data visualization.This visualization,when combined with the Denoising Diffusion Probabilistic Model(DDPM),effectively enhances sample data and mitigates noise within the original dataset.Additionally,a transformer-based model tailored for time series visualization and air target intent recognition is developed.Comprehensive experimental results,encompassing comparative,ablation,and denoising validations,reveal that the proposed method achieves a notable 98.86%accuracy in air target intent recognition while demonstrating exceptional robustness and generalization capabilities.This approach represents a promising avenue for advancing air target intent recognition.
基金supported by the National Key Research and Development Program of China(2022YFA1505700)National Natural Science Foundation of China(22475214 and 22205232)+2 种基金Talent Plan of Shanghai Branch,Chinese Academy of Sciences(CASSHB-QNPD-2023-020)Natural Science Foundation of Fujian Province(2023J06044)the Self-deployment Project Research Program of Haixi Institutes,Chinese Academy of Sciences(CXZX-2022-JQ06 and CXZX-2022-GH03)。
文摘Aberration-corrected annular dark-field scanning transmission electron microscopy(ADF-STEM)is a powerful tool for structural and chemical analysis of materials.Conventional analyses of ADF-STEM images rely on human labeling,making them labor-intensive and prone to subjective error.Here,we introduce a deep-learning-based workflow combining a pix2pix network for image denoising and either a mathematical algorithm local intensity threshold segmentation(LITS)or another deep learning network UNet for chemical identification.After denoising,the processed images exhibit a five-fold improvement in signal-to-noise ratio and a 20%increase in accuracy of atomic localization.Then,we take atomic-resolution images of Y–Ce dual-atom catalysts(DACs)and Fe-doped ReSe_(2) nanosheets as examples to validate the performance.Pix2pix is applied to identify atomic sites in Y–Ce DACs with a location recall of 0.88 and a location precision of 0.99.LITS is used to further differentiate Y and Ce sites by the intensity of atomic sites.Furthermore,pix2pix and UNet workflow with better automaticity is applied to identification of Fe-doped ReSe_(2) nanosheets.Three types of atomic sites(Re,the substitution of Fe for Re,and the adatom of Fe on Re)are distinguished with the identification recall of more than 0.90 and the precision of higher than 0.93.These results suggest that this strategy facilitates high-quality and automated chemical identification of atomic-resolution images.
Abstract: The increasingly complex and interconnected train control information network is vulnerable to a variety of malicious traffic attacks, and existing detection methods, which rely mainly on traditional machine learning, suffer from poor robustness, weak generalization, and an inability to learn common features. This paper therefore proposes a malicious traffic identification method based on stacked sparse denoising autoencoders combined with a regularized extreme learning machine optimized by particle swarm optimization. First, a simulation environment of the Chinese Train Control System-3 (CTCS-3) was constructed for data acquisition. The Pearson correlation coefficient and other methods are then used for pre-processing, a stacked sparse denoising autoencoder achieves nonlinear dimensionality reduction of the features, and finally a regularized extreme learning machine optimized by particle swarm optimization performs the classification. Experimental data show that the proposed method has good training performance, with an average accuracy of 97.57% and a false negative rate of 2.43%, outperforming the alternative methods. In addition, ablation experiments were performed to evaluate the contribution of each component, and the results showed that the combined method was superior to the individual methods. To further evaluate the generalization ability of the model in different scenarios, publicly available industrial control system network datasets were used. The results show that the model has robust detection capability against various types of network attacks.
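For readers unfamiliar with the classifier stage, the following sketch implements a basic regularized extreme learning machine in NumPy: hidden weights are drawn at random and only the output weights are solved in closed form with ridge regularization. The PSO search over the hidden size and the regularization strength C is omitted, and both values are illustrative assumptions.

```python
# Hedged sketch of a regularized extreme learning machine (RELM) of the kind
# the abstract couples with the stacked sparse denoising autoencoder.
import numpy as np

class RELM:
    def __init__(self, n_hidden=200, C=10.0, seed=0):
        self.n_hidden, self.C, self.rng = n_hidden, C, np.random.default_rng(seed)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                          # one-hot targets
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)                  # random hidden features
        # ridge-regularized least squares for the output weights
        self.beta = np.linalg.solve(H.T @ H + np.eye(self.n_hidden) / self.C, H.T @ T)
        return self

    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# toy usage on random low-dimensional features (e.g. autoencoder codes)
X = np.random.randn(300, 20)
y = (X[:, 0] > 0).astype(int)
print((RELM().fit(X, y).predict(X) == y).mean())
```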
Funding: Supported by the King Abdullah University of Science and Technology (KAUST).
Abstract: Seismic data denoising is a critical process usually applied at various stages of the seismic processing workflow, as our ability to mitigate noise in seismic data affects the quality of our subsequent analyses. However, finding an optimal balance between preserving seismic signals and effectively reducing seismic noise presents a substantial challenge. In this study, we introduce a multi-stage deep learning model, trained in a self-supervised manner, designed specifically to suppress seismic noise while minimizing signal leakage. The model operates as a patch-based approach, extracting overlapping patches from the noisy data and converting them into 1D vectors for input. It consists of two sub-networks with identical architectures but different configurations. Inspired by the transformer architecture, each sub-network features an embedded block comprising two fully connected layers, which are used for feature extraction from the input patches. After reshaping, a multi-head attention module enhances the model's focus on significant features by assigning them higher attention weights. The key difference between the two sub-networks lies in the number of neurons within their fully connected layers. The first sub-network serves as a strong denoiser with a small number of neurons, effectively attenuating seismic noise; in contrast, the second sub-network functions as a signal-add-back model, using a larger number of neurons to retrieve some of the signal that was not preserved in the output of the first sub-network. The proposed model produces two outputs, one per sub-network, and both sub-networks are optimized simultaneously using the noisy data as the label for both outputs. Evaluations conducted on both synthetic and field data demonstrate the model's effectiveness in suppressing seismic noise with minimal signal leakage, outperforming some benchmark methods.
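The patch-based input format can be illustrated with the short NumPy sketch below, which cuts overlapping 2-D windows from a noisy section and flattens each into a 1-D vector; the patch size and stride are assumptions, not the values used in the study.

```python
# Hedged sketch of the patch-based input pipeline described above: overlapping
# 2-D patches are extracted from the noisy seismic section and flattened into
# 1-D vectors before being fed to the two sub-networks.
import numpy as np

def extract_patches(section, patch=16, stride=8):
    patches, origins = [], []
    for i in range(0, section.shape[0] - patch + 1, stride):
        for j in range(0, section.shape[1] - patch + 1, stride):
            patches.append(section[i:i + patch, j:j + patch].ravel())  # 1-D vector
            origins.append((i, j))                                     # for re-assembly
    return np.stack(patches), origins

# toy usage: a 128 x 128 noisy "section"
noisy = np.random.randn(128, 128)
vectors, origins = extract_patches(noisy)
print(vectors.shape)   # (225, 256): 15 x 15 overlapping patches of 16 x 16 samples
```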
Funding: Supported by the National Natural Science Foundation of China (Nos. 61906168, 62202429, and 62272267), the Zhejiang Provincial Natural Science Foundation of China (No. LY23F020023), and the Construction of the Hubei Provincial Key Laboratory for Intelligent Visual Monitoring of Hydropower Projects (No. 2022SDSJ01).
Abstract: Accurately identifying building distribution from remote sensing images with complex background information is challenging. The emergence of diffusion models has prompted the innovative idea of employing the reverse denoising process to distill building distribution from these complex backgrounds. Building on this concept, we propose a novel framework, the building extraction diffusion model (BEDiff), which refines the extraction of building footprints from remote sensing images in a stepwise fashion. Our approach begins with the design of booster guidance, a mechanism that extracts structural and semantic features from remote sensing images to serve as priors, thereby providing targeted guidance for the diffusion process. Additionally, we introduce a cross-feature fusion module (CFM) that bridges the semantic gap between different types of features, facilitating the more effective integration of the attributes extracted by booster guidance into the diffusion process. The proposed BEDiff marks the first application of diffusion models to the task of building extraction. Empirical evidence from extensive experiments on the Beijing building dataset demonstrates the superior performance of BEDiff, affirming its effectiveness and potential for enhancing the accuracy of building extraction in complex urban landscapes.
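The abstract does not detail the CFM architecture, so the following PyTorch sketch should be read only as one plausible realization of cross-feature fusion: guidance and diffusion feature maps are concatenated, mixed with a 1x1 convolution, and injected back through a channel-attention gate; the channel counts and the gating design are assumptions.

```python
# Hedged sketch of a cross-feature fusion block in the spirit of the CFM
# described above; not the published BEDiff design.
import torch
import torch.nn as nn

class CrossFeatureFusion(nn.Module):
    def __init__(self, ch_diffusion=64, ch_guidance=64):
        super().__init__()
        self.mix = nn.Conv2d(ch_diffusion + ch_guidance, ch_diffusion, kernel_size=1)
        self.gate = nn.Sequential(                        # channel attention on the fused map
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch_diffusion, ch_diffusion, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, diffusion_feat, guidance_feat):
        fused = self.mix(torch.cat([diffusion_feat, guidance_feat], dim=1))
        return diffusion_feat + self.gate(fused) * fused  # residual, gated injection

# toy usage on 32 x 32 feature maps
f_d, f_g = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
print(CrossFeatureFusion()(f_d, f_g).shape)   # torch.Size([2, 64, 32, 32])
```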
Funding: Supported by the National Key Research and Development Project of the National Natural Science Foundation (Grant No. 2022YFC3004605) and the National Natural Science Foundation of China Youth Science Fund (Grant No. 52104087).
Abstract: A coal-loaded charge induction monitoring system is developed to forecast the dynamic disasters caused by coal failure. Specifically, a digital finite impulse response (FIR) filter is designed to denoise and filter the signal, and the time-frequency domain evolution of the induced charge signals during coal failure experiments is analyzed. The quantitative relationships between the induced electric charge and the stress-strain energy, and ultimately between the induced electric charge and coal deformation/failure, are revealed. The electric charge sensor exhibits a high signal collection frequency and high sensitivity, and the FIR low-pass filter constructed in MATLAB effectively denoises and filters the induced charge signals. The main frequency range of the white noise is 50-500 Hz, whereas the main frequency of the charge signal induced by coal deformation and failure is concentrated in the range of 0-50 Hz. The optimal distances for monitoring cubic and cylindrical raw coal samples with this sensor are 9 mm and 11 mm, respectively. Notably, strain energy is released faster when it can dissipate more readily, and the induced charge pulses become denser when more intense signals produce large fluctuations. Based on these experimental results, a new method is proposed to identify coal deformation and failure from changes in the induced electric charge: the precursor to the moment of coal failure can be identified by monitoring the amplitude of the induced charge, the dynamic trend of its fluctuation, and the cumulative number of induced charge pulses during coal deformation. This study provides a new means of monitoring the early warning signs of dynamic coal mine disasters.
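A minimal Python analogue of the MATLAB FIR low-pass filtering step is sketched below, using the stated 0-50 Hz signal band as the cutoff; the sampling rate, tap count, and synthetic test signal are illustrative assumptions.

```python
# Hedged sketch of an FIR low-pass filtering step for induced-charge signals,
# written with scipy rather than MATLAB. The 50 Hz cutoff follows the stated
# band split (0-50 Hz signal vs. 50-500 Hz noise); other values are assumptions.
import numpy as np
from scipy import signal

fs = 2000.0                                               # assumed sampling rate, Hz
taps = signal.firwin(numtaps=257, cutoff=50.0, fs=fs)     # linear-phase FIR low-pass
t = np.arange(0, 2.0, 1.0 / fs)
charge = np.sin(2 * np.pi * 10 * t)                       # 10 Hz "induced charge" component
noise = 0.5 * np.sin(2 * np.pi * 200 * t) + 0.1 * np.random.randn(t.size)
denoised = signal.filtfilt(taps, [1.0], charge + noise)   # zero-phase filtering
print(np.corrcoef(denoised, charge)[0, 1])                # close to 1 after filtering
```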
Funding: Supported by the National Natural Science Foundation of China (Grant No. 42304118), the Young Elite Scientist Sponsorship Program by BAST (Grant No. BYESS2023027), and the Science Foundation of China University of Petroleum, Beijing (Grant No. 2462022QNXZ001).
Abstract: Low-field nuclear magnetic resonance (NMR) has broad application prospects in the exploration and development of unconventional oil and gas reservoirs. However, NMR instruments tend to acquire echo signals with a relatively low signal-to-noise ratio (SNR), resulting in poor accuracy of T2 spectrum inversion. It is therefore crucial to preprocess low-SNR data with denoising methods before inversion. In this paper, a hybrid NMR data denoising method combining empirical mode decomposition and singular value decomposition (EMD-SVD) is proposed. First, the echo data are decomposed with the EMD method into low- and high-frequency intrinsic mode function (IMF) components as well as a residual. Next, the SVD method is employed to denoise the high-frequency IMF components. Finally, the low-frequency IMF components, the denoised high-frequency IMF components, and the residual are summed to form the denoised signal. To validate the effectiveness and feasibility of the EMD-SVD method, numerical simulations, experimental data, and NMR log data processing were conducted. The results indicate that the NMR spectra inverted after EMD-SVD denoising exhibit higher quality than those obtained with the EMD method or the SVD method alone.
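The EMD-SVD pipeline can be sketched as follows, assuming the third-party PyEMD package for the decomposition and a rank-truncated SVD of a Hankel (trajectory) matrix for denoising the high-frequency IMFs; the embedding length, retained rank, number of IMFs treated as high-frequency, and the toy echo train are all assumptions for illustration.

```python
# Hedged sketch of the EMD-SVD idea: decompose the echo train with EMD, denoise
# only the high-frequency IMFs via a rank-truncated SVD of their trajectory
# matrices, then recombine. Requires the third-party PyEMD package (assumption).
import numpy as np
from PyEMD import EMD

def svd_denoise(x, embed=64, rank=3):
    n = x.size - embed + 1
    H = np.stack([x[i:i + embed] for i in range(n)])      # Hankel (trajectory) matrix
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_low = (U[:, :rank] * s[:rank]) @ Vt[:rank]          # rank-truncated reconstruction
    out, counts = np.zeros(x.size), np.zeros(x.size)
    for i in range(n):                                    # average the anti-diagonals back
        out[i:i + embed] += H_low[i]
        counts[i:i + embed] += 1
    return out / counts

def emd_svd_denoise(echo, n_hf=2):
    imfs = EMD().emd(echo)                                # IMFs, highest frequency first
    denoised_hf = [svd_denoise(imf) for imf in imfs[:n_hf]]
    return np.sum(denoised_hf, axis=0) + imfs[n_hf:].sum(axis=0)

# toy usage: multi-exponential "echo train" with additive noise
t = np.linspace(0, 1, 1024)
clean = 0.6 * np.exp(-t / 0.05) + 0.4 * np.exp(-t / 0.5)
noisy = clean + 0.05 * np.random.randn(t.size)
print(np.std(noisy - clean), np.std(emd_svd_denoise(noisy) - clean))
```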