The noise present in depth images obtained with RGB-D sensors is a combination of hardware limitations and environmental factors; the limited capabilities of the sensors in turn degrade downstream computer vision results. Common image denoising techniques, being based on spatial- and frequency-domain filtering, tend to remove significant image detail along with the noise. The framework presented in this paper is a novel denoising model that couples Boruta-driven feature selection with a Long Short-Term Memory Autoencoder (LSTMAE). The Boruta algorithm identifies the most informative depth features, preserving spatial structure while reducing redundancy. An LSTMAE then processes these selected features, modeling depth-pixel sequences to generate robust, noise-resistant representations: the encoder compresses the input into a latent space, and the decoder recovers the clean image from it. Experiments on a benchmark dataset show that the proposed technique attains a PSNR of 45 dB and an SSIM of 0.90, 10 dB higher than conventional convolutional autoencoders and 15 dB higher than wavelet-based models. Moreover, the feature-selection step reduces input dimensionality by 40%, yielding a 37.5% reduction in training time and a real-time inference rate of 200 FPS. The Boruta-LSTMAE framework therefore offers an efficient and scalable system for depth-image denoising, with strong potential in close-range 3D applications such as robotic manipulation and gesture-based interfaces.
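As a rough illustration of the shadow-feature idea underlying Boruta (not the authors' implementation, which pairs Boruta's random-forest importances with the LSTMAE): a feature is kept only if its importance beats the best importance achieved by permuted "shadow" copies of the features. Here importance is approximated by absolute correlation with a toy target; all data and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 6 depth "features"; only the first two actually carry signal.
n = 500
X = rng.normal(size=(n, 6))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=n)

def importance(X, y):
    # Proxy importance: |correlation| of each column with the target.
    # (Real Boruta uses random-forest importances instead.)
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    return np.abs(Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))

# Shadow features: column-wise permutations that destroy any real association.
shadows = rng.permuted(X, axis=0)
threshold = importance(shadows, y).max()   # best importance achievable by chance

selected = np.where(importance(X, y) > threshold)[0]
print(selected)   # indices of features beating the shadow threshold
```

Real Boruta repeats this comparison over many random-forest runs with a statistical test; a single pass like this only conveys the mechanism.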
Cemented tailings backfill (CTB) with initial defects is prone to destabilization damage under the various unfavorable factors encountered during mining. To investigate the influence of such defects on the stability of underground mining engineering, this paper simulates different degrees of initial defects inside CTB by adding different contents of air-entraining agent (AEA), examines the acoustic emission RA/AF eigenvalues of CTB with different AEA contents under uniaxial compression, and adopts several denoising algorithms (moving-average smoothing, median filtering, and outlier detection) to improve data accuracy. The variance and autocorrelation coefficients of the RA/AF parameters were analyzed in conjunction with critical slowing down (CSD) theory. The results show that the acoustic emission RA/AF values can characterize the progressive damage evolution of CTB. Denoising the AE signals reduces the effects of extraneous noise and anomalous spikes. Changes in the variance curves provide clear precursor information, while abrupt changes in the autocorrelation coefficient can serve as an auxiliary warning signal. A sharp rise in the variance and autocorrelation curves during the compression-tightening stage, driven by the initial defects, can lead to false warnings. As the initial defects of the CTB increase, the instability precursor time and the instability time are both prolonged, the peak stress decreases, and the interval between precursor and instability damage narrows. The results provide a new method for real-time monitoring and early warning of CTB instability damage.
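The CSD-style precursor indicators are straightforward to sketch: denoise the parameter series, then track variance and lag-1 autocorrelation in a sliding window. The toy series, window lengths, and median prefilter below are assumptions for illustration, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy AE parameter series: quiet phase, then growing fluctuations before "failure".
n = 400
signal = rng.normal(scale=0.1, size=n)
signal[300:] += rng.normal(scale=1.0, size=100)   # approaching instability

def median_filter(x, w=5):
    # Simple denoising pass (the paper also uses moving-average smoothing
    # and outlier rejection); w is an assumed window length.
    pad = w // 2
    xp = np.pad(x, pad, mode="edge")
    return np.array([np.median(xp[i:i + w]) for i in range(len(x))])

def sliding(x, w, fn):
    return np.array([fn(x[i:i + w]) for i in range(len(x) - w + 1)])

def lag1_autocorr(x):
    x = x - x.mean()
    denom = (x * x).sum()
    return (x[:-1] * x[1:]).sum() / denom if denom > 0 else 0.0

clean = median_filter(signal)
w = 50
var_curve = sliding(clean, w, np.var)          # variance precursor curve
ac_curve = sliding(clean, w, lag1_autocorr)    # auxiliary autocorrelation curve

# The variance in the final windows jumps well above the quiet phase.
print(var_curve[:5].mean(), var_curve[-5:].mean())
```

In practice a warning would be raised when the variance curve departs from its baseline by some threshold, with the autocorrelation curve used for corroboration, as the abstract describes.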
The internal flow fields within a three-dimensional inward-turning combined inlet are extremely complex, especially during engine mode transition, when changes in the tunnels may significantly alter the flow fields. To develop an efficient flow-field reconstruction model for this problem, we present an Improved Conditional Denoising Diffusion Generative Adversarial Network (ICDDGAN), which integrates Conditional Denoising Diffusion Probabilistic Models (CDDPMs) with StyleGAN and introduces a reconstruction discrimination mechanism and a dynamic loss-weight learning strategy. We establish a Mach-number flow-field dataset by numerical simulation at various backpressures for the mode transition from turbine mode to ejector-ramjet mode at Mach 2.5. Given only sparse parameter information, the proposed ICDDGAN model can rapidly generate high-quality Mach-number flow fields without requiring a large number of training samples. The results show that ICDDGAN is superior to CDDGAN in training convergence and stability. Moreover, interpolation and extrapolation tests across backpressure conditions show that ICDDGAN can accurately and quickly reconstruct Mach-number fields at various tunnel slice shapes, with a Structural Similarity Index Measure (SSIM) above 0.96 and a Mean-Square Error (MSE) of 0.035% relative to the actual flow fields, reducing time costs by 7-8 orders of magnitude compared with Computational Fluid Dynamics (CFD) calculations. This provides an efficient means for rapid computation of complex flow fields.
When an image is rendered by the Monte Carlo method, the visual noise differs across light-intensity areas. Existing denoising algorithms, however, perform poorly under complex lighting conditions and readily lose detail. We therefore propose a rendered-image denoising method in which filtering is guided by lighting information. First, we design an image segmentation algorithm based on lighting information to divide the image into different illumination areas. Then we establish a parameter prediction model guided by lighting information for filtering (PGLF) to predict the filtering parameters of each illumination area. Using these parameters we construct area filters, which perform sub-area filtering under the guidance of the lighting information. Finally, the filtering results are fused with auxiliary features to output the denoised image, improving the overall denoising effect. On physically based rendering tool (PBRT) scenes and the Tungsten dataset, experimental results show that, compared with other guided-filtering denoising methods, our method improves the peak signal-to-noise ratio (PSNR) by 4.2164 dB on average and the structural similarity index (SSIM) by 7.8% on average. This shows that our method can better reduce noise in complex lighting scenes and improve image quality.
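The gains above are reported in PSNR and SSIM; for reference, PSNR can be computed as follows (a standard definition with an assumed 8-bit peak of 255; the data here are purely illustrative).

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    # Peak signal-to-noise ratio in dB for images with the given peak value.
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(scale=10.0, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 2))   # roughly 28 dB for sigma = 10
```

A 4 dB PSNR improvement, as reported, corresponds to cutting the mean squared error by a factor of about 2.5.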
Myocardial perfusion imaging (MPI) with single-photon emission computed tomography (SPECT) is a well-established tool for medical diagnosis, in which image classification reveals conditions of coronary artery disease (CAD). Automatic classification of SPECT images has achieved near-optimal accuracy with convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Denoising is performed by a U-Net architecture that ensures effective noise removal. Attenuation correction is implemented by a convolutional neural network model that removes the attenuation interfering with the feature extraction needed for classification. Finally, a novel multi-scale dilated convolution (MSDC) network is proposed; it merges features extracted at different scales, letting the model learn features more efficiently. Three scales of 3×3 filters are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture ensures high-quality images, with the highest peak signal-to-noise ratio (PSNR) value of 39.7 dB. The proposed classification method is compared with five different CNN models and achieves better classification, with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
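The multi-scale dilated-convolution idea can be sketched in one dimension: the same small kernel is applied at several dilation rates and the responses are merged. The paper's MSDC uses learned 2-D 3×3 filters inside a network; this numpy version is only an illustrative analogue.

```python
import numpy as np

def dilate_kernel(k, rate):
    # Insert (rate - 1) zeros between taps: a dilated kernel enlarges the
    # receptive field without adding parameters.
    out = np.zeros((len(k) - 1) * rate + 1)
    out[::rate] = k
    return out

def multi_scale_features(x, k, rates=(1, 2, 4)):
    # Convolve the same 3-tap kernel at several dilation rates and collect
    # the responses, mimicking the multi-scale branches of an MSDC block.
    return [np.convolve(x, dilate_kernel(k, r), mode="same") for r in rates]

x = np.arange(16, dtype=float)
k = np.array([1.0, 1.0, 1.0])
feats = multi_scale_features(x, k)
print([f.shape for f in feats])   # each branch keeps the input length
```

In the network the per-scale responses would be concatenated along the channel axis and fused by subsequent layers.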
Accurately identifying building distribution from remote sensing images with complex background information is challenging. The emergence of diffusion models has prompted the innovative idea of employing the reverse denoising process to distill building distribution from these complex backgrounds. Building on this concept, we propose a novel framework, the building extraction diffusion model (BEDiff), which refines the extraction of building footprints from remote sensing images in a stepwise fashion. Our approach begins with the design of booster guidance, a mechanism that extracts structural and semantic features from remote sensing images to serve as priors, thereby providing targeted guidance for the diffusion process. Additionally, we introduce a cross-feature fusion module (CFM) that bridges the semantic gap between different types of features, integrating the attributes extracted by booster guidance into the diffusion process more effectively. BEDiff marks the first application of diffusion models to the task of building extraction. Extensive experiments on the Beijing building dataset demonstrate the superior performance of BEDiff, affirming its effectiveness and its potential for improving the accuracy of building extraction in complex urban landscapes.
In the field of image processing, the analysis of Synthetic Aperture Radar (SAR) images is crucial due to its broad range of applications. However, SAR images are often affected by coherent speckle noise, which significantly degrades image quality. Traditional denoising methods, typically based on filtering techniques, often suffer from inefficiency and limited adaptability. To address these limitations, this study proposes a novel SAR image denoising algorithm based on an enhanced residual network architecture, aiming to enhance the utility of SAR imagery in complex electromagnetic environments. The algorithm integrates residual network modules that process the noisy input images directly to generate denoised outputs, which both reduces computational complexity and eases model training. By combining a Transformer module with the residual block, the algorithm strengthens the network's ability to extract global features, offering superior feature extraction compared with CNN-based residual modules. Additionally, the algorithm employs the adaptive activation function Meta-ACON, which dynamically adjusts the activation patterns of neurons and thereby improves feature extraction efficiency. The effectiveness of the proposed method is empirically validated on real SAR images from the RSOD dataset. The algorithm performs remarkably well in terms of EPI, SSIM, and ENL, while achieving a substantial improvement in PSNR over traditional and deep learning-based algorithms: the PSNR is more than doubled. Moreover, evaluation on the MSTAR SAR dataset confirms the algorithm's robustness and applicability to SAR denoising tasks, attaining a PSNR of 25.2021 dB. These findings underscore the efficacy of the proposed algorithm in mitigating speckle noise while preserving critical features in SAR imagery, thereby enhancing its quality and usability in practical scenarios.
To address peak overlap caused by complex matrices in terahertz (THz) spectral signals of agricultural products, and the dynamic, nonlinear interference induced by environmental and system noise, this study explores the feasibility of adaptive-signal-decomposition-based denoising for improving THz spectral quality. THz time-domain spectroscopy (THz-TDS) combined with an attenuated total reflection (ATR) accessory was used to collect THz absorbance spectra from 48 peanut samples. Taking a quantitative prediction model of peanut moisture content based on THz-ATR as an example, wavelet transform (WT), empirical mode decomposition (EMD), local mean decomposition (LMD), and its improved variants, segmented local mean decomposition (SLMD) and piecewise mirror extension local mean decomposition (PME-LMD), were employed for spectral denoising. The applicability of the different denoising methods was evaluated with a support vector regression (SVR) model. Experimental results show that the peanut moisture prediction model built after PME-LMD denoising achieved the best performance, with a root mean square error (RMSE), coefficient of determination (R²), and mean absolute percentage error (MAPE) of 0.010, 0.912, and 0.040, respectively. Compared with traditional methods, PME-LMD significantly improved spectral quality and model prediction performance. The proposed PME-LMD denoising strategy effectively suppresses non-uniform noise interference in THz spectral signals, providing an efficient and accurate preprocessing method for THz spectral analysis of agricultural products, and offering theoretical support and technical guidance for applying THz technology to agricultural product quality detection.
The growing complexity of cyber threats requires innovative machine learning techniques, and image-based malware classification opens up new possibilities. Existing research, however, has largely overlooked the impact of noise and of the obfuscation techniques commonly employed by malware authors to evade detection; there is a critical gap in using noise simulation to replicate real-world malware obfuscation and in adopting a denoising framework to counteract it. This study introduces an image denoising technique based on a U-Net combined with a GAN framework to address noise interference and obfuscation in image-based malware analysis. The proposed methodology addresses existing classification limitations by introducing noise addition, which simulates obfuscated malware, and denoising strategies that restore robust image representations. To evaluate the approach, we used multiple CNN-based classifiers to assess noise resistance across architectures and datasets, measuring significant performance variation. Our denoising technique demonstrates remarkable performance improvements across two multi-class public datasets, MALIMG and BIG-15. For example, MALIMG classification accuracy improved from 23.73% to 88.84% when denoising was applied after Gaussian noise injection, demonstrating robustness. This approach contributes to improving malware detection by offering a robust framework for noise-resilient classification in noisy conditions.
In wireless communication scenarios, especially under low-bandwidth or noisy transmission conditions, image data is often degraded by interference during acquisition or transmission. To address this, we propose Wasserstein frequency generative adversarial networks (WF-GAN), a frequency-aware denoising model based on the wavelet transform. By decomposing images into four frequency sub-bands, the model separates low-frequency contour information from high-frequency texture details. Contour guidance is applied to preserve structural integrity, while adversarial training enhances texture fidelity in the high-frequency bands. A joint loss function, incorporating frequency-domain loss and perceptual loss, is designed to reduce detail degradation during denoising. Experiments on public image datasets, with Gaussian noise applied to simulate wireless communication interference, demonstrate that WF-GAN consistently outperforms both traditional and deep learning-based denoising methods in visual quality and quantitative metrics. These results highlight its potential for robust image processing in wireless communication systems.
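The four-sub-band wavelet split that WF-GAN builds on can be illustrated with a single-level 2-D Haar transform (a stand-in wavelet; the abstract does not name one): LL carries the low-frequency contours, LH/HL/HH carry high-frequency detail, and the transform is exactly invertible.

```python
import numpy as np

def haar_subbands(img):
    # One level of a 2-D Haar transform: LL (contours) plus LH/HL/HH
    # (detail sub-bands), i.e., the four bands a frequency-aware model
    # processes separately.
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a - b + c - d) / 4.0
    hl = (a + b - c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def haar_reconstruct(ll, lh, hl, hh):
    # Invert the transform; with no modification of the sub-bands this
    # reproduces the input exactly.
    h, w = ll.shape
    img = np.empty((2 * h, 2 * w))
    img[0::2, 0::2] = ll + lh + hl + hh
    img[0::2, 1::2] = ll - lh + hl - hh
    img[1::2, 0::2] = ll + lh - hl - hh
    img[1::2, 1::2] = ll - lh - hl + hh
    return img

rng = np.random.default_rng(3)
img = rng.random((8, 8))
print(np.allclose(haar_reconstruct(*haar_subbands(img)), img))   # True
```

A denoiser in this style would shrink or regenerate the three detail sub-bands while keeping LL as structural guidance, then invert the transform.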
Terahertz imaging technology has great potential in areas such as remote sensing, navigation, and security inspection. However, terahertz images usually suffer from heavy noise and low resolution. Previous terahertz image denoising methods are mainly based on traditional image processing, with limited effect on terahertz noise, while existing deep learning-based denoising methods are mostly designed for natural images and tend to lose substantial detail when applied to terahertz images. Here, a residual-learning-based multiscale hybrid-convolution residual network (MHRNet) is proposed for terahertz image denoising, which removes noise while preserving detail features. Specifically, a multiscale hybrid-convolution residual block (MHRB) is designed to extract rich detail features and locally predict residual noise; MHRB is a residual structure composed of a multiscale dilated convolution block, a bottleneck layer, and a multiscale convolution block. MHRNet combines the MHRB with global residual learning to achieve terahertz image denoising. Ablation studies validate the effectiveness of MHRB, and a series of experiments on public terahertz image datasets demonstrates that MHRNet has an excellent denoising effect on both synthetic and real noisy terahertz images. Compared with existing methods, MHRNet achieves comprehensively competitive results.
To enable proper diagnosis, medical images must be free of noise and artifacts. The major hurdle lies in acquiring images so that the extraneous variables causing such distortions are kept to a bare minimum. Unexpected changes during acquisition degrade image quality directly and diagnostic effectiveness indirectly, so they must be handled with maximum efficiency and pertinent expertise. Because these challenges cannot be fully resolved at the acquisition stage, image processing techniques must be adopted: this mandatory pre-processing step underpins the use of traditional and state-of-the-art methods to build functional, robust denoising or recovery tools. This article provides an extensive systematic review of such techniques, presenting a systematic evaluation of their effect on medical images under three different noise distributions, i.e., Gaussian, Poisson, and Rician. A thorough analysis using eight evaluation parameters highlights the unique features of each method. The covered denoising methods are essential in actual clinical scenarios where the preservation of anatomical detail is crucial for accurate and safe diagnosis, such as tumor detection in MRI and vascular imaging in CT.
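The three noise distributions considered by the review can be simulated directly; the flat test image and noise levels below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
img = np.full((64, 64), 100.0)   # toy "anatomy": a flat region of intensity 100

# Gaussian: additive, signal-independent (e.g., thermal/electronic noise).
gaussian = img + rng.normal(scale=5.0, size=img.shape)

# Poisson: signal-dependent counting noise (photon-limited modalities).
poisson = rng.poisson(img).astype(float)

# Rician: magnitude of a complex signal with Gaussian noise in both
# channels, the standard model for magnitude MR images.
sigma = 5.0
rician = np.hypot(img + rng.normal(scale=sigma, size=img.shape),
                  rng.normal(scale=sigma, size=img.shape))

print(gaussian.std(), poisson.std(), rician.std())
```

Note how the Poisson spread tracks the signal level (standard deviation near the square root of the intensity), while Rician noise introduces a small positive bias in dark-to-moderate regions; this is why a denoiser tuned for one distribution can fail on another.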
Imaging sonar devices generate sonar images by receiving echoes from objects, and these images are often accompanied by severe speckle noise, resulting in distortion and information loss. Common optical denoising methods do not remove speckle noise from sonar images well and may even reduce their visual quality. To address this issue, a sonar image denoising method based on fuzzy clustering and the undecimated dual-tree complex wavelet transform is proposed. The transform provides perfect translation invariance and improved directional selectivity during image decomposition, leading to a richer representation of noise and edges in the high-frequency coefficients. Fuzzy clustering separates noise from useful information according to the amplitude characteristics of speckle noise, preserving the latter and achieving noise removal. Additionally, the low-frequency coefficients are smoothed with bilateral filtering to improve the visual quality of the image. To verify the effectiveness of the algorithm, multiple groups of ablation experiments were conducted, and sonar images with speckle of different variances were evaluated and compared against existing transform-domain despeckling methods. The experimental results show that the proposed method effectively improves image quality and, even under severe noise, still achieves good denoising performance.
In modern industrial design trends featuring integration, miniaturization, and versatility, there is growing demand for microstructural array devices. Measuring such microstructural array components, whether by contact or by noncontact optical approaches, is often challenging because of their reduced scale and complex structures; for micro corner-cube reflector arrays in particular, no optical measurement method has been available. To solve this problem, this study introduces a method for effectively eliminating coherent noise and reconstructing surface profiles in interference measurements of microstructural arrays. The proposed denoising method calibrates and inversely solves system errors in the frequency domain by employing standard components with known surface types, enabling effective compensation of the complex amplitude of non-sample coherent light within the interferometer optical path. The proposed surface reconstruction method enables profile calculation even when rays undergo complex multi-reflection while propagating in microstructural arrays. Based on the measurement results, two novel metrics are defined to estimate diffraction errors at array junctions and comprehensive errors across multiple array elements, offering insights for other types of microstructure devices. This research not only addresses the challenges of coherent noise and multi-reflection, but also makes a breakthrough in quantitative optical interference measurement of microstructural array devices.
Fluorescence microscopy is indispensable in life science research, yet denoising remains challenging due to varied biological samples and imaging conditions. We introduce a wavelet-enhanced transformer based on DnCNN that fuses wavelet preprocessing with a dual-branch transformer-convolutional neural network (CNN) architecture. Wavelet decomposition separates high- and low-frequency components for targeted noise reduction; the CNN branch restores local details, whereas the transformer branch captures global context; and an adaptive loss balances quantitative fidelity with perceptual quality. On the fluorescence microscopy denoising benchmark, our method surpasses leading CNN- and transformer-based approaches, improving peak signal-to-noise ratio by 2.34% and 0.88% and structural similarity index measure by 0.53% and 1.07%, respectively. This framework offers enhanced generalization and practical gains for fluorescence image denoising.
Noise in remote sensing data hinders proper land use and land cover (LULC) classification. This paper evaluates machine learning (ML) denoising methods that adapt the spectral techniques of Raman spectroscopy to optimize remote sensing spectra for LULC mapping. A basic Raman spectroscopy model demonstrates that Savitzky-Golay (SG) filtering, wavelet denoising, and a basic 1D convolutional autoencoder have different effects on synthetic spectral features relevant to LULC classification. Savitzky-Golay filtering yielded the best results, increasing classification accuracy from 0.71 (noisy) to 1.00 (denoised), achieving perfect classification with zero errors and improving the Precision-Recall curve, with the Area Under the Precision-Recall Curve (AUC-PR) rising from 0.84 to 1.00. The study also examined wavelet denoising in conjunction with the 1D convolutional autoencoder, assessing noise reduction capability through visual evaluation. A traditional Raman-based spectral analysis complemented with machine learning denoising thus offers promising avenues for feature identification in remote sensing images, improving the quality of LULC-related mapping outcomes.
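Savitzky-Golay filtering, the best performer here, fits a low-order polynomial over a sliding window and takes the fitted value at the window center. A minimal pure-numpy construction (the window length, polynomial order, and toy spectrum are assumptions, not the study's settings):

```python
import numpy as np

def savgol_coeffs(window, order):
    # Least-squares fit of a degree-`order` polynomial over a centered
    # window; the smoothed value is the fit evaluated at the center.
    half = window // 2
    x = np.arange(-half, half + 1, dtype=float)
    A = np.vander(x, order + 1, increasing=True)   # columns 1, x, x^2, ...
    # Row 0 of the pseudoinverse gives the constant term, i.e. the fit at x = 0.
    return np.linalg.pinv(A)[0]

def savgol_smooth(y, window=11, order=3):
    c = savgol_coeffs(window, order)
    pad = window // 2
    yp = np.pad(y, pad, mode="edge")
    return np.convolve(yp, c[::-1], mode="valid")

x = np.linspace(0, 1, 200)
spectrum = np.exp(-((x - 0.5) / 0.05) ** 2)        # toy absorbance peak
rng = np.random.default_rng(5)
noisy = spectrum + rng.normal(scale=0.02, size=x.size)
smoothed = savgol_smooth(noisy)
print(np.abs(smoothed - spectrum).mean() < np.abs(noisy - spectrum).mean())
```

In practice `scipy.signal.savgol_filter` provides the same operation; the point of the polynomial fit is that narrow peaks survive smoothing far better than under a plain moving average.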
This work presents an innovative mesh denoising approach that combines feature recovery and denoising in an alternating manner. It proposes a feature-driven variational model and introduces an iterative scheme that alternates between feature recovery and the denoising process. The main idea is to estimate feature candidates, filter noisy face normals in the smooth (non-feature) domain, and apply erosion and dilation operators to the feature candidates. By imposing connectivity constraints on normal vectors with large amplitude variations, the proposed scheme effectively removes noise and progressively recovers both sharp and small-scale features during the iterative process. Extensive numerical experiments on both simulated and real-scanned data demonstrate significant improvements in noise reduction and feature preservation compared to existing methods.
Low-field nuclear magnetic resonance (NMR) has broad application prospects in the exploration and development of unconventional oil and gas reservoirs. However, NMR instruments tend to acquire echo signals with relatively low signal-to-noise ratio (SNR), resulting in poor accuracy of T2 spectrum inversion, so it is crucial to denoise low-SNR data before inversion. In this paper, a hybrid NMR data denoising method combining empirical mode decomposition and singular value decomposition (EMD-SVD) is proposed. First, the echo data are decomposed by EMD into low- and high-frequency intrinsic mode function (IMF) components and a residual. Next, SVD is employed to denoise the high-frequency IMF components. Finally, the low-frequency IMF components, the denoised high-frequency IMF components, and the residual are summed to form the denoised signal. To validate the effectiveness and feasibility of the EMD-SVD method, numerical simulations and processing of experimental data and NMR log data were conducted. The results indicate that NMR spectra inverted after EMD-SVD denoising exhibit higher quality than those obtained with the EMD or SVD method alone.
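The SVD step of the hybrid method can be sketched on its own: embed a 1-D signal in a Hankel matrix, truncate the SVD to a small rank, and average anti-diagonals back to a signal. The EMD stage is omitted here, and the Hankel construction, rank, and toy echo train are illustrative assumptions rather than the authors' exact choices.

```python
import numpy as np

def hankel(x, rows):
    cols = len(x) - rows + 1
    return np.array([x[i:i + cols] for i in range(rows)])

def svd_denoise(x, rows=50, rank=2):
    # Truncate the SVD of a Hankel matrix built from the signal, then
    # average along anti-diagonals to map back to a 1-D signal.
    H = hankel(x, rows)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    out = np.zeros(len(x))
    cnt = np.zeros(len(x))
    r, c = Hr.shape
    for i in range(r):
        out[i:i + c] += Hr[i]     # entry (i, j) corresponds to sample i + j
        cnt[i:i + c] += 1
    return out / cnt

t = np.linspace(0, 1, 300)
echo = np.exp(-3 * t)             # toy single-exponential echo decay
rng = np.random.default_rng(6)
noisy = echo + rng.normal(scale=0.05, size=t.size)
den = svd_denoise(noisy)
print(np.mean((den - echo) ** 2) < np.mean((noisy - echo) ** 2))
```

The method works because a noise-free exponential decay yields a low-rank Hankel matrix, so truncation discards mostly noise; choosing the rank is the delicate part in real echo data.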
Geochemical survey data are essential across Earth Science disciplines but are often affected by noise, which can obscure important geological signals and compromise subsequent prediction and interpretation. Quantifying prediction uncertainty is hence crucial for robust geoscientific decision-making. This study proposes a novel deep learning framework, the Spatially Constrained Variational Autoencoder (SC-VAE), for denoising geochemical survey data with integrated uncertainty quantification. The SC-VAE incorporates spatial regularization, enforcing spatial coherence by modeling inter-sample relationships directly within the latent space. The performance of the SC-VAE was systematically evaluated against a standard Variational Autoencoder (VAE) using geochemical data from the gold polymetallic district in the northwestern part of Sichuan Province, China. Both models were optimized using Bayesian optimization, with objective functions specifically designed to maintain essential geostatistical characteristics. Evaluation metrics include variogram analysis, quantitative measures of spatial interpolation accuracy, visual assessment of denoised maps, statistical analysis of data distributions, and decomposition of uncertainties. Results show that the SC-VAE achieves superior noise suppression and better preservation of spatial structure than the standard VAE, as demonstrated by a significant reduction in the variogram nugget effect and an increased partial sill. The SC-VAE produces denoised maps with clearer anomaly delineation and more regularized data distributions, effectively mitigating outliers and reducing kurtosis. Additionally, it delivers improved interpolation accuracy and spatially explicit uncertainty estimates, facilitating more reliable and interpretable assessments of prediction confidence. The SC-VAE framework thus provides a robust, geostatistically informed solution for enhancing the quality and interpretability of geochemical data, with broad applicability in mineral exploration, environmental geochemistry, and other Earth Science domains.
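The variogram diagnostics used to evaluate the SC-VAE can be reproduced with a classical (Matheron) empirical variogram; the synthetic field below is only for illustration: added white noise shows up as the nugget effect the study tracks.

```python
import numpy as np

def empirical_variogram(coords, values, bins):
    # Classical (Matheron) estimator: half the mean squared difference of
    # sample pairs, grouped into distance bins.
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)     # each pair once
    d, sq = d[iu], sq[iu]
    idx = np.digitize(d, bins)
    return np.array([0.5 * sq[idx == b].mean() if np.any(idx == b) else np.nan
                     for b in range(1, len(bins))])

rng = np.random.default_rng(7)
coords = rng.random((200, 2))
# Smooth spatial field plus white noise: the noise appears as a nugget.
field = np.sin(4 * coords[:, 0]) + np.cos(4 * coords[:, 1])
noisy = field + rng.normal(scale=0.3, size=200)
bins = np.linspace(0, 1.0, 11)
gamma = empirical_variogram(coords, noisy, bins)
print(gamma[:3])   # short-range values include the ~0.09 nugget variance
```

Successful denoising should lower the short-range values (the nugget) while leaving the distance-dependent structural part, the partial sill, intact, which is exactly the behavior the abstract reports for the SC-VAE.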
To enhance the denoising performance of event-based sensors,we introduce a clustering-based temporal deep neural network denoising method(CBTDNN).Firstly,to cluster the sensor output data and obtain the respective clu...To enhance the denoising performance of event-based sensors,we introduce a clustering-based temporal deep neural network denoising method(CBTDNN).Firstly,to cluster the sensor output data and obtain the respective cluster centers,a combination of density-based spatial clustering of applications with noise(DBSCAN)and Kmeans++is utilized.Subsequently,long short-term memory(LSTM)is employed to fit and yield optimized cluster centers with temporal information.Lastly,based on the new cluster centers and denoising ratio,a radius threshold is set,and noise points beyond this threshold are removed.The comprehensive denoising metrics F1_score of CBTDNN have achieved 0.8931,0.7735,and 0.9215 on the traffic sequences dataset,pedestrian detection dataset,and turntable dataset,respectively.And these metrics demonstrate improvements of 49.90%,33.07%,19.31%,and 22.97%compared to four contrastive algorithms,namely nearest neighbor(NNb),nearest neighbor with polarity(NNp),Autoencoder,and multilayer perceptron denoising filter(MLPF).These results demonstrate that the proposed method enhances the denoising performance of event-based sensors.展开更多
Abstract: The noise present in depth images obtained with RGB-D sensors stems from a combination of hardware limitations and environmental factors, and it degrades downstream computer vision results. Common image denoising techniques based on spatial- and frequency-domain filtering tend to remove significant image detail along with the noise. This paper presents a novel denoising model that combines Boruta-driven feature selection with a Long Short-Term Memory Autoencoder (LSTMAE). The Boruta algorithm identifies the most informative depth features, preserving spatial structural integrity while reducing redundancy. The LSTMAE then models the selected features as depth-pixel sequences to generate robust, noise-resistant representations: the encoder compresses the input into a latent space, and the decoder reconstructs the clean image from it. Experiments on a benchmark dataset show that the proposed technique attains a PSNR of 45 dB and an SSIM of 0.90, 10 dB above conventional convolutional autoencoders and a 15-fold improvement over wavelet-based models. Moreover, the feature-selection step decreases input dimensionality by 40%, yielding a 37.5% reduction in training time and a real-time inference rate of 200 FPS. The Boruta-LSTMAE framework therefore offers an efficient and scalable system for depth-image denoising, with strong potential for close-range 3D applications such as robotic manipulation and gesture-based interfaces.
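The reported gains are framed in PSNR, so it is worth pinning the metric down. The following minimal NumPy sketch (our illustration, not the paper's code) computes PSNR for 8-bit images:

```python
import numpy as np

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal shape."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / mse)

# Two 8-bit "images" differing by a constant offset of 16 -> MSE = 256
a = np.zeros((32, 32), dtype=np.uint8)
b = a + 16
print(round(psnr(a, b), 1))  # → 24.0
```

A 10 dB PSNR gap, as claimed over convolutional autoencoders, corresponds to a tenfold reduction in mean squared error.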
Funding: Projects (52374138, 51764013) supported by the National Natural Science Foundation of China; Project (20204BCJ22005) supported by the Training Plan for Academic and Technical Leaders of Major Disciplines of Jiangxi Province, China; Project (2019M652277) supported by the China Postdoctoral Science Foundation; Project (20192ACBL21014) supported by the Natural Science Youth Foundation Key Projects of Jiangxi Province, China.
Abstract: Cemented tailings backfill (CTB) with initial defects is more prone to destabilization damage under the various unfavorable factors encountered during mining. To investigate the influence of such defects on the stability of underground mining engineering, this paper simulates different degrees of initial defects inside CTB by adding different contents of air-entraining agent (AEA), investigates the acoustic emission (AE) RA/AF eigenvalues of CTB with different AEA contents under uniaxial compression, and adopts several denoising algorithms (e.g., moving average smoothing, median filtering, and outlier detection) to improve data accuracy. The variance and autocorrelation coefficients of the RA/AF parameters were analyzed in conjunction with critical slowing down (CSD) theory. The results show that the AE RA/AF values can characterize the progressive damage evolution of CTB. The denoising algorithms reduce the effects of extraneous noise and anomalous spikes in the AE signals. Changes in the variance curves provide clear precursor information, while abrupt changes in the autocorrelation coefficient can serve as an auxiliary warning signal. A dramatic increase in the variance and autocorrelation curves during the compression-tightening stage, influenced by the initial defects, can lead to false warnings. As the initial defects of the CTB increase, the instability precursor time and instability time are prolonged, the peak stress decreases, and the time difference between the precursor and the instability damage becomes smaller. The results provide a new method for real-time monitoring and early warning of CTB instability damage.
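The CSD precursors used here, rolling variance and lag-1 autocorrelation computed on denoised signals, can be sketched as follows; the synthetic series and window length are illustrative assumptions of ours, not the paper's AE data:

```python
import numpy as np
from scipy.signal import medfilt

def rolling_csd_indicators(x, window=50):
    """Rolling variance and lag-1 autocorrelation, the two CSD precursors."""
    var, ac1 = [], []
    for i in range(window, len(x) + 1):
        w = x[i - window:i]
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

rng = np.random.default_rng(0)
t = np.arange(600)
quiet = 0.05 * rng.standard_normal(300)                           # stable stage
slow = np.sin(t[300:] / 15.0) + 0.05 * rng.standard_normal(300)   # slow, large swings
signal = np.concatenate([quiet, slow])
denoised = medfilt(signal, kernel_size=5)   # median filtering, one of the cited denoisers
var, ac1 = rolling_csd_indicators(denoised)
# Both indicators rise sharply once the slowly varying stage begins.
```

Near a critical transition, fluctuations recover more slowly, which is exactly what growing variance and rising lag-1 autocorrelation register.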
Abstract: The internal flow fields within a three-dimensional inward-turning combined inlet are extremely complex, especially during engine mode transition, when changes in the tunnels may affect the flow fields significantly. To develop an efficient flow-field reconstruction model for this problem, we present an Improved Conditional Denoising Diffusion Generative Adversarial Network (ICDDGAN), which integrates Conditional Denoising Diffusion Probabilistic Models (CDDPMs) with StyleGAN and introduces a reconstruction discrimination mechanism and a dynamic loss-weight learning strategy. We establish a Mach-number flow-field dataset by numerical simulation at various backpressures for the mode transition from turbine mode to ejector ramjet mode at Mach 2.5. Given only sparse parameter information, the proposed ICDDGAN model can rapidly generate high-quality Mach-number flow fields without a large number of training samples. The results show that ICDDGAN is superior to CDDGAN in terms of training convergence and stability. Moreover, interpolation and extrapolation tests across backpressure conditions show that ICDDGAN can accurately and quickly reconstruct Mach-number fields at various tunnel slice shapes, with a Structural Similarity Index Measure (SSIM) above 0.96 and a Mean-Square Error (MSE) of 0.035% relative to the actual flow fields, reducing time costs by 7-8 orders of magnitude compared to Computational Fluid Dynamics (CFD) calculations. This provides an efficient means for rapid computation of complex flow fields.
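ICDDGAN builds on the CDDPM forward process. The abstract gives no equations, but the standard closed-form noising step that conditional DDPMs rely on can be sketched as below; the linear schedule and field size are illustrative assumptions:

```python
import numpy as np

def forward_diffuse(x0, t, betas, eps):
    """Standard DDPM forward step: x_t = sqrt(a_bar_t)*x0 + sqrt(1 - a_bar_t)*eps."""
    alpha_bar = np.cumprod(1.0 - betas)[t]
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * eps, alpha_bar

betas = np.linspace(1e-4, 0.02, 1000)   # common linear schedule (assumption)
rng = np.random.default_rng(1)
x0 = rng.standard_normal((8, 8))        # stand-in for a Mach-number field slice
eps = rng.standard_normal((8, 8))
xt, abar = forward_diffuse(x0, 999, betas, eps)
```

By the final step the cumulative product of (1 - beta) is nearly zero, so x_t is essentially pure noise; the generator learns to reverse this corruption conditioned on the sparse parameters.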
Funding: supported by the National Natural Science Foundation of China (No. U19A2063), the Jilin Provincial Development Program of Science and Technology (No. 20230201080GX), and the Jilin Province Education Department Scientific Research Project (No. JJKH20230851KJ).
Abstract: When an image is rendered by the Monte Carlo method, the visual noise differs across light-intensity areas. Existing denoising algorithms, however, have limited performance under complex lighting conditions and easily lose detailed information. We therefore propose a rendered-image denoising method in which filtering is guided by lighting information. First, we design an image segmentation algorithm based on lighting information to divide the image into different illumination areas. Then, we establish a parameter prediction model guided by lighting information for filtering (PGLF) to predict the filtering parameters of each illumination area. For each area, these parameters are used to construct an area filter, and the filters perform sub-area filtering under the guidance of the lighting information. Finally, the filtering results are fused with auxiliary features to output the denoised image, improving the overall denoising effect. On physically based rendering tool (PBRT) scenes and the Tungsten dataset, experiments show that, compared with other guided-filtering denoising methods, our method improves the peak signal-to-noise ratio (PSNR) by 4.2164 dB on average and the structural similarity index (SSIM) by 7.8% on average. This shows that our method can better reduce noise in complex lighting scenes and improve image quality.
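The core idea, segment by illumination and then filter each area with its own parameters, can be sketched with a simplified stand-in: fixed luminance thresholds and Gaussian smoothing take the place of the learned PGLF predictor, and all parameter values below are our assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def area_filter(img, thresholds=(0.33, 0.66), sigmas=(2.0, 1.0, 0.5)):
    """Smooth dark/mid/bright areas with different strengths, then fuse."""
    areas = np.digitize(img, thresholds)   # 0 = dark, 1 = mid, 2 = bright
    out = np.zeros_like(img)
    for label, sigma in enumerate(sigmas):
        mask = areas == label
        out[mask] = gaussian_filter(img, sigma)[mask]   # per-area filtering
    return out

rng = np.random.default_rng(2)
clean = np.linspace(0.0, 1.0, 64)[None, :].repeat(64, axis=0)  # dark-to-bright ramp
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
den = area_filter(noisy)
```

The stronger smoothing in dark areas mirrors the observation that Monte Carlo noise is more visible at low intensity; the real method predicts these strengths instead of hard-coding them.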
Funding: the Research Grant of Kwangwoon University in 2024.
Abstract: Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is a well-known tool for medical diagnosis, employing image classification to indicate coronary artery disease (CAD). Automatic classification of SPECT images has achieved near-optimal accuracy with convolutional neural networks (CNNs). This paper uses a SPECT classification framework with three steps: 1) image denoising, 2) attenuation correction, and 3) image classification. Image denoising is performed by a U-Net architecture that ensures effective noise removal. Attenuation correction is implemented by a convolutional neural network model that removes the attenuation affecting the feature extraction process of classification. Finally, a novel multi-scale dilated convolution (MSDC) network is proposed; it merges features extracted at different scales and lets the model learn features more efficiently. Three scales of 3×3 filters are used to extract features. All three steps are compared with state-of-the-art methods. The proposed denoising architecture yields high-quality images with the highest peak signal-to-noise ratio (PSNR) value of 39.7. The proposed classification method is compared with five different CNN models and achieves better classification, with an accuracy of 96%, precision of 87%, sensitivity of 87%, specificity of 89%, and F1-score of 87%. To demonstrate the importance of preprocessing, the classification model was also analyzed without denoising and attenuation correction.
Funding: supported by the National Natural Science Foundation of China (Nos. 61906168, 62202429 and 62272267), the Zhejiang Provincial Natural Science Foundation of China (No. LY23F020023), and the Construction of Hubei Provincial Key Laboratory for Intelligent Visual Monitoring of Hydropower Projects (No. 2022SDSJ01).
Abstract: Accurately identifying building distribution in remote sensing images with complex background information is challenging. The emergence of diffusion models has prompted the innovative idea of employing the reverse denoising process to distill building distribution from these complex backgrounds. Building on this concept, we propose a novel framework, the building extraction diffusion model (BEDiff), which refines the extraction of building footprints from remote sensing images in a stepwise fashion. Our approach begins with the design of booster guidance, a mechanism that extracts structural and semantic features from remote sensing images to serve as priors, thereby providing targeted guidance for the diffusion process. Additionally, we introduce a cross-feature fusion module (CFM) that bridges the semantic gap between different types of features, facilitating more effective integration of the attributes extracted by booster guidance into the diffusion process. BEDiff marks the first application of diffusion models to the task of building extraction. Extensive experiments on the Beijing building dataset demonstrate the superior performance of BEDiff, affirming its effectiveness and potential for enhancing the accuracy of building extraction in complex urban landscapes.
Abstract: In the field of image processing, the analysis of Synthetic Aperture Radar (SAR) images is crucial due to its broad range of applications. However, SAR images are often affected by coherent speckle noise, which significantly degrades image quality. Traditional denoising methods, typically based on filtering techniques, often face challenges of inefficiency and limited adaptability. To address these limitations, this study proposes a novel SAR image denoising algorithm based on an enhanced residual network architecture, with the objective of enhancing the utility of SAR imagery in complex electromagnetic environments. The proposed algorithm integrates residual network modules that directly process the noisy input images to generate denoised outputs. This approach not only reduces computational complexity but also mitigates the difficulties associated with model training. By combining a Transformer module with the residual block, the algorithm enhances the network's ability to extract global features, offering superior feature extraction compared to CNN-based residual modules. Additionally, the algorithm employs the adaptive activation function Meta-ACON, which dynamically adjusts the activation patterns of neurons, thereby improving feature extraction efficiency. The effectiveness of the proposed denoising method is empirically validated on real SAR images from the RSOD dataset. The proposed algorithm exhibits remarkable performance in terms of EPI, SSIM, and ENL, while achieving more than a twofold enhancement in PSNR compared to traditional and deep learning-based algorithms. Moreover, evaluation on the MSTAR SAR dataset substantiates the algorithm's robustness and applicability in SAR denoising tasks, with a PSNR of 25.2021 attained. These findings underscore the efficacy of the proposed algorithm in mitigating speckle noise while preserving critical features in SAR imagery, thereby enhancing its quality and usability in practical scenarios.
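As a concrete example of the traditional filter-based baselines such methods are measured against, here is the classic Lee speckle filter; this is our choice of illustrative baseline, since the abstract does not list the exact comparison filters:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7):
    """Classic Lee speckle filter: adaptive blend of each pixel with its local mean."""
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img ** 2, size)
    var = sq_mean - mean ** 2
    noise_var = np.mean(var)                 # crude global noise-level estimate
    weight = var / (var + noise_var + 1e-12)
    return mean + weight * (img - mean)      # flat areas -> mean, edges -> pixel

rng = np.random.default_rng(3)
clean = np.ones((64, 64)); clean[:, 32:] = 4.0   # two homogeneous regions
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)  # mean-1 speckle
filtered = lee_filter(speckled)
```

The adaptivity is limited: the weight depends only on local variance, which is exactly the inflexibility the residual-network approach targets.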
Funding: Supported by the National Key R&D Program of China (2023YFD2101001) and the National Natural Science Foundation of China (32202144, 61807001).
Abstract: To address peak overlap caused by complex matrices in terahertz (THz) spectral signals of agricultural products, as well as the dynamic, nonlinear interference induced by environmental and system noise, this study explores the feasibility of adaptive-signal-decomposition-based denoising methods for improving THz spectral quality. THz time-domain spectroscopy (THz-TDS) combined with an attenuated total reflection (ATR) accessory was used to collect THz absorbance spectra from 48 peanut samples. Taking a quantitative prediction model of peanut moisture content based on THz-ATR as an example, wavelet transform (WT), empirical mode decomposition (EMD), local mean decomposition (LMD), and two improved LMD methods, segmented local mean decomposition (SLMD) and piecewise mirror extension local mean decomposition (PME-LMD), were employed for spectral denoising. The applicability of the different denoising methods was evaluated using a support vector regression (SVR) model. Experimental results show that the peanut moisture content prediction model built after PME-LMD denoising achieved the best performance, with a root mean square error (RMSE), coefficient of determination (R^(2)), and mean absolute percentage error (MAPE) of 0.010, 0.912, and 0.040, respectively. Compared with traditional methods, PME-LMD significantly improved spectral quality and model prediction performance. The PME-LMD denoising strategy effectively suppresses non-uniform noise interference in THz spectral signals, providing an efficient and accurate preprocessing method for THz spectral analysis of agricultural products, and offering theoretical support and technical guidance for applying THz technology to agricultural product quality detection.
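All the compared methods (WT, EMD, LMD and its variants) follow the same decompose-threshold-reconstruct pattern. The sketch below illustrates that pattern with an FFT basis so it stays dependency-light; it is a generic stand-in for the pattern, not PME-LMD itself, and the signal and threshold settings are our assumptions:

```python
import numpy as np

def spectral_denoise(signal, keep_ratio=0.05):
    """Decompose (FFT), soft-threshold weak coefficients, reconstruct."""
    coeffs = np.fft.rfft(signal)
    mags = np.abs(coeffs)
    thresh = np.quantile(mags, 1.0 - keep_ratio)            # keep the strongest 5%
    shrunk = coeffs * np.maximum(1.0 - thresh / (mags + 1e-12), 0.0)
    return np.fft.irfft(shrunk, n=len(signal))

rng = np.random.default_rng(7)
t = np.arange(1024)
clean = np.sin(2 * np.pi * 5 * t / 1024) + 0.5 * np.sin(2 * np.pi * 13 * t / 1024)
noisy = clean + 0.3 * rng.standard_normal(1024)
denoised = spectral_denoise(noisy)
```

Adaptive decompositions such as LMD replace the fixed Fourier basis with data-driven components, which is what lets them track the non-uniform noise the paper targets.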
Abstract: The growing complexity of cyber threats requires innovative machine learning techniques, and image-based malware classification opens up new possibilities. Existing research, however, has largely overlooked the impact of the noise and obfuscation techniques commonly employed by malware authors to evade detection; there is a critical gap in using noise simulation to replicate real-world malware obfuscation and in adopting a denoising framework to counteract these challenges. This study introduces an image denoising technique based on a U-Net combined with a GAN framework to address noise interference and obfuscation in image-based malware analysis. The proposed methodology addresses existing classification limitations by introducing noise addition, which simulates obfuscated malware, and denoising strategies that restore robust image representations. To evaluate the approach, we used multiple CNN-based classifiers to assess noise resistance across architectures and datasets, measuring significant performance variation. Our denoising technique demonstrates remarkable performance improvements across two multi-class public datasets, MALIMG and BIG-15. For example, MALIMG classification accuracy improved from 23.73% to 88.84% when denoising was applied after Gaussian noise injection, demonstrating robustness. This approach contributes to improving malware detection by offering a robust framework for noise-resilient classification in noisy conditions.
Funding: supported in part by the Beijing Natural Science Foundation (No. 4254072).
Abstract: In wireless communication scenarios, especially under low-bandwidth or noisy transmission conditions, image data is often degraded by interference during acquisition or transmission. To address this, we propose Wasserstein frequency generative adversarial networks (WF-GAN), a frequency-aware denoising model based on the wavelet transform. By decomposing images into four frequency sub-bands, the model separates low-frequency contour information from high-frequency texture details. Contour guidance is applied to preserve structural integrity, while adversarial training enhances texture fidelity in the high-frequency bands. A joint loss function, incorporating a frequency-domain loss and a perceptual loss, is designed to reduce detail degradation during denoising. Experiments on public image datasets, with Gaussian noise applied to simulate wireless communication interference, demonstrate that WF-GAN consistently outperforms both traditional and deep learning-based denoising methods in visual quality and quantitative metrics. These results highlight its potential for robust image processing in wireless communication systems.
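The four-sub-band split that WF-GAN starts from can be reproduced with a one-level 2-D Haar transform. A minimal NumPy version (the averaging normalization is our convention) together with its exact inverse:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0      # low-low: contours
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0      # diagonal detail / fine texture
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    a = np.zeros((ll.shape[0], ll.shape[1] * 2)); d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :], out[1::2, :] = a + d, a - d
    return out
```

Because the split is exactly invertible, the model can denoise the high-frequency bands aggressively and still reconstruct the image without losing the LL contour content.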
Funding: National Natural Science Foundation of China, Grant/Award Numbers: 62173098, 62104047; Guangdong Provincial Key Laboratory of Cyber-Physical System, Grant/Award Number: 2020B1212060069.
Abstract: Terahertz imaging technology has great potential in areas such as remote sensing, navigation, and security checks. However, terahertz images usually suffer from heavy noise and low resolution. Previous terahertz image denoising methods are mainly based on traditional image processing and have limited effect on terahertz noise, while existing deep learning-based denoising methods are mostly designed for natural images and tend to cause substantial detail loss when applied to terahertz images. Here, a residual-learning-based multiscale hybrid-convolution residual network (MHRNet) is proposed for terahertz image denoising, which removes noise while preserving detail features. Specifically, a multiscale hybrid-convolution residual block (MHRB) is designed to extract rich detail features and locally predict the residual noise in terahertz images; MHRB is a residual structure composed of a multiscale dilated convolution block, a bottleneck layer, and a multiscale convolution block. MHRNet combines the MHRB with global residual learning to achieve terahertz image denoising. Ablation studies validate the effectiveness of MHRB. A series of experiments on public terahertz image datasets demonstrate that MHRNet has an excellent denoising effect on both synthetic and real noisy terahertz images, achieving comprehensively competitive results compared with existing methods.
Abstract: To enable proper diagnosis, medical images must be free of noise and artifacts. The major hurdle lies in acquiring images such that the extraneous variables causing distortions, in the form of noise and artifacts, are kept to a bare minimum. Unexpected changes during the acquisition process directly attack image quality and indirectly undermine the effectiveness of the diagnostic process, so they must be handled with maximum efficiency by pertinent expertise. Addressing these challenges at the acquisition stage requires image processing techniques: this mandatory pre-processing step underpins the implementation of traditional and state-of-the-art methods for building functional and robust denoising or recovery tools. This article provides an extensive systematic review of such techniques, presenting a systematic evaluation of their effect on medical images under three different noise distributions, i.e., Gaussian, Poisson, and Rician. A thorough analysis using eight evaluation parameters highlights the unique features of each method. The covered denoising methods are essential in actual clinical scenarios where the preservation of anatomical detail is crucial for accurate and safe diagnosis, such as tumor detection in MRI and vascular imaging in CT.
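The three noise distributions the review evaluates can be simulated directly. Minimal NumPy generators follow (parameter values are illustrative), with the Rician model built, as in MRI magnitude imaging, from a complex signal with Gaussian noise in both channels:

```python
import numpy as np

rng = np.random.default_rng(4)

def add_gaussian(img, sigma=10.0):
    """Additive, signal-independent noise (e.g., thermal/electronic)."""
    return img + rng.normal(0.0, sigma, img.shape)

def add_poisson(img):
    """Signal-dependent counting noise (e.g., low-dose CT, photon-limited)."""
    return rng.poisson(np.clip(img, 0, None)).astype(float)

def add_rician(img, sigma=10.0):
    """Magnitude of a complex Gaussian-perturbed signal (MRI magnitude images)."""
    re = img + rng.normal(0.0, sigma, img.shape)
    im = rng.normal(0.0, sigma, img.shape)
    return np.sqrt(re ** 2 + im ** 2)

img = np.full((64, 64), 100.0)  # flat test phantom
```

Note the qualitative differences a denoiser must cope with: Gaussian noise is zero-mean, Poisson noise scales with intensity, and Rician noise is non-negative and biased upward in dark regions.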
Funding: the National Natural Science Foundation of China (No. 62065001), the Yunnan Young and Middle-aged Academic and Technical Leaders Reserve Talent Project (No. 202205AC160001), the Science and Technology Programs of Yunnan Provincial Science and Technology Department (No. 202101BA070001-054), and the Special Basic Cooperative Research Programs of Yunnan Provincial Undergraduate Universities Association (No. 2019FH001(-066)).
Abstract: Imaging sonar devices generate sonar images by receiving echoes from objects; the images are often accompanied by severe speckle noise, resulting in distortion and information loss. Common optical denoising methods do not work well in removing speckle noise from sonar images and may even reduce their visual quality. To address this issue, a sonar image denoising method based on fuzzy clustering and the undecimated dual-tree complex wavelet transform is proposed. This transform provides perfect translation invariance and improved directional selectivity during image decomposition, leading to a richer representation of noise and edges in the high-frequency coefficients. Fuzzy clustering then separates noise from useful information according to the amplitude characteristics of speckle noise, preserving the latter and achieving noise removal. Additionally, the low-frequency coefficients are smoothed using bilateral filtering to improve the visual quality of the image. To verify the effectiveness of the algorithm, multiple groups of ablation experiments were conducted, and speckled sonar images with different noise variances were evaluated and compared against existing transform-domain speckle removal methods. The experimental results show that the proposed method effectively improves image quality and, even under severe noise, still achieves good denoising performance.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52375414, 52075100) and the Shanghai Science and Technology Committee Innovation Grant of China (Grant No. 23ZR1404200).
Abstract: Under modern industrial design trends of integration, miniaturization, and versatility, there is growing demand for microstructural array devices. Measuring such components is often challenging, whether by contact or non-contact optical approaches, because of their reduced scale and complex structures. Among these microstructural arrays, micro corner-cube reflector arrays still lack optical measurement methods. To solve this problem, this study introduces a method for effectively eliminating coherent noise and reconstructing surface profiles in interference measurements of microstructural arrays. The proposed denoising method calibrates and inversely solves system errors in the frequency domain by employing standard components with known surface types, enabling effective compensation of the complex amplitude of non-sample coherent light within the interferometer optical path. The proposed surface reconstruction method enables profile calculation even when rays undergo complex multi-reflection while propagating within microstructural arrays. Based on the measurement results, two novel metrics are defined to estimate diffraction errors at array junctions and comprehensive errors across multiple array elements, offering insights for other types of microstructure devices. This research not only addresses the challenges of coherent noise and multi-reflection, but also makes a breakthrough in quantitative optical interference measurement of microstructural array devices.
Funding: supported by the National Natural Science Foundation of China (Grant No. 62275210), the National Leading Talent Program, the National Young Talent Program, the Key Research and Development Program of Shaanxi (Grant No. 2024SF2-GJHX-25), the Scientific Research Program Funded by the Education Department of Shaanxi Provincial Government (Grant No. 24JS016), the Xidian University Specially Funded Project for Interdisciplinary Exploration (Grant No. TZJHF202523), the Fundamental Research Funds for Central Universities (Grant No. YJSJ25014), the Guangdong Provincial General Colleges and Universities Young Innovative Talents Research Project (Grant No. 2024KQNCX172), the Shenzhen Science and Technology Program (Grant No. GJHZ20210705141805015), and the Key Research Areas Support Science and Technology Project of Shenzhen Institute of Information Technology (Grant No. SZIIT2024KJ056).
Abstract: Fluorescence microscopy is indispensable in life science research, yet denoising remains challenging due to varied biological samples and imaging conditions. We introduce a wavelet-enhanced transformer based on DnCNN that fuses wavelet preprocessing with a dual-branch transformer-convolutional neural network (CNN) architecture. Wavelet decomposition separates high- and low-frequency components for targeted noise reduction; the CNN branch restores local details, whereas the transformer branch captures global context; and an adaptive loss balances quantitative fidelity with perceptual quality. On the fluorescence microscopy denoising benchmark, our method surpasses leading CNN- and transformer-based approaches, improving peak signal-to-noise ratio by 2.34% and 0.88% and structural similarity index measure by 0.53% and 1.07%, respectively. This framework offers enhanced generalization and practical gains for fluorescence image denoising.
Abstract: Noise present in remote sensing data creates obstacles to proper land use and land cover (LULC) classification. This paper evaluates machine learning (ML) denoising methods that adapt Raman spectroscopy's spectral techniques to optimize remote sensing spectra for LULC mapping. A basic Raman spectroscopy model demonstrates that Savitzky-Golay (SG) filtering, wavelet denoising, and a basic 1D convolutional autoencoder have different effects on synthetic spectral features relevant to LULC classification. Savitzky-Golay filtering yielded the most efficient results, increasing classification accuracy from 0.71 (noisy) to 1.00 (denoised), achieving perfect classification with zero errors, and enhancing the Precision-Recall curve, with the Area Under the Precision-Recall Curve (AUC-PR) rising from 0.84 to 1.00. The study also examined wavelet denoising in conjunction with the 1D convolutional autoencoder, assessing noise reduction capability through visual evaluation. A traditional Raman-based spectral analysis approach complemented with machine learning denoising opens promising avenues for feature identification in remote sensing images, thereby improving the quality of LULC mapping outcomes.
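Savitzky-Golay filtering, the best performer here, fits a low-order polynomial in a sliding window; a useful property is that it passes any polynomial up to `polyorder` through unchanged, which is why it preserves smooth spectral band shapes while suppressing noise. A short scipy sketch on a synthetic quadratic band (our toy data, not the paper's spectra):

```python
import numpy as np
from scipy.signal import savgol_filter

x = np.linspace(-1.0, 1.0, 101)
clean = 1.0 + 2.0 * x - 3.0 * x ** 2         # smooth quadratic "band shape"
rng = np.random.default_rng(5)
noisy = clean + 0.05 * rng.standard_normal(x.shape)

# Smooth the noisy spectrum with an 11-point, order-2 SG filter
smooth = savgol_filter(noisy, window_length=11, polyorder=2)

# A polyorder-2 SG filter passes any quadratic through exactly unchanged:
exact = savgol_filter(clean, window_length=11, polyorder=2)
```

The window length trades noise suppression against the narrowest feature the filter can preserve, so it should be chosen shorter than the narrowest spectral band of interest.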
Funding: supported in part by the National Natural Science Foundation of China (62476219, 62206220, 12271140, 12326609), the Young Talent Fund of Association for Science and Technology in Shaanxi, China (20230140), the Chunhui Program of Ministry of Education of China (HZKY20220537), and the Fundamental Funds for the Central Universities (G2023KY0601).
Abstract: This work elaborates an innovative mesh denoising approach that combines feature recovery and denoising in an alternating manner. It proposes a feature-driven variational model and introduces an iterative scheme that alternates between feature recovery and the denoising process. The main idea is to estimate feature candidates, filter noisy face normals in the smooth (non-feature) domain, and apply erosion and dilation operators to the feature candidates. By imposing connectivity constraints on normal vectors with large amplitude variations, the proposed scheme effectively removes noise and progressively recovers both sharp and small-scale features during the iterative process. Extensive numerical experiments on both simulated and real-scanned data validate its effectiveness, demonstrating significant improvements in noise reduction and feature preservation compared to existing methods.
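The erosion/dilation step on feature candidates is, in essence, a morphological opening: erosion removes isolated false candidates, and dilation restores the surviving connected feature region. On a mesh these operators run over face adjacency, but the idea can be sketched on a 2-D candidate mask (an illustrative stand-in, not the paper's mesh operators):

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def clean_candidates(mask, iterations=1):
    """Morphological opening: drop isolated false candidates, keep connected features."""
    eroded = binary_erosion(mask, iterations=iterations)
    return binary_dilation(eroded, iterations=iterations)

mask = np.zeros((16, 16), dtype=bool)
mask[4:12, 4:12] = True      # a genuine, connected feature region
mask[0, 0] = True            # an isolated false positive from noise
cleaned = clean_candidates(mask)
```

This is what the connectivity constraint buys: a lone noisy normal with large variation cannot survive erosion, while a contiguous edge of such normals does.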
Funding: supported by the National Natural Science Foundation of China (grant no. 42304118), the Young Elite Scientist Sponsorship Program by BAST (grant no. BYESS2023027), and the Science Foundation of China University of Petroleum, Beijing (grant no. 2462022QNXZ001).
Abstract: Low-field nuclear magnetic resonance (NMR) has broad application prospects in the exploration and development of unconventional oil and gas reservoirs. However, NMR instruments tend to acquire echo signals with relatively low signal-to-noise ratio (SNR), resulting in poor accuracy of T2 spectrum inversion, so it is crucial to preprocess low-SNR data with denoising methods before inversion. This paper proposes a hybrid NMR data denoising method combining empirical mode decomposition and singular value decomposition (EMD-SVD). First, the echo data are decomposed by EMD into low- and high-frequency intrinsic mode function (IMF) components and a residual. Next, SVD is employed to denoise the high-frequency IMF components. Finally, the low-frequency IMF components, the denoised high-frequency IMF components, and the residual are summed to form the denoised signal. To validate the effectiveness and feasibility of the EMD-SVD method, numerical simulations, experimental data, and NMR log data processing were conducted. The results indicate that NMR spectra inverted after EMD-SVD denoising exhibit higher quality than those obtained with the EMD or SVD method alone.
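A common concrete form of SVD denoising for echo trains is low-rank truncation of a Hankel embedding (Cadzow filtering). The paper applies SVD to the high-frequency IMFs and its exact construction may differ; the sketch below shows the rank-truncation step on a mono-exponential echo decay, whose Hankel matrix is exactly rank one:

```python
import numpy as np

def hankel_svd_denoise(x, rank=1, L=None):
    """Embed signal in a Hankel matrix, truncate its SVD, average anti-diagonals."""
    n = len(x)
    L = L or n // 2
    H = np.lib.stride_tricks.sliding_window_view(x, n - L + 1)[:L]  # row i = x[i:i+n-L+1]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
    out, cnt = np.zeros(n), np.zeros(n)
    for i in range(L):                          # de-embed: average each anti-diagonal
        out[i:i + Hr.shape[1]] += Hr[i]
        cnt[i:i + Hr.shape[1]] += 1
    return out / cnt

rng = np.random.default_rng(9)
clean = np.exp(-np.arange(200) / 60.0)          # mono-exponential echo decay
noisy = clean + 0.05 * rng.standard_normal(200)
denoised = hankel_svd_denoise(noisy, rank=1)
```

Noise spreads its energy across all singular components, so truncating to the signal rank discards most of it while keeping the exponential structure intact.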
Funding: supported by the National Natural Science Foundation of China (Nos. 42530801, 42425208), the Natural Science Foundation of Hubei Province, China (No. 2023AFA001), the MOST Special Fund from the State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences (No. MSFGPMR2025-401), and the China Scholarship Council (No. 202306410181).
Abstract: Geochemical survey data are essential across Earth Science disciplines but are often affected by noise, which can obscure important geological signals and compromise subsequent prediction and interpretation. Quantifying prediction uncertainty is hence crucial for robust geoscientific decision-making. This study proposes a novel deep learning framework, the Spatially Constrained Variational Autoencoder (SC-VAE), for denoising geochemical survey data with integrated uncertainty quantification. The SC-VAE incorporates spatial regularization, which enforces spatial coherence by modeling inter-sample relationships directly within the latent space. The performance of the SC-VAE was systematically evaluated against a standard Variational Autoencoder (VAE) using geochemical data from the gold polymetallic district in the northwestern part of Sichuan Province, China. Both models were optimized using Bayesian optimization, with objective functions specifically designed to maintain essential geostatistical characteristics. Evaluation metrics include variogram analysis, quantitative measures of spatial interpolation accuracy, visual assessment of denoised maps, statistical analysis of data distributions, and decomposition of uncertainties. Results show that the SC-VAE achieves superior noise suppression and better preservation of spatial structure compared to the standard VAE, as demonstrated by a significant reduction in the variogram nugget effect and an increased partial sill. The SC-VAE produces denoised maps with clearer anomaly delineation and more regularized data distributions, effectively mitigating outliers and reducing kurtosis. Additionally, it delivers improved interpolation accuracy and spatially explicit uncertainty estimates, facilitating more reliable and interpretable assessments of prediction confidence. The SC-VAE framework thus provides a robust, geostatistically informed solution for enhancing the quality and interpretability of geochemical data, with broad applicability in mineral exploration, environmental geochemistry, and other Earth Science domains.
Funding: supported by the National Natural Science Foundation of China (No. 62134004).
Abstract: To enhance the denoising performance of event-based sensors, we introduce a clustering-based temporal deep neural network denoising method (CBTDNN). First, a combination of density-based spatial clustering of applications with noise (DBSCAN) and K-means++ is used to cluster the sensor output data and obtain the respective cluster centers. Subsequently, long short-term memory (LSTM) is employed to fit and yield optimized cluster centers with temporal information. Lastly, based on the new cluster centers and a denoising ratio, a radius threshold is set, and noise points beyond this threshold are removed. The comprehensive denoising metric F1_score of CBTDNN reaches 0.8931, 0.7735, and 0.9215 on the traffic sequences dataset, the pedestrian detection dataset, and the turntable dataset, respectively, demonstrating improvements of 49.90%, 33.07%, 19.31%, and 22.97% over four contrastive algorithms, namely nearest neighbor (NNb), nearest neighbor with polarity (NNp), Autoencoder, and multilayer perceptron denoising filter (MLPF). These results demonstrate that the proposed method enhances the denoising performance of event-based sensors.
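The first and last steps of CBTDNN, density clustering to find centers and a radius threshold tied to the denoising ratio, can be sketched with scikit-learn; the LSTM refinement of the centers is omitted here, and all parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import DBSCAN, KMeans

def cluster_denoise(events, eps=0.5, k=2, keep_ratio=0.9):
    """Cluster event points, then drop points beyond a radius threshold."""
    # DBSCAN rejects sparse outliers; K-means++ then places k centers on the core points
    core = events[DBSCAN(eps=eps, min_samples=5).fit_predict(events) != -1]
    centers = KMeans(n_clusters=k, init="k-means++", n_init=10,
                     random_state=0).fit(core).cluster_centers_
    # Distance of every event to its nearest center; keep the closest keep_ratio fraction
    d = np.linalg.norm(events[:, None, :] - centers[None, :, :], axis=2).min(axis=1)
    radius = np.quantile(d, keep_ratio)     # threshold set by the denoising ratio
    return events[d <= radius]

rng = np.random.default_rng(6)
blobs = np.concatenate([rng.normal(0, 0.2, (200, 2)),    # two dense event clusters
                        rng.normal(5, 0.2, (200, 2))])
noise = rng.uniform(-3, 8, (40, 2))                      # scattered noise events
events = np.concatenate([blobs, noise])
kept = cluster_denoise(events)
```

In the full method, the LSTM tracks how the centers drift over time, so the radius test follows moving objects instead of a static scene.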