To eliminate distortion caused by vertical drift and illusory slopes in atomic force microscopy (AFM) imaging, a lifting-wavelet-based iterative thresholding correction method is proposed in this paper. This method achieves high-quality AFM imaging via line-by-line corrections of each distorted profile along the fast axis. The key to this line-by-line correction is to accurately model the profile distortion of each scanning row. Therefore, a data preprocessing approach is first developed to roughly filter out most of the height data that impairs the accuracy of distortion modeling. This process is implemented through an internal double-screening mechanism: a line-fitting method preliminarily screens out the obvious specimen data, and lifting wavelet analysis then identifies base parts mistakenly filtered out as specimens, so as to preserve most of the base profile and provide a good basis for further distortion modeling. Next, an iterative thresholding algorithm is developed to precisely model the profile distortion. Using the roughly screened base profile, the optimal threshold, which screens out the pure base data suitable for distortion modeling, is determined through iteration under a specified error rule. On this basis, the profile distortion is accurately modeled by line fitting on the finely screened base data, and the correction is implemented by subtracting the modeled distortion from the distorted profile. Finally, the effectiveness of the proposed method is verified through experiments and applications.
Data reconstruction is a crucial step in seismic data preprocessing. To improve reconstruction speed and save memory, the commonly used three-dimensional (3D) seismic data reconstruction method divides the missing data into a series of time slices and reconstructs each time slice independently. However, this strategy ignores the potential correlations between adjacent time slices, which degrades reconstruction performance. Therefore, this study proposes the use of a two-dimensional curvelet transform and the fast iterative shrinkage thresholding algorithm for data reconstruction. Based on the significant overlap between the curvelet coefficient support sets of two adjacent time slices, a weighted operator is constructed in the curvelet domain using the prior support set provided by the previously reconstructed time slice to delineate the main energy distribution range, effectively providing prior information for reconstructing adjacent slices. The resulting weighted fast iterative shrinkage thresholding algorithm can then be used to reconstruct 3D seismic data. Processing of synthetic and field data shows that the proposed method achieves higher reconstruction accuracy and faster computation than the conventional fast iterative shrinkage thresholding algorithm for handling missing 3D seismic data.
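The fast iterative shrinkage thresholding algorithm (FISTA) at the core of this approach can be sketched in its generic form. The example below solves an l1-regularized least-squares problem with the identity as sparsifying transform; the 2D curvelet transform, the weighted operator, and all seismic-specific details of the paper are omitted, so this is only an illustrative baseline.

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft thresholding (proximal operator of the l1 norm)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(A, b, lam, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with FISTA.

    In the seismic setting, A would additionally involve a sampling
    operator and a curvelet synthesis; here A is a plain matrix.
    """
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_new + (t - 1) / t_new * (x_new - x)   # momentum (extrapolation) step
        x, t = x_new, t_new
    return x
```

The weighted variant of the paper would replace the uniform threshold `lam / L` with a per-coefficient threshold that is smaller on the prior support set supplied by the previously reconstructed slice.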
In this paper, we explore the use of iterative curvelet thresholding for seismic random noise attenuation. A new method combining the curvelet transform with iterative thresholding to suppress random noise is demonstrated, and the problem is formulated as a linear inverse problem using the L1 norm: random noise suppression in seismic data is transformed into an L1-norm optimization problem based on the curvelet sparsity transform. Compared with conventional methods such as median filtering, F-X deconvolution, and wavelet thresholding, the results of synthetic and field data processing show that the proposed iterative curvelet thresholding can substantially improve the signal-to-noise ratio (SNR) while preserving higher signal fidelity. Furthermore, to make better use of the curvelet transform's multiple scales and directions, we control the curvelet directions of the result after iterative curvelet thresholding to further improve the SNR.
Seismic time-frequency (TF) transforms are essential tools in reservoir interpretation and signal processing, particularly for characterizing frequency variations in non-stationary seismic data. Recently, sparse TF transforms, which leverage sparse coding (SC), have gained significant attention in the geosciences due to their ability to achieve high TF resolution. However, the iterative approaches typically employed in sparse TF transforms are computationally intensive, making them impractical for real seismic data analysis. To address this issue, we propose an interpretable convolutional sparse coding (CSC) network, named ULISTANet, to achieve high TF resolution. The proposed model is built on the traditional short-time Fourier transform (STFT) and a modified U-Net. In this design, we replace the conventional convolutional layers of the U-Net with learnable iterative shrinkage thresholding algorithm (LISTA) blocks, a specialized form of CSC. The LISTA block, which evolves from the traditional iterative shrinkage thresholding algorithm (ISTA), is optimized to extract sparse features more effectively. Furthermore, we create a synthetic dataset featuring complex frequency-modulated signals to train ULISTANet. Finally, the proposed method's performance is validated using both synthetic and field data, demonstrating its potential for enhanced seismic data analysis.
Matrix completion is an extension of compressed sensing. In compressed sensing, we solve underdetermined equations using a sparsity prior on the unknown signal. In matrix completion, by contrast, we solve underdetermined equations based on a sparsity prior on the singular values of the unknown matrix, also called a low-rank prior. This paper first introduces the basic concepts of matrix completion, analyzes which matrices are suitable for matrix completion, and shows that such matrices should satisfy two conditions: low rank and the incoherence property. The paper then reviews three reconstruction algorithms commonly used in matrix completion, namely the singular value thresholding algorithm, singular value projection, and atomic decomposition for minimum rank approximation, and points out their shared shortcoming: they require knowledge of the rank of the original matrix. The Projected Gradient Descent based on Soft Thresholding (STPGD) algorithm proposed in this paper predicts the rank of the unknown matrix using soft thresholding and iterates via projected gradient descent, so it can estimate the rank of the unknown matrix accurately with low computational complexity; this is verified by numerical experiments. We also analyze the convergence and computational complexity of the STPGD algorithm, show that it is guaranteed to converge, and analyze the number of iterations needed to reach a given reconstruction error. Comparing the computational complexity of STPGD with that of other algorithms, we conclude that STPGD not only reduces the computational complexity but also improves the precision of the reconstructed solution.
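The singular value thresholding step shared by these algorithms can be sketched compactly. The completion loop below is a minimal soft-impute-style iteration for illustration only; it is not the STPGD algorithm of the paper, whose rank-prediction rule is not reproduced here, and the parameters `tau` and `n_iter` are illustrative assumptions.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft thresholding: shrink every singular value of M
    by tau and discard those that fall below zero (the proximal operator
    of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # rescale U's columns by shrunk s

def complete(M_obs, mask, tau=0.1, n_iter=300):
    """Minimal matrix-completion loop: alternate a data-consistency step
    on the observed entries (mask == 1) with singular value shrinkage."""
    X = np.zeros_like(M_obs)
    for _ in range(n_iter):
        X = svt(X + mask * (M_obs - X), tau)
    return X
```

Because `svt` zeros out small singular values, each iterate stays low-rank, which is exactly the prior that makes recovery from partial observations possible.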
This paper proposes an image segmentation method based on the combination of wavelet multi-scale edge detection and entropy-based iterative threshold selection. The image to be segmented is divided into high- and low-frequency parts: in the high-frequency part, wavelet multi-scale analysis is used for edge detection, while the low-frequency part is segmented using the entropy-based iterative threshold selection method. To account for both image edges and regions, a CT image of the thorax was chosen to test the proposed method on segmentation of the lungs. Experimental results show that the method segments the region of interest of an image more efficiently than conventional methods.
Today, error correcting codes are present in all telecom standards, in particular low-density parity-check (LDPC) codes. The choice of a good code for a given network is essentially based on the decoding performance shown by bit error rate (BER) curves. This approach requires a simulation time proportional to the length of the code; to overcome this problem, the EXIT chart was introduced as a fast technique to predict the performance of a particular class of codes called turbo codes. In this paper, we successfully apply EXIT charts to analyze the convergence behavior of iterative threshold decoding of one-step majority-logic decodable (OSMLD) codes. The iterative decoding process uses a soft-input soft-output threshold decoding algorithm as the component decoder. Simulation results for iterative decoding of simple and concatenated codes transmitted over a Gaussian channel show that the thresholds obtained are a good indicator of the BER curves.
To address the low accuracy and high false positive rate of the traditional Otsu algorithm for defect detection on infrared images of wind turbine blades (WTB), this paper proposes a technique that combines morphological image enhancement with an improved Otsu algorithm. First, differential multi-scale white and black top-hat operations from mathematical morphology are applied to enhance the image. The algorithm uses entropy as the objective function to guide the enhancement iterations, selecting appropriate structural element scales for the differential multi-scale white and black top-hat transformations; this effectively enhances the detail features of defect regions and improves the contrast between defects and background. Afterwards, grayscale inversion is performed on the enhanced infrared defect image to better suit the improved Otsu algorithm. Finally, a parameter K is introduced to adjust the inter-class variance calculation in the Otsu method, increasing the weight of the target pixels; combined with an adaptive iterative threshold algorithm, this further refines threshold selection. Experimental results show that, compared with the traditional Otsu algorithm and other improvements, the proposed method offers significant advantages in defect detection accuracy and false positive reduction: the average defect detection rate approaches 1, and the average Hausdorff distance decreases to 0.825, indicating strong robustness and accuracy.
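For reference, the classical Otsu search that the paper modifies can be sketched as follows. The parameter `k` below weights the target-class term of the inter-class variance (`k = 1` recovers standard Otsu); the exact form of the paper's parameter K is an assumption here, not taken from the source.

```python
import numpy as np

def otsu_threshold(gray, k=1.0):
    """Exhaustive Otsu search over 8-bit gray levels.

    The inter-class variance is w0*(mu0 - muT)^2 + k*w1*(mu1 - muT)^2;
    with k = 1 this is the classical Otsu criterion, and k > 1 increases
    the weight of the (brighter) target class.
    """
    levels = np.arange(256, dtype=float)
    p = np.bincount(gray.ravel(), minlength=256).astype(float)
    p /= p.sum()                                   # gray-level probabilities
    mu_t = (levels * p).sum()                      # global mean
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0.0 or w1 == 0.0:                 # skip degenerate splits
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var = w0 * (mu0 - mu_t) ** 2 + k * w1 * (mu1 - mu_t) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t                                  # pixels >= best_t are "target"
```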
In this paper, we propose a variable metric extrapolation proximal iterative hard thresholding (VMEPIHT) method for the nonconvex l0-norm sparsity regularization problem, which has wide applications in signal and image processing, machine learning, and so on. The VMEPIHT method is based on the forward-backward splitting (FBS) method, and a variable metric strategy is employed in the extrapolation step to speed up the algorithm. The proposed method's convergence, linear convergence rate, and superlinear convergence rate are established under appropriate assumptions. Finally, we conduct numerical experiments on a compressed sensing problem and a CT image reconstruction problem to confirm the efficiency of the proposed method compared with other state-of-the-art methods.
Missing data are a problem in geophysical surveys, and interpolation and reconstruction of missing data are part of data processing and interpretation. Based on the sparseness of geophysical data in the spatial or transform domain, we can improve the accuracy and stability of reconstruction by casting it as a sparse optimization problem. In this paper, we propose a mathematical model for sparse data reconstruction based on L0-norm minimization. Furthermore, we discuss two types of approximation algorithms for L0-norm minimization according to the size and characteristics of the geophysical data: the iteratively reweighted least-squares algorithm and the fast iterative hard thresholding algorithm. Theoretical and numerical analysis shows that the iteratively reweighted least-squares algorithm suits the reconstruction of potential field data, exploiting its fast convergence rate, short calculation time, and high precision, whereas the fast iterative hard thresholding algorithm is more suitable for processing seismic data; moreover, its computational efficiency is better than that of the traditional iterative hard thresholding algorithm.
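The iteratively reweighted least-squares idea for the underdetermined case can be sketched in its FOCUSS-style form: each iteration solves a weighted minimum-norm problem whose weights come from the previous iterate, so large components are penalized less and the iterates concentrate onto a sparse support. The stabilizing `eps` and all parameter choices below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def irls_sparse(A, b, n_iter=50, eps=1e-6):
    """Iteratively reweighted least squares for A x = b with A wide.

    Each iterate minimizes sum_i x_i^2 / (x_prev_i^2 + eps) subject to
    A x = b, computed in closed form; eps keeps the weight matrix
    invertible as components shrink toward zero.
    """
    # start from the minimum-l2-norm solution
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        D = np.diag(x ** 2 + eps)                      # inverse weights
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)  # weighted min-norm step
    return x
```

Note that every iterate interpolates the data exactly (A x = b); the reweighting only redistributes the energy of the solution toward a sparse support.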
In the medical computed tomography (CT) field, total variation (TV), which is the l1-norm of the discrete gradient transform (DGT), is widely used as regularization based on compressive sensing (CS) theory. To overcome the TV model's disadvantageous tendency to uniformly penalize the image gradient and over-smooth low-contrast structures, an iterative algorithm based on l0-norm optimization of the DGT is proposed. To meet the challenges introduced by the l0-norm DGT, the algorithm uses a pseudo-inverse transform of the DGT and adapts an iterative hard thresholding (IHT) algorithm, whose convergence and efficiency have been theoretically proven. Simulations support our conclusions and indicate that the proposed algorithm clearly improves reconstruction quality.
Least-squares reverse-time migration (LSRTM) formulates reverse-time migration (RTM) in the least-squares inversion framework to obtain the optimal reflectivity image. It can generate images with more accurate amplitudes, higher resolution, and fewer artifacts than RTM. However, three problems remain: (1) the inversion can be dominated by strong events in the residual; (2) low-wavenumber artifacts in the gradient affect convergence speed and imaging results; (3) high-wavenumber noise is amplified as iterations increase. To solve these problems, we improve LSRTM in three ways: first, we use the Huber norm as the objective function to emphasize weak reflectors during the inversion; second, we adopt the de-primary imaging condition to remove the low-wavenumber artifacts above strong reflectors as well as the false high-wavenumber reflectors in the gradient; third, we apply an L1-norm sparse constraint in the curvelet domain as the regularization term to suppress high-wavenumber migration noise. Because the new inversion objective function contains the non-smooth L1 norm, we use a modified iterative soft thresholding (IST) method to update along the Polak-Ribiere conjugate-gradient direction within a preconditioned non-linear conjugate-gradient (PNCG) method. Numerical examples, especially the Sigsbee2A model, demonstrate that the Huber inversion-based RTM can generate high-quality images by mitigating migration artifacts and improving the contribution of weak reflection events.
This paper presents an intelligent protograph construction algorithm. Protograph LDPC codes have shown excellent error correction performance and play an important role in wireless communications. Random search or manual construction is often used to obtain a good protograph, but the efficiency is not high enough and much experience and skill are needed. In this paper, a fast searching algorithm is proposed that uses a convolutional neural network to predict the iterative decoding thresholds of protograph LDPC codes effectively. A special input data transformation rule is applied to provide stronger generalization ability. The proposed algorithm converges faster than other algorithms. The iterative decoding threshold of the constructed protograph surpasses those found by a greedy algorithm and by random search by about 0.53 dB and 0.93 dB, respectively, under 100 iterations of density evolution. Simulation results show that quasi-cyclic LDPC (QC-LDPC) codes constructed with the proposed algorithm have competitive performance compared to those in other papers.
In view of weak defect signals and the large volume of acoustic emission (AE) data in low-speed bearing condition monitoring, we propose a bearing fault diagnosis technique based on a combination of empirical mode decomposition (EMD), the clear iterative interval threshold (CIIT), and kernel-based fuzzy c-means (KFCM) eigenvalue extraction. In this technique, we use EMD-CIIT and EMD to remove noise and extract the intrinsic mode functions (IMFs). We then select the first three IMFs and calculate their histogram entropies as the main fault features. These features are used for bearing fault classification with the KFCM technique. The results show that the combined EMD-CIIT and KFCM algorithm can accurately identify various bearing faults from AE signals acquired on a low-speed bearing test rig.
Manual operation of a gauge calibrator is tedious and time-consuming. Automatic transformation of the calibrator is realized through the integration of motion control and machine vision technology. Two closed-loop positioning subsystems, composed of a digital caliper length standard, a chain drive mechanism, and a stepping motor, realize automatic motion control of the gauge and of ultra-high-parameter measurement points. Based on luminance, geometric shape structure, and fitting feature information, all indicator features and their relationships are identified in the display dial image to realize machine vision reading. Noise is removed by an image difference method, the image is binarized by an adaptive iterative threshold method, and the cross-sectional contour of the track is obtained by dilation and thinning algorithms. Experimental results show that this method can effectively realize high-precision dynamic measurement of gauge parameters.
The iterative hard thresholding (IHT) algorithm is a powerful and efficient algorithm for solving l0-regularized problems and has inspired many applications in sparse approximation and image processing. Recently, convergence results have been established for the proximal scheme of IHT, namely the proximal iterative hard thresholding (PIHT) algorithm (Blumensath and Davies, J Fourier Anal Appl 14:629–654, 2008; Hu et al., Methods 67:294–303, 2015; Lu, Math Program 147:125–154, 2014; Trzasko et al., IEEE/SP 14th Workshop on Statistical Signal Processing, 2007), for solving the related l0-optimization problems. However, the complexity analysis of the PIHT algorithm has not been well explored. In this paper, we aim to provide complexity estimates for PIHT sequences. In particular, we show that the complexity of the sequential iterate error is o(1/k). Under the assumption that the objective function is composed of a quadratic convex function and l0 regularization, we show that the PIHT algorithm has an R-linear convergence rate. Finally, we illustrate applications of this algorithm to compressive sensing reconstruction and sparse learning, and validate the estimated error bounds.
To address the incompleteness problem in pulmonary parenchyma segmentation with traditional methods, a novel automated segmentation method based on an eight-neighbor region growing algorithm with left-right scanning and four-corner rotating and scanning is proposed in this paper. The proposed method consists of four main stages: image binarization, rough segmentation of the lung, image denoising, and lung contour refinement. First, the images are binarized and the regions of interest are extracted. After that, rough segmentation of the lung is performed through a general region growing method. Then the improved eight-neighbor region growing is used to remove noise in the upper, middle, and bottom regions of the lung. Finally, erosion and dilation operations are used to smooth the lung boundary. The proposed method was validated on chest positron emission tomography-computed tomography (PET-CT) data of 30 cases from a hospital in Shanxi, China. Experimental results show that our method achieves an average volume overlap ratio of 96.21 ± 0.39% with the manual segmentation results. Compared with existing methods, the proposed algorithm segments the lung in PET-CT images more efficiently and accurately.
Iterative hard thresholding (IHT) and compressive sampling matching pursuit (CoSaMP) are two mainstream compressed sensing algorithms that use the hard thresholding operator. The guaranteed performance of the two algorithms for signal recovery has mainly been analyzed in terms of the restricted isometry property (RIP) of sensing matrices. At present, the best known bound using the RIP of order 3k for guaranteed performance of IHT (with unit stepsize) is δ3k < 1/√3 ≈ 0.5774, and the bound for CoSaMP using the RIP of order 4k is δ4k < 0.4782. A fundamental question in this area is whether such theoretical results can be further improved. The purpose of this paper is to answer this question affirmatively and to rigorously show that the abovementioned RIP bound for guaranteed performance of IHT can be significantly improved to δ3k < (√5 − 1)/2 ≈ 0.618, and the bound for CoSaMP can be improved to δ4k < 0.5102.
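The unit-stepsize IHT iteration analyzed here is short enough to state directly. The sketch below uses a generic Gaussian sensing matrix, which has small restricted isometry constants with high probability; the problem sizes are illustrative assumptions.

```python
import numpy as np

def hard_threshold(x, k):
    """H_k: keep the k largest-magnitude entries of x and zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def iht(A, b, k, n_iter=200):
    """Iterative hard thresholding with unit stepsize:
    x <- H_k(x + A^T (b - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + A.T @ (b - A @ x), k)
    return x
```

Every iterate is k-sparse by construction; the RIP constant δ3k controls the contraction of the error between successive iterates, which is exactly the quantity the improved bound δ3k < (√5 − 1)/2 concerns.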
The l1 norm is the tightest convex relaxation of the l0 norm and has been successfully applied to recovering sparse signals. However, for problems with fewer samples than required for accurate l1 recovery, one needs to apply nonconvex penalties such as the lp norm. As one method for solving lp minimization problems, iteratively reweighted l1 minimization updates the weight for each component based on the value of that component at the previous iteration: it assigns large weights to components that are small in magnitude and small weights to components that are large in magnitude. The set of weights is not fixed, which makes the analysis of this method difficult. In this paper, we consider a weighted l1 penalty with a fixed set of weights, assigned according to the sorted order of all components by magnitude: the smallest weight is assigned to the largest component in magnitude. This new penalty is called nonconvex sorted l1. We then propose two methods for solving nonconvex sorted l1 minimization problems, iteratively reweighted l1 minimization and iterative sorted thresholding, and prove that both methods converge to a local minimizer of the nonconvex sorted l1 minimization problem. We also show that the two methods are generalizations of iterative support detection and iterative hard thresholding, respectively. Numerical experiments demonstrate the better performance of assigning weights by sort compared to assigning by value.
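The weight-by-sort rule can be illustrated with a single shrinkage step: the weights are fixed and sorted in nondecreasing order, and the i-th smallest weight is applied to the i-th largest component in magnitude. This is only a sketch of one step under that assignment rule, not the full iterative sorted thresholding algorithm with its convergence safeguards.

```python
import numpy as np

def sorted_soft_threshold(x, weights):
    """Soft thresholding with weights assigned by sort.

    weights must be nondecreasing, so the smallest weight shrinks the
    largest-magnitude component (the nonconvex sorted l1 rule, the
    opposite of reweighting by value).
    """
    order = np.argsort(-np.abs(x))            # indices from largest magnitude down
    out = np.zeros_like(x)
    shrunk = np.maximum(np.abs(x[order]) - weights, 0.0)
    out[order] = np.sign(x[order]) * shrunk
    return out
```

With all weights equal this reduces to ordinary soft thresholding; letting the largest weights go to infinity while the smallest go to zero recovers hard-thresholding-like behavior, in line with the generalization claims above.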
Funding: supported by the National Natural Science Foundation of China under Grant No. 21933006.
Funding: National Natural Science Foundation of China under Grant 42304145; Jiangxi Provincial Natural Science Foundation under Grants 20242BAB26051, 20242BAB25191, and 20232BAB213077; Foundation of National Key Laboratory of Uranium Resources Exploration-Mining and Nuclear Remote Sensing under Grant 2024QZ-TD-13; Open Fund (FW0399-0002) of SINOPEC Key Laboratory of Geophysics.
Funding: the National Science & Technology Major Projects (Grant No. 2008ZX05023-005-013).
Funding: supported by the National Natural Science Foundation of China under Grant 42474139 and the Key Research and Development Program of Shaanxi under Grant 2024GX-YBXM-067.
Funding: Supported by the National Natural Science Foundation of China (No. 61271240), the Jiangsu Province Natural Science Fund Project (No. BK2010077), and a Subject of the Twelfth Five-Year Plan of Jiangsu Second Normal University (No. 417103).
Abstract: Matrix completion is an extension of compressed sensing. In compressed sensing, we solve underdetermined equations using a sparsity prior on the unknown signals; in matrix completion, we solve underdetermined equations using a sparsity prior on the set of singular values of the unknown matrix, also called a low-rank prior. This paper first introduces the basic concepts of matrix completion and analyzes which matrices are suitable for it, showing that such a matrix should satisfy two conditions: low rank and the incoherence property. The paper then reviews three reconstruction algorithms commonly used in matrix completion (singular value thresholding, singular value projection, and atomic decomposition for minimum rank approximation) and points out their shared shortcoming: they require the rank of the original matrix to be known. The Projected Gradient Descent based on Soft Thresholding (STPGD) algorithm proposed in this paper predicts the rank of the unknown matrix using soft thresholding and iterates via projected gradient descent, so it can estimate the rank of the unknown matrix accurately with low computational complexity, as verified by numerical experiments. We also analyze the convergence and computational complexity of the STPGD algorithm, show that it is guaranteed to converge, and bound the number of iterations needed to reach a given reconstruction error. Comparing the computational complexity of STPGD with that of other algorithms, we conclude that STPGD not only reduces the computational complexity but also improves the precision of the reconstructed solution.
Funding: Supported by the Science Research Foundation of Yunnan Fundamental Research Foundation of Application (Grant No. 2009ZC049M) and the Science Research Foundation for the Overseas Chinese Scholars, State Education Ministry (Grant No. 2010-1561).
Abstract: This paper proposes an image segmentation method that combines wavelet multi-scale edge detection with entropy-based iterative threshold selection. The image to be segmented is divided into high- and low-frequency parts. In the high-frequency part, wavelet multi-scale analysis is used for edge detection; the low-frequency part is segmented using the entropy-based iterative threshold selection method. By jointly considering image edges and regions, a CT image of the thorax was chosen to test the proposed method on the segmentation of the lungs. Experimental results show that the method segments the region of interest of an image more efficiently than conventional methods.
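The entropy-based threshold selection on the low-frequency part can be sketched as follows. This is a minimal maximum-entropy (Kapur-style) selection over a gray-level histogram, assumed here as a stand-in for the paper's iterative entropy criterion; the wavelet edge-detection half is omitted, and the function name is illustrative.

```python
import math

def max_entropy_threshold(hist):
    """Pick the gray level that maximizes the sum of the entropies
    of the below- and above-threshold histogram parts."""
    total = sum(hist)
    probs = [h / total for h in hist]
    best_t, best_h = 0, float("-inf")
    for t in range(1, len(hist)):
        p_low = sum(probs[:t])
        p_high = 1.0 - p_low
        if p_low <= 0 or p_high <= 0:
            continue
        # entropy of each class's normalized distribution
        h_low = -sum(p / p_low * math.log(p / p_low)
                     for p in probs[:t] if p > 0)
        h_high = -sum(p / p_high * math.log(p / p_high)
                      for p in probs[t:] if p > 0)
        if h_low + h_high > best_h:
            best_h, best_t = h_low + h_high, t
    return best_t

# bimodal toy histogram over 8 gray levels: dark mode at 0-2, bright at 5-7
hist = [30, 40, 25, 2, 1, 20, 35, 30]
t = max_entropy_threshold(hist)
print(t)  # threshold falls in the valley between the two modes
```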
Abstract: Today, error-correcting codes are present in all telecom standards, in particular low-density parity-check (LDPC) codes. The choice of a good code for a given network is essentially driven by the decoding performance observed in bit error rate (BER) curves. This approach requires a significant simulation time proportional to the length of the code; to overcome this problem, the EXIT chart was introduced as a fast technique to predict the performance of a particular class of codes called turbo codes. In this paper, we successfully apply EXIT charts to analyze the convergence behavior of iterative threshold decoding of one-step majority-logic decodable (OSMLD) codes. The iterative decoding process uses a soft-input soft-output threshold decoding algorithm as the component decoder. Simulation results for iterative decoding of simple and concatenated codes transmitted over a Gaussian channel show that the thresholds obtained are a good indicator of the BER curves.
Funding: Supported by the Natural Science Foundation of Jilin Province (YDZJ202401352ZYTS).
Abstract: To address the low accuracy and high false-positive rate of the traditional Otsu algorithm in defect detection on infrared images of wind turbine blades (WTB), this paper proposes a technique that combines morphological image enhancement with an improved Otsu algorithm. First, mathematical morphology's differential multi-scale white and black top-hat operations are applied to enhance the image. The algorithm employs entropy as the objective function to guide the iteration process of image enhancement, selecting appropriate structuring-element scales for the differential multi-scale white and black top-hat transformations, which effectively enhances the detail features of defect regions and improves the contrast between defects and background. Afterwards, grayscale inversion is performed on the enhanced infrared defect image to better suit the improved Otsu algorithm. Finally, a parameter K is introduced to adjust the calculation of the inter-class variance in the Otsu method, increasing the weight of the target pixels; combined with an adaptive iterative threshold algorithm, the threshold selection process is further fine-tuned. Experimental results show that, compared with the traditional Otsu algorithm and other improvements, the proposed method offers significant advantages in defect detection accuracy and false-positive reduction: the average defect detection rate approaches 1, and the average Hausdorff distance decreases to 0.825, indicating strong robustness and accuracy.
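The weighted inter-class variance idea can be sketched as below. The exact form of the paper's K-weighted criterion is not given in the abstract, so this is a hypothetical variant in which K scales the foreground (target) term of the classical between-class variance; `weighted_otsu` is an illustrative name, and k = 1.0 reduces to standard Otsu.

```python
def weighted_otsu(hist, k=1.0):
    """Otsu threshold with a hypothetical weight k boosting the
    target (foreground) class in the between-class variance.
    k = 1.0 recovers the classical Otsu criterion."""
    total = sum(hist)
    mean_all = sum(i * h for i, h in enumerate(hist)) / total
    best_t, best_var = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(hist[:t]) / total
        w1 = 1.0 - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * h for i, h in enumerate(hist[:t])) / (w0 * total)
        mu1 = sum(i * hist[i] for i in range(t, len(hist))) / (w1 * total)
        # between-class variance, with the foreground term scaled by k
        var = w0 * (mu0 - mean_all) ** 2 + k * w1 * (mu1 - mean_all) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

hist = [40, 35, 5, 2, 3, 30, 45, 20]   # toy bimodal histogram
print(weighted_otsu(hist))             # classical Otsu split
print(weighted_otsu(hist, k=2.0))      # foreground-weighted variant
```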
Funding: Supported by the National Natural Science Foundation of China (No. 11901368).
Abstract: In this paper, we propose a variable metric extrapolation proximal iterative hard thresholding (VMEPIHT) method for the nonconvex ℓ_0-norm sparsity regularization problem, which has wide applications in signal and image processing, machine learning, and beyond. The VMEPIHT method is based on the forward-backward splitting (FBS) method, and a variable metric strategy is employed in the extrapolation step to speed up the algorithm. The proposed method's convergence, linear convergence rate, and superlinear convergence rate are established under appropriate assumptions. Finally, we conduct numerical experiments on a compressed sensing problem and a CT image reconstruction problem to confirm the efficiency of the proposed method compared with other state-of-the-art methods.
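The forward-backward step with extrapolation can be sketched as follows. This is a deliberately stripped-down illustration, not the paper's method: it uses a fixed scalar step in place of a variable metric, a simple quadratic objective, and hypothetical names (`extrapolated_piht`, `hard_threshold`).

```python
def hard_threshold(x, s):
    """Keep the s largest-magnitude entries, zero the rest."""
    keep = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:s]
    return [x[i] if i in keep else 0.0 for i in range(len(x))]

def extrapolated_piht(grad, x0, s, step=1.0, beta=0.5, iters=20):
    """Proximal IHT with extrapolation: take the gradient step at an
    extrapolated point v = x + beta*(x - x_prev), then hard-threshold."""
    x_prev = list(x0)
    x = list(x0)
    for _ in range(iters):
        v = [xi + beta * (xi - xpi) for xi, xpi in zip(x, x_prev)]
        x_prev = x
        x = hard_threshold([vi - step * g for vi, g in zip(v, grad(v))], s)
    return x

# toy objective f(x) = 0.5*||x - y||^2, so grad(x) = x - y
y = [4.0, 0.2, -3.0, 0.1]
grad = lambda x: [xi - yi for xi, yi in zip(x, y)]
print(extrapolated_piht(grad, [0.0] * 4, s=2))  # → [4.0, 0.0, -3.0, 0.0]
```

In the full method the scalar `step` is replaced by a variable metric, which is what accelerates the extrapolation step.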
Funding: Supported by the National Natural Science Foundation of China (Grant No. 41074133).
Abstract: Missing data are a problem in geophysical surveys, and interpolation and reconstruction of missing data are part of data processing and interpretation. Based on the sparseness of the geophysical data in the original or a transform domain, we can improve the accuracy and stability of reconstruction by casting it as a sparse optimization problem. In this paper, we propose a mathematical model for sparse data reconstruction based on L0-norm minimization. Furthermore, we discuss two approximation algorithms for L0-norm minimization, chosen according to the size and characteristics of the geophysical data: the iteratively reweighted least-squares algorithm and the fast iterative hard thresholding algorithm. Theoretical and numerical analysis shows that applying the iteratively reweighted least-squares algorithm to the reconstruction of potential field data exploits its fast convergence rate, short calculation time, and high precision, whereas the fast iterative hard thresholding algorithm is more suitable for processing seismic data; moreover, its computational efficiency is better than that of the traditional iterative hard thresholding algorithm.
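The core iterative hard thresholding update can be sketched as below. This is a minimal illustration under simplifying assumptions: the sanity check uses the identity operator so the fixed point is easy to verify by hand, whereas a real geophysical reconstruction would use a sampling or transform operator for `A` (and the fast variant adds an adaptive step size, not shown here).

```python
def hard_threshold(x, s):
    """Keep the s entries of largest magnitude, zero out the rest."""
    keep = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:s]
    return [x[i] if i in keep else 0.0 for i in range(len(x))]

def iht(A, y, s, iters=30, step=1.0):
    """Iterative hard thresholding for y = A x with x assumed s-sparse:
    x <- H_s(x + step * A^T (y - A x))."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [y[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = hard_threshold([x[j] + step * g[j] for j in range(n)], s)
    return x

# sanity check with the identity operator: IHT then simply keeps
# the s largest-magnitude observations
I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
y = [0.3, -4.0, 2.5, -0.1]
print(iht(I4, y, s=2))  # → [0.0, -4.0, 2.5, 0.0]
```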
Abstract: In the medical computed tomography (CT) field, total variation (TV), which is the l1-norm of the discrete gradient transform (DGT), is widely used as a regularizer based on compressive sensing (CS) theory. To overcome the TV model's disadvantageous tendency to uniformly penalize the image gradient and over-smooth low-contrast structures, an iterative algorithm based on l0-norm optimization of the DGT is proposed. To meet the challenges introduced by the l0-norm DGT, the algorithm uses a pseudo-inverse transform of the DGT and adapts an iterative hard thresholding (IHT) algorithm, whose convergence and efficiency have been theoretically proven. The simulation supports our conclusions and indicates that the proposed algorithm can markedly improve reconstruction quality.
Funding: Supported by the National Key R&D Program of China (No. 2018YFA0702502), NSFC (Grant Nos. 41974142, 42074129, and 41674114), the Science Foundation of China University of Petroleum (Beijing) (Grant No. 2462020YXZZ005), and the State Key Laboratory of Petroleum Resources and Prospecting (Grant No. PRP/indep-42012).
Abstract: Least-squares reverse-time migration (LSRTM) formulates reverse-time migration (RTM) in the least-squares inversion framework to obtain the optimal reflectivity image. It can generate images with more accurate amplitudes, higher resolution, and fewer artifacts than RTM. However, three problems remain: (1) the inversion can be dominated by strong events in the residual; (2) low-wavenumber artifacts in the gradient affect convergence speed and imaging results; (3) high-wavenumber noise is amplified as the iterations proceed. To solve these three problems, we have improved LSRTM as follows: first, we use the Huber norm as the objective function to emphasize weak reflectors during the inversion; second, we adopt the de-primary imaging condition to remove the low-wavenumber artifacts above strong reflectors as well as the false high-wavenumber reflectors in the gradient; third, we apply an L1-norm sparsity constraint in the curvelet domain as the regularization term to suppress high-wavenumber migration noise. As the new inversion objective function contains the non-smooth L1 norm, we use a modified iterative soft thresholding (IST) method to update along the Polak-Ribière conjugate-gradient direction within a preconditioned non-linear conjugate-gradient (PNCG) method. The numerical examples, especially the Sigsbee2A model, demonstrate that the Huber inversion-based RTM can generate high-quality images by mitigating migration artifacts and improving the contribution of weak reflection events.
Funding: Supported in part by the Project on the Industry Key Technologies of Jiangsu Province (No. BE2017153) and the Industry-University-Research Fund of ZTE Corporation.
Abstract: This paper presents an intelligent protograph construction algorithm. Protograph LDPC codes have shown excellent error-correction performance and play an important role in wireless communications. Random search or manual construction is often used to obtain a good protograph, but the efficiency is not high enough, and much experience and skill are needed. In this paper, a fast search algorithm is proposed that uses a convolutional neural network to predict the iterative decoding thresholds of protograph LDPC codes effectively. A special input-data transformation rule is applied to provide stronger generalization ability. The proposed algorithm converges faster than other algorithms. The iterative decoding threshold of the constructed protograph surpasses those of the greedy algorithm and random search by about 0.53 dB and 0.93 dB, respectively, under 100 iterations of density evolution. Simulation results show that quasi-cyclic LDPC (QC-LDPC) codes constructed with the proposed algorithm have competitive performance compared to those in other papers.
Funding: Supported by the Shandong Provincial Government's "Taishan Scholar" Program.
Abstract: In view of weak defect signals and large volumes of acoustic emission (AE) data in low-speed bearing condition monitoring, we propose a bearing fault diagnosis technique based on a combination of empirical mode decomposition (EMD), clear iterative interval thresholding (CIIT), and kernel-based fuzzy c-means (KFCM) eigenvalue extraction. In this technique, we use EMD-CIIT and EMD to perform noise removal and to extract the intrinsic mode functions (IMFs). We then select the first three IMFs and compute their histogram entropies as the main fault features. These features are used for bearing fault classification with the KFCM technique. The results show that the combined EMD-CIIT and KFCM algorithm can accurately identify various bearing faults from AE signals acquired on a low-speed bearing test rig.
Abstract: Manual operation of a gauge calibrator is tedious and time-consuming. The automatic transformation of the calibrator is realized through the integration of motion control and machine vision technology. Two closed-loop positioning subsystems, composed of a digital caliper length standard, a chain drive mechanism, and a stepping motor, realize automatic motion control of the gauge and of the ultra-high parameter measurement points. Based on luminance, geometric shape structure, and fitted feature information, all kinds of indicator features and their relationships are identified in the display dial image to realize machine-vision reading. Noise is removed by the image difference method, the image is binarized by an adaptive iterative threshold method, and the cross-sectional contour of the track is obtained by dilation and thinning algorithms. The experimental results show that this method can effectively realize high-precision dynamic measurement of gauge parameters.
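Adaptive iterative threshold selection of the kind used for the binarization step can be sketched as follows. This is a generic ISODATA-style midpoint iteration, assumed here as an illustration of the technique rather than the paper's exact procedure; `iterative_threshold` is an illustrative name.

```python
def iterative_threshold(pixels, eps=0.5):
    """Adaptive iterative threshold selection (ISODATA-style):
    repeatedly set the threshold to the midpoint of the means of
    the two classes it induces, until it stops moving."""
    t = sum(pixels) / len(pixels)          # start at the global mean
    while True:
        low = [p for p in pixels if p <= t]
        high = [p for p in pixels if p > t]
        if not low or not high:
            return t
        new_t = (sum(low) / len(low) + sum(high) / len(high)) / 2.0
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

pixels = [10, 12, 11, 13, 200, 205, 198, 202]  # dark marks vs bright dial
t = iterative_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
print(t, binary)  # threshold lands between the two intensity clusters
```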
Funding: Supported by the National Natural Science Foundation of China (No. 91330102) and the 973 Program (No. 2015CB856000).
Abstract: The iterative hard thresholding (IHT) algorithm is a powerful and efficient algorithm for solving l_(0)-regularized problems and has inspired many applications in sparse approximation and image processing. Recently, convergence results have been established for the proximal scheme of IHT, namely the proximal iterative hard thresholding (PIHT) algorithm (Blumensath and Davies, in J Fourier Anal Appl 14:629–654, 2008; Hu et al., Methods 67:294–303, 2015; Lu, Math Program 147:125–154, 2014; Trzasko et al., IEEE/SP 14th Workshop on Statistical Signal Processing, 2007), for solving the related l_(0)-optimization problems. However, the complexity analysis of the PIHT algorithm has not been well explored. In this paper, we aim to provide complexity estimates for PIHT sequences. In particular, we show that the sequential iterate error has complexity o(1/k). Under the assumption that the objective function is composed of a quadratic convex function and l_(0) regularization, we show that the PIHT algorithm has an R-linear convergence rate. Finally, we illustrate applications of this algorithm to compressive sensing reconstruction and sparse learning, and validate the estimated error bounds.
Abstract: To address the problem of incomplete pulmonary parenchyma segmentation with traditional methods, a novel automated segmentation method based on an eight-neighbor region growing algorithm with left-right scanning and four-corner rotating and scanning is proposed in this paper. The proposed method consists of four main stages: image binarization, rough segmentation of the lung, image denoising, and lung contour refinement. First, the images are binarized and the regions of interest are extracted. After that, rough segmentation of the lung is performed through a general region growing method. Then the improved eight-neighbor region growing is used to remove noise in the upper, middle, and bottom regions of the lung. Finally, erosion and dilation operations are utilized to smooth the lung boundary. The proposed method was validated on chest positron emission tomography-computed tomography (PET-CT) data of 30 cases from a hospital in Shanxi, China. Experimental results show that our method achieves an average volume overlap ratio of 96.21 ± 0.39% with the manual segmentation results. Compared with existing methods, the proposed algorithm segments the lung in PET-CT images more efficiently and accurately.
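Basic eight-neighbor region growing, the building block the method above improves on, can be sketched as below. This is a generic breadth-first version on a toy binary grid, not the paper's left-right/four-corner scanning variant; the seed is assumed to lie inside the region.

```python
from collections import deque

def region_grow(img, seed, predicate):
    """8-neighbor region growing from a seed pixel; a neighbor joins
    the region when predicate(value) is true."""
    rows, cols = len(img), len(img[0])
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols \
                        and (nr, nc) not in region and predicate(img[nr][nc]):
                    region.add((nr, nc))
                    queue.append((nr, nc))
    return region

# toy binary slice: 1 = lung-like low-attenuation pixels
img = [[0, 1, 1, 0],
       [0, 1, 0, 0],
       [0, 0, 1, 1],
       [0, 0, 0, 0]]
region = region_grow(img, seed=(0, 1), predicate=lambda v: v == 1)
print(sorted(region))  # note (1,1)-(2,2) connect only diagonally
```

The 8-connectivity is what links pixel (1, 1) to (2, 2) through the diagonal; 4-connectivity would split them into two components.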
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 12071307 and 61571384).
Abstract: Iterative hard thresholding (IHT) and compressive sampling matching pursuit (CoSaMP) are two mainstream compressed sensing algorithms that use the hard thresholding operator. The guaranteed signal-recovery performance of the two algorithms has mainly been analyzed in terms of the restricted isometry property (RIP) of the sensing matrices. At present, the best known RIP bound of order 3k for the guaranteed performance of IHT (with unit stepsize) is δ_(3k) < 1/√3 ≈ 0.5774, and the bound for CoSaMP using the RIP of order 4k is δ_(4k) < 0.4782. A fundamental question in this area is whether such theoretical results can be further improved. The purpose of this paper is to answer this question affirmatively and to rigorously show that the above RIP bound for the guaranteed performance of IHT can be significantly improved to δ_(3k) < (√5 − 1)/2 ≈ 0.618, and the bound for CoSaMP can be improved to δ_(4k) < 0.5102.
Funding: This work is partially supported by the European Research Council, the National Natural Science Foundation of China (No. 11201079), the Fundamental Research Funds for the Central Universities of China (Nos. 20520133238 and 20520131169), and the National Science Foundation of the United States (Nos. DMS-0748839 and DMS-1317602).
Abstract: The l1 norm is the tightest convex relaxation of the l0 norm and has been successfully applied to recovering sparse signals. However, for problems with fewer samples than required for accurate l1 recovery, one needs to apply nonconvex penalties such as the lp norm. As one method for solving lp minimization problems, iteratively reweighted l1 minimization updates the weight of each component based on the value of that component at the previous iteration; it assigns large weights to components that are small in magnitude and small weights to components that are large in magnitude. The set of weights is not fixed, which makes the analysis of this method difficult. In this paper, we consider a weighted l1 penalty with a fixed set of weights assigned according to the sort of all the components by magnitude, with the smallest weight assigned to the largest component in magnitude. This new penalty is called the nonconvex sorted l1 penalty. We then propose two methods for solving nonconvex sorted l1 minimization problems, iteratively reweighted l1 minimization and iterative sorted thresholding, and prove that both methods converge to a local minimizer of the nonconvex sorted l1 minimization problem. We also show that the two methods generalize iterative support detection and iterative hard thresholding, respectively. Numerical experiments demonstrate the better performance of assigning weights by sort compared to assigning them by value.
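The sorted-weight thresholding operator that underlies iterative sorted thresholding can be sketched as follows. This is a minimal one-step illustration of assigning fixed weights by magnitude rank and then soft thresholding, not the full minimization method; `sorted_soft_threshold` is an illustrative name.

```python
def sorted_soft_threshold(x, weights):
    """Soft thresholding with weights assigned by sort: the i-th
    largest-magnitude component gets weights[i], so the smallest
    weight is applied to the largest component."""
    order = sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)
    out = [0.0] * len(x)
    for rank, i in enumerate(order):
        w = weights[rank]
        # shrink toward zero by the rank-assigned weight
        out[i] = max(abs(x[i]) - w, 0.0) * (1 if x[i] >= 0 else -1)
    return out

x = [5.0, -0.5, 2.0, 0.1]
weights = [0.0, 0.2, 1.0, 1.0]   # nondecreasing: big entries barely penalized
result = sorted_soft_threshold(x, weights)
print(result)  # large components survive almost intact, small ones vanish
```

Because the weights depend only on rank, the penalty stays fixed across iterations, which is what makes the convergence analysis tractable compared with value-based reweighting.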