This study examines the effectiveness of artificial intelligence techniques in generating high-quality environmental data for species introduction site selection systems. A network framework model (SAE-GAN) that combines Strengths, Weaknesses, Opportunities, Threats (SWOT) analysis data with a Variational Autoencoder (VAE) and a Generative Adversarial Network (GAN) is proposed for environmental data reconstruction. The model combines the two popular generative models, GAN and VAE, to generate features conditioned on categorical data embeddings derived from the SWOT analysis. It can generate features that resemble real feature distributions and add sample factors to more accurately track individual sample data, and the reconstructed data retain more semantic information for feature generation. The model was applied to species in Southern California, USA, using SWOT analysis data for training. Experiments show that the model integrates data from more comprehensive analyses than traditional methods and generates high-quality reconstructed data from them, effectively addressing the problem of insufficient data collection in development environments. The model is further validated with the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS) classification assessment commonly used in the environmental data domain. This study provides a reliable and rich source of training data for species introduction site selection systems and makes a significant contribution to ecological and sustainable development. Funding: supported by the Fundamental Research Funds for the Liaoning Universities (LJ212410146025).
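Since the study validates its reconstructed data with TOPSIS, a compact numpy sketch of the standard TOPSIS ranking procedure is given below; the decision matrix, criterion weights, and benefit/cost flags are illustrative placeholders, not values from the paper.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Rank alternatives with the standard TOPSIS procedure.

    decision_matrix : (n_alternatives, n_criteria) array of raw scores
    weights         : criterion weights summing to 1
    benefit         : boolean array, True for benefit criteria, False for cost
    """
    X = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    V = X / np.linalg.norm(X, axis=0) * weights
    # Ideal best/worst: max of benefit columns, min of cost columns (and vice versa).
    ideal_best = np.where(benefit, V.max(axis=0), V.min(axis=0))
    ideal_worst = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_best = np.linalg.norm(V - ideal_best, axis=1)
    d_worst = np.linalg.norm(V - ideal_worst, axis=1)
    # Closeness coefficient in [0, 1]; larger means closer to the ideal solution.
    return d_worst / (d_best + d_worst)

# Toy example: three candidate sites scored on four environmental criteria.
scores = np.array([[7.0, 0.3, 55.0, 8.0],
                   [6.0, 0.1, 70.0, 6.5],
                   [8.5, 0.4, 40.0, 7.0]])
closeness = topsis(scores, weights=np.array([0.3, 0.2, 0.2, 0.3]),
                   benefit=np.array([True, False, True, True]))
print(np.argsort(closeness)[::-1])  # site indices ranked best to worst
```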
Data reconstruction is a crucial step in seismic data preprocessing. To improve reconstruction speed and save memory, the commonly used three-dimensional (3D) seismic data reconstruction method divides the missing data into a series of time slices and reconstructs each slice independently. However, this strategy ignores the potential correlations between two adjacent time slices, which degrades reconstruction performance. Therefore, this study proposes the use of a two-dimensional curvelet transform and the fast iterative shrinkage thresholding algorithm for data reconstruction. Based on the significant overlap between the curvelet coefficient support sets of two adjacent time slices, a weighting operator is constructed in the curvelet domain using the prior support set provided by the previously reconstructed time slice to delineate the main energy distribution range, effectively providing prior information for reconstructing adjacent slices. The resulting weighted fast iterative shrinkage thresholding algorithm can then be used to reconstruct 3D seismic data. Processing of synthetic and field data shows that the proposed method has higher reconstruction accuracy and faster computational speed than the conventional fast iterative shrinkage thresholding algorithm when handling missing 3D seismic data. Funding: National Natural Science Foundation of China under Grant 42304145; Jiangxi Provincial Natural Science Foundation under Grants 20242BAB26051, 20242BAB25191, and 20232BAB213077; Foundation of National Key Laboratory of Uranium Resources Exploration-Mining and Nuclear Remote Sensing under Grant 2024QZ-TD-13; Open Fund (FW0399-0002) of SINOPEC Key Laboratory of Geophysics.
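The core of the method is a weighted soft-thresholding step inside a FISTA loop. The sketch below illustrates that iteration for recovering a masked 2D slice; it substitutes a 2D FFT for the curvelet transform (no curvelet library is assumed), and the weights, threshold, and iteration count are illustrative rather than the paper's settings.

```python
import numpy as np

def weighted_fista(observed, mask, weights, lam=0.05, n_iter=100):
    """Reconstruct a masked 2D slice by l1-regularized inversion with FISTA.

    A 2D FFT stands in for the curvelet transform used in the paper.
    `weights` (same shape as the transform coefficients) encode the prior
    support from the previously reconstructed slice: small weights where
    coefficients are expected to be significant, 1 elsewhere.
    """
    x = observed.copy()
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        # Gradient step on the data-fidelity term ||mask*x - observed||^2.
        grad = mask * (mask * z - observed)
        v = z - grad
        # Weighted soft thresholding in the transform domain.
        coeff = np.fft.fft2(v)
        thr = lam * weights
        coeff = np.exp(1j * np.angle(coeff)) * np.maximum(np.abs(coeff) - thr, 0.0)
        x_new = np.real(np.fft.ifft2(coeff))
        # FISTA momentum update.
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x
```

In the 3D workflow described above, the weights for slice t would be derived from the support of the coefficients recovered for slice t-1.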
Automatic modulation recognition (AMR) of radiation source signals is a research focus in the field of cognitive radio. However, AMR of radiation source signals at low SNRs still poses a great challenge. Therefore, an AMR method for radiation source signals based on a two-dimensional data matrix and an improved residual neural network is proposed in this paper. First, the time series of the radiation source signals are reconstructed into a two-dimensional data matrix, which greatly simplifies signal preprocessing. Second, a residual neural network based on depthwise convolution and large-size convolutional kernels (DLRNet) is proposed to improve the feature extraction capability of the AMR model. Finally, the model performs feature extraction and classification on the two-dimensional data matrix to obtain the recognition vector that represents the signal modulation type. Theoretical analysis and simulation results show that the proposed method significantly improves recognition accuracy, which remains above 90% even at -14 dB SNR. Funding: National Natural Science Foundation of China under Grant No. 61973037; China Postdoctoral Science Foundation under Grant No. 2022M720419.
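The preprocessing step, reshaping a 1D signal into a two-dimensional data matrix, is simple to express; the sketch below reshapes a complex baseband sequence into a two-channel (I/Q) square matrix ready for a CNN. The sequence length and matrix size are illustrative, not the paper's settings.

```python
import numpy as np

def to_data_matrix(iq_samples, side=32):
    """Reshape a 1D complex baseband sequence into a (2, side, side) tensor:
    channel 0 holds the in-phase part, channel 1 the quadrature part."""
    needed = side * side
    if iq_samples.size < needed:
        raise ValueError("not enough samples for the requested matrix size")
    block = iq_samples[:needed].reshape(side, side)
    return np.stack([block.real, block.imag])   # shape (2, side, side)

# Toy example: a noisy QPSK burst of 1024 samples becomes a 2x32x32 input.
rng = np.random.default_rng(0)
symbols = rng.choice(np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]), size=1024)
noisy = symbols + 0.5 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
print(to_data_matrix(noisy).shape)  # (2, 32, 32)
```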
In this work, we focus on multiple-parameter estimation in bistatic Multiple-Input Multiple-Output (MIMO) radar systems, a crucial technique for target localization and imaging. Our research addresses the joint estimation of the Direction of Departure (DOD), Direction of Arrival (DOA), and Doppler frequency for incoherent targets. We propose a novel approach that significantly reduces computational complexity by utilizing the Temporal-Spatial Nested Sampling Model (TSNSM). Our methodology begins with a multi-linear mapping mechanism that efficiently eliminates unnecessary virtual Degrees of Freedom (DOFs) and reorganizes the remaining ones. We then employ a Toeplitz matrix triple-iteration reconstruction method, surpassing the traditional Temporal-Spatial Smoothing Window (TSSW) approach, to mitigate the single-snapshot effect and reduce computational demands. We further refine the high-dimensional ESPRIT algorithm for joint estimation of DOD, DOA, and Doppler frequency, eliminating the need for additional parameter pairing. Moreover, we derive the Cramér-Rao Bound (CRB) for the TSNSM. This signal model allows a second expansion of DOFs in the time and space domains, achieving high precision in target angle and Doppler frequency estimation with low computational complexity. The proposed algorithm is validated through simulations and is suitable for sparse-array MIMO radars with various structures, providing higher estimation precision with a smaller complexity burden. Funding: supported in part by the National Natural Science Foundation of China (No. 62071476), in part by the China Postdoctoral Science Foundation (No. 2022M723879), and in part by the Science and Technology Innovation Program of Hunan Province, China (No. 2021RC3080).
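For orientation, the sketch below implements the textbook 1D ESPRIT estimator on a uniform linear array, the basic shift-invariance idea that the paper's high-dimensional, pairing-free variant builds on; it is not the TSNSM algorithm itself, and the array size, source angles, and noise level are invented for the demo.

```python
import numpy as np

def esprit_doa(snapshots, n_sources, d_over_lambda=0.5):
    """Standard 1D ESPRIT for a uniform linear array.

    snapshots : (n_sensors, n_snapshots) complex array
    Returns estimated DOAs in degrees.
    """
    M, N = snapshots.shape
    R = snapshots @ snapshots.conj().T / N            # sample covariance
    eigvals, eigvecs = np.linalg.eigh(R)              # ascending eigenvalues
    Us = eigvecs[:, -n_sources:]                      # signal subspace
    # Shift invariance between the two overlapping subarrays.
    Psi = np.linalg.pinv(Us[:-1]) @ Us[1:]
    phases = np.angle(np.linalg.eigvals(Psi))
    return np.degrees(np.arcsin(phases / (2.0 * np.pi * d_over_lambda)))

# Toy example: two far-field sources at -20 and 35 degrees, 8-element ULA.
rng = np.random.default_rng(1)
M, N, angles = 8, 200, np.radians([-20.0, 35.0])
steering = np.exp(1j * 2 * np.pi * 0.5 * np.outer(np.arange(M), np.sin(angles)))
signals = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
X = steering @ signals + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
print(np.sort(esprit_doa(X, n_sources=2)))            # close to [-20, 35]
```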
Unmanned Aerial Vehicles (UAVs) are considered to have great potential for supporting reliable and timely data harvesting from Sensor Nodes (SNs) in an Internet of Things (IoT) setting. However, due to physical limitations, UAVs are unable to further process the harvested data and have to rely on terrestrial servers, so extra spectrum resources are needed to convey the harvested data. To avoid the cost of extra servers and spectrum resources, in this paper we consider a UAV-based data harvesting network supported by a Cell-Free massive Multiple-Input Multiple-Output (CF-mMIMO) system, where a UAV collects data from SNs and transmits it to the central processing unit of the CF-mMIMO system for processing. To avoid using additional spectrum resources, the entire bandwidth is shared between the radio access network and the wireless fronthaul links. Moreover, considering the limited capacity of the fronthaul links, a compress-and-forward scheme is adopted. To maximize the ergodically achievable sum rate of the SNs, the power allocation of the ground access points, the compression of the fronthaul links, and the bandwidth fraction between the radio access network and the wireless fronthaul links are jointly optimized. To avoid the high overhead of computing ergodically achievable rates, we introduce an approximate problem based on large-dimensional random matrix theory that relies only on statistical channel state information. We solve this nontrivial problem in three steps and propose an algorithm based on the weighted minimum mean square error and Dinkelbach's methods to find solutions. Simulation results show that the proposed algorithm converges quickly and outperforms the baseline algorithms. Funding: supported in part by the Jiangsu Provincial Key Research and Development Program (No. BE2022068-2), in part by the National Natural Science Foundation of China under Grant 62201285, in part by the Young Elite Scientists Sponsorship Program by CAST under Grant 2022QNRC001, and in part by the Postgraduate Research & Practice Innovation Program of Jiangsu Province under Grant KYCX23_1012.
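Dinkelbach's method, one of the two building blocks of the proposed algorithm, converts a ratio objective into a sequence of parameterized subproblems. The sketch below applies it to a deliberately simple scalar fractional program so the update rule is visible; the objective, feasible interval, and tolerance are illustrative and unrelated to the paper's resource-allocation problem.

```python
import numpy as np

def dinkelbach(numerator, denominator, solve_subproblem, x0, tol=1e-8, max_iter=50):
    """Maximize numerator(x)/denominator(x) via Dinkelbach's iteration.

    At each step, solve max_x numerator(x) - lam * denominator(x) for the
    current ratio lam; stop when the subproblem optimum is (near) zero.
    """
    x, lam = x0, numerator(x0) / denominator(x0)
    for _ in range(max_iter):
        x = solve_subproblem(lam)                 # argmax of f(x) - lam*g(x)
        gap = numerator(x) - lam * denominator(x)
        lam = numerator(x) / denominator(x)
        if abs(gap) < tol:
            break
    return x, lam

# Toy example: maximize log(1 + x) / (1 + 0.5 x) over x in [0, 10].
f = lambda x: np.log1p(x)
g = lambda x: 1.0 + 0.5 * x

def solve_subproblem(lam, grid=np.linspace(0.0, 10.0, 100001)):
    # The subproblem is one-dimensional here, so a fine grid search suffices.
    return grid[np.argmax(f(grid) - lam * g(grid))]

x_star, ratio = dinkelbach(f, g, solve_subproblem, x0=1.0)
print(x_star, ratio)
```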
Among hyperspectral imaging technologies, interferometric spectral imaging is widely used in remote sensing due to its large luminous flux and high resolution. However, owing to its complicated mechanism, interferometric imaging is affected by multi-stage degradation. Most existing interferometric spectrum reconstruction methods are based on traditional model-based frameworks with multiple steps, showing poor efficiency and restricted performance. We therefore propose an interferometric spectrum reconstruction method based on degradation synthesis and deep learning. First, based on the imaging mechanism, we propose a mathematical model of interferometric imaging to analyze the degradation components (noise and trends) introduced during imaging. The model consists of three stages: instrument degradation, sensing degradation, and a signal-independent degradation process. Then, we design a calibration-based method to estimate the model parameters, and use the results to synthesize a realistic dataset for learning-based algorithms. In addition, we propose a dual-stage interferogram spectrum reconstruction framework that supports pre-training and integration of denoising DNNs. Experiments demonstrate the reliability of our degradation model and synthesized data, and the effectiveness of the proposed reconstruction method.
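To make the degradation-synthesis idea concrete, the sketch below builds a toy interferogram from a spectrum via a cosine transform and then layers on a low-order trend and additive noise, mimicking the trend and noise components that the degradation model separates. The spectrum shape, trend polynomial, and noise level are all invented for illustration and do not follow the paper's three-stage model.

```python
import numpy as np

def synthesize_interferogram(spectrum, wavenumbers, opd, trend_coeffs, noise_std, rng):
    """Build a degraded interferogram: ideal cosine-transform interferogram
    plus a slowly varying trend and additive Gaussian noise."""
    # Ideal interferogram: sum over wavenumbers of spectrum * cos(2*pi*sigma*OPD).
    ideal = (spectrum[None, :] *
             np.cos(2 * np.pi * opd[:, None] * wavenumbers[None, :])).sum(axis=1)
    trend = np.polyval(trend_coeffs, opd)            # smooth instrument trend
    noise = rng.normal(0.0, noise_std, size=opd.shape)
    return ideal + trend + noise

rng = np.random.default_rng(2)
wavenumbers = np.linspace(1.0, 2.0, 200)             # arbitrary units
spectrum = np.exp(-0.5 * ((wavenumbers - 1.5) / 0.1) ** 2)   # toy Gaussian line
opd = np.linspace(-1.0, 1.0, 1024)                   # optical path difference samples
degraded = synthesize_interferogram(spectrum, wavenumbers, opd,
                                    trend_coeffs=[0.3, -0.1, 2.0],
                                    noise_std=0.5, rng=rng)
print(degraded.shape)
```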
CircRNAs, widely found throughout the human body, play a crucial role in regulating various biological processes and are closely linked to complex human diseases. Investigating potential associations between circRNAs and diseases can enhance our understanding of diseases and provide new strategies and tools for early diagnosis, treatment, and disease prevention. However, existing models have limitations in accurately capturing similarities, handling the sparsity and noise of association networks, and fully leveraging bioinformatic information from multiple viewpoints. To address these issues, this study introduces a new non-negative matrix factorization-based framework called NMFMSN. First, we incorporate circRNA sequence data and disease semantic information to compute circRNA and disease similarities, respectively. Given the sparsity of the known associations between circRNAs and diseases, we reconstruct the network by imputing missing links based on neighboring circRNA and disease interactions. Finally, we integrate the two similarity networks into a non-negative matrix factorization framework to identify potential circRNA-disease associations. Under 5-fold cross-validation and leave-one-out cross-validation, the AUC values for NMFMSN reach 0.9712 and 0.9768, respectively, outperforming the most advanced current models. Case studies on lung cancer and hepatocellular carcinoma show that NMFMSN is effective at predicting new associations between circRNAs and diseases. Funding: supported by the Gansu Province Industrial Support Plan (No. 2023CYZC-25), the Natural Science Foundation of Gansu Province (No. 23JRRA770), and the National Natural Science Foundation of China (No. 62162040).
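The factorization at the heart of such frameworks can be illustrated with the classic multiplicative-update NMF; the sketch below factorizes a small binary association matrix and scores unobserved pairs by the reconstructed values. It omits the similarity-network regularization that NMFMSN adds, and the matrix, rank, and iteration count are illustrative.

```python
import numpy as np

def nmf(A, rank, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||A - W @ H||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = A.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy circRNA-disease association matrix (rows: circRNAs, columns: diseases).
A = np.array([[1, 0, 1, 0],
              [1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 1]], dtype=float)
W, H = nmf(A, rank=2)
# Unobserved pairs with high reconstructed scores are candidate associations.
print(np.round(W @ H, 2))
```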
Although machine learning has demonstrated exceptional applicability in thermographic inspection, precise defect reconstruction remains challenging, especially for complex defect profiles with limited defect sample diversity. This paper therefore proposes a self-enhancing defect reconstruction technique based on a cycle-consistent generative adversarial network (Cycle-GAN) that accurately characterises complex defect profiles and generates reliable artificial thermal images for dataset augmentation, enhancing defect characterisation. Using a synthetic dataset from simulations and experiments, the network overcomes the limited-sample problem by learning the diversity of complex defects from finite element modelling and obtaining thermography uncertainty patterns from practical experiments. An iterative strategy with a self-enhancement capability then optimises characterisation accuracy and data generation performance. The designed loss function, with cycle-consistency and identity losses, constrains the GAN's transfer variation to guarantee augmented data quality and defect reconstruction accuracy simultaneously, while the self-enhancement results significantly improve accuracy in thermal images and defect profile reconstruction. The experimental results demonstrate the feasibility of the proposed method, attaining high accuracy with optimal loss norm for defect profile reconstruction and a Recall score above 0.92. Scalability to different materials and defect types is also discussed, highlighting the method's capability for diverse thermography quantification and automated inspection scenarios. Funding: supported by the UK EPSRC Platform Grant: Through-life performance: From science to instrumentation (Grant No. EP/P027121/1).
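The loss structure described here, combining adversarial, cycle-consistency, and identity terms, can be written down independently of any particular network; the sketch below computes it with stand-in generator and discriminator callables operating on numpy arrays. The callables, weighting factors, and array shapes are placeholders, not the paper's architecture or settings.

```python
import numpy as np

def cycle_gan_generator_loss(G_ab, G_ba, D_b, real_a, real_b,
                             lambda_cycle=10.0, lambda_identity=5.0):
    """Total generator loss for the A->B direction of a Cycle-GAN:
    least-squares adversarial term + cycle-consistency + identity terms."""
    fake_b = G_ab(real_a)
    # Least-squares adversarial loss: the generator wants D_b(fake_b) close to 1.
    adv = np.mean((D_b(fake_b) - 1.0) ** 2)
    # Cycle consistency: A -> B -> A should return the original image.
    cycle = np.mean(np.abs(G_ba(fake_b) - real_a))
    # Identity: feeding a real B image to G_ab should change it little.
    identity = np.mean(np.abs(G_ab(real_b) - real_b))
    return adv + lambda_cycle * cycle + lambda_identity * identity

# Stand-in "generators"/"discriminator" (identity map and a constant score)
# just to exercise the loss; real use would plug in trained networks.
G_ab = G_ba = lambda x: x
D_b = lambda x: np.full(x.shape[0], 0.8)
a = np.random.default_rng(3).random((4, 64, 64))
b = np.random.default_rng(4).random((4, 64, 64))
print(cycle_gan_generator_loss(G_ab, G_ba, D_b, a, b))
```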
A novel gappy technology, the gappy autoencoder with proper orthogonal decomposition (Gappy POD-AE), is proposed for reconstructing physical fields from sparse data. High-dimensional data are reduced via proper orthogonal decomposition (POD), and the low-dimensional data are used to train an autoencoder (AE). By integrating the POD operator with the decoder, a nonlinear solution form is established and incorporated into a new maximum-a-posteriori (MAP)-based objective for online reconstruction. Numerical results on the two-dimensional (2D) Bhatnagar-Gross-Krook-Boltzmann (BGK-Boltzmann) equation, the wave equation, the shallow-water equation, and satellite data show that Gappy POD-AE achieves higher accuracy than gappy proper orthogonal decomposition (Gappy POD), especially for data with slowly decaying singular values, and is more efficient to train than the gappy autoencoder (Gappy AE). The MAP-based formulation and new gappy procedure further enhance the reconstruction accuracy. Funding: supported by the National Natural Science Foundation of China (No. 12472197).
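The Gappy POD baseline that the method is compared against is compact enough to sketch: build a POD basis from snapshots via the SVD, then recover an incomplete field by least squares on the observed entries. The snapshot data, basis rank, and sampling mask below are synthetic placeholders.

```python
import numpy as np

def gappy_pod_reconstruct(snapshots, gappy_field, mask, rank):
    """Reconstruct a field with missing entries using a POD basis.

    snapshots   : (n_dof, n_snapshots) training fields
    gappy_field : (n_dof,) field with unknown entries (values there ignored)
    mask        : boolean (n_dof,), True where the field is observed
    """
    # POD basis from the left singular vectors of the snapshot matrix.
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    basis = U[:, :rank]
    # Least-squares fit of the POD coefficients on the observed entries only.
    coeffs, *_ = np.linalg.lstsq(basis[mask], gappy_field[mask], rcond=None)
    return basis @ coeffs

# Toy example: snapshots of a travelling sine wave, 20% of one field observed.
x = np.linspace(0.0, 2.0 * np.pi, 200)
snapshots = np.stack([np.sin(x - 0.1 * t) for t in range(50)], axis=1)
truth = np.sin(x - 0.1 * 17.5)
rng = np.random.default_rng(5)
mask = rng.random(x.size) < 0.2
recon = gappy_pod_reconstruct(snapshots, np.where(mask, truth, 0.0), mask, rank=4)
print(np.max(np.abs(recon - truth)))  # small reconstruction error
```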
Irregular seismic data cause problems for multi-trace processing algorithms and degrade processing quality. We introduce the Projection onto Convex Sets (POCS) based image restoration method into the seismic data reconstruction field to interpolate irregularly missing traces. For entirely dead traces, we transfer the POCS iterative reconstruction process from the time domain to the frequency domain to save computational cost, because forward and inverse Fourier time transforms are then not needed. In each iteration, the choice of the threshold parameter is important for reconstruction efficiency. In this paper, we design two types of threshold models to reconstruct irregularly missing seismic data. The experimental results show that, for the same reconstruction quality, an exponential threshold greatly reduces the number of iterations and improves reconstruction efficiency compared with a linear threshold. We also analyze the anti-noise and anti-alias ability of the POCS reconstruction method. Finally, theoretical model tests and real data examples indicate that the proposed method is efficient and applicable. Funding: financially supported by the National 863 Program (Grant No. 2006AA09A102-09) and the National Science and Technology Major Projects (Grant No. 2008ZX05025-001-001).
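A minimal POCS interpolation loop with an exponentially decaying threshold is sketched below, assuming a 2D FFT as the sparsifying transform; the decay endpoints and iteration count are illustrative and do not reproduce the paper's frequency-domain dead-trace variant exactly.

```python
import numpy as np

def pocs_interpolate(observed, mask, n_iter=100, tmax_frac=0.99, tmin_frac=0.001):
    """POCS trace interpolation with an exponentially decaying hard threshold.

    observed : 2D array with missing traces set to zero
    mask     : boolean array, True where data were actually recorded
    """
    peak = np.abs(np.fft.fft2(observed)).max()
    tmax, tmin = tmax_frac * peak, tmin_frac * peak
    x = observed.copy()
    for k in range(n_iter):
        # Exponential threshold schedule from tmax down to tmin.
        tau = tmax * (tmin / tmax) ** (k / max(n_iter - 1, 1))
        coeff = np.fft.fft2(x)
        coeff[np.abs(coeff) < tau] = 0.0                  # hard threshold
        x = np.real(np.fft.ifft2(coeff))
        x = np.where(mask, observed, x)                   # reinsert known traces
    return x
```

Replacing `tau` with a linear ramp from `tmax` to `tmin` gives the linear-threshold variant that the abstract compares against.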
Missing data are a problem in geophysical surveys, and interpolation and reconstruction of the missing data are part of data processing and interpretation. By exploiting the sparseness of geophysical data in the original or a transform domain, we can improve the accuracy and stability of the reconstruction by casting it as a sparse optimization problem. In this paper, we propose a mathematical model for the sparse reconstruction of data based on L0-norm minimization. Furthermore, we discuss two types of approximation algorithms for L0-norm minimization, chosen according to the size and characteristics of the geophysical data: the iteratively reweighted least-squares algorithm and the fast iterative hard thresholding algorithm. Theoretical and numerical analysis shows that applying the iteratively reweighted least-squares algorithm to the reconstruction of potential-field data exploits its fast convergence rate, short calculation time, and high precision, whereas the fast iterative hard thresholding algorithm is more suitable for processing seismic data; moreover, its computational efficiency is better than that of the traditional iterative hard thresholding algorithm. Funding: supported by the National Natural Science Foundation of China (Grant No. 41074133).
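The (non-accelerated) iterative hard thresholding algorithm that the fast variant builds on fits in a few lines; the sketch below recovers a sparse vector from random linear measurements. The measurement matrix, sparsity level, and step size are illustrative.

```python
import numpy as np

def iterative_hard_thresholding(A, y, sparsity, n_iter=200, step=None):
    """Approximate argmin ||y - A x||_2 subject to ||x||_0 <= sparsity."""
    if step is None:
        # A conservative step size: 1 over the squared spectral norm of A.
        step = 1.0 / np.linalg.norm(A, ord=2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)          # gradient step
        keep = np.argsort(np.abs(x))[-sparsity:]  # indices of largest entries
        pruned = np.zeros_like(x)
        pruned[keep] = x[keep]
        x = pruned                                # hard-thresholding projection
    return x

# Toy example: recover a 5-sparse vector of length 200 from 80 measurements.
rng = np.random.default_rng(6)
A = rng.standard_normal((80, 200)) / np.sqrt(80)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = 3.0 * rng.standard_normal(5)
y = A @ x_true
x_hat = iterative_hard_thresholding(A, y, sparsity=5)
print(np.linalg.norm(x_hat - x_true))
```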
Seismic data contain random noise and are affected by irregular subsampling. At present, most data reconstruction methods are carried out separately from noise suppression, and most are not ideal for noisy data. In this paper, we choose the multiscale and multidirectional 2D curvelet transform to perform simultaneous data reconstruction and noise suppression of 3D seismic data. We introduce the POCS algorithm, an exponentially decreasing square-root threshold, and a soft-threshold operator to interpolate the data at each time slice, and a weighting strategy to reduce the noise in the reconstructed data. A 3D simultaneous data reconstruction and noise suppression method based on the curvelet transform is thereby proposed. Compared with data reconstruction followed by denoising and with the Fourier transform, the proposed method is more robust and effective. The method has important implications for data acquisition in complex areas and for reconstructing missing traces. Funding: sponsored by the National Natural Science Foundation of China (Nos. 41304097 and 41664006), the Natural Science Foundation of Jiangxi Province (No. 20151BAB203044), the China Scholarship Council (No. 201508360061), and the Distinguished Young Talent Foundation of Jiangxi Province (2017).
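Two ingredients distinguish this scheme from a plain hard-threshold POCS loop: an exponentially decreasing square-root threshold schedule and a soft-threshold operator applied to complex transform coefficients. Both can be written as small helpers, sketched below with illustrative schedule constants (one plausible reading of the schedule, not the paper's exact formula).

```python
import numpy as np

def sqrt_exp_threshold(k, n_iter, tau_max, tau_min):
    """Exponentially decreasing square-root threshold schedule for iteration k (1-based)."""
    ratio = np.sqrt((k - 1.0) / max(n_iter - 1.0, 1.0))   # in [0, 1]
    return tau_max * (tau_min / tau_max) ** ratio

def soft_threshold(coeff, tau):
    """Complex soft thresholding: shrink magnitudes by tau, keep phases."""
    mag = np.abs(coeff)
    return np.where(mag > tau, (1.0 - tau / np.maximum(mag, 1e-12)) * coeff, 0.0)

# The schedule sweeps from tau_max at k=1 down to tau_min at k=n_iter.
taus = [sqrt_exp_threshold(k, 50, tau_max=1.0, tau_min=0.01) for k in range(1, 51)]
print(taus[0], taus[-1])   # 1.0 ... 0.01
```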
Model reconstruction from points scanned on existing physical objects is important in a variety of situations, such as reverse engineering of mechanical products, computer vision, and recovery of biological shapes from two-dimensional contours. With the development of measuring equipment, point clouds containing more details of the object can be obtained conveniently. On the other hand, the large number of sampled points complicates model reconstruction. This paper first presents an algorithm to automatically reduce the number of cloud points under a given tolerance. A triangle mesh surface is then reconstructed from the simplified data set by the marching cubes algorithm. For various reasons, the reconstructed mesh usually contains unwanted holes, so an approach is proposed to create new triangles with optimized shapes to cover the unexpected holes in the triangle mesh. After hole filling, a watertight triangle mesh can be output directly in STL format, which is widely used in rapid prototyping. Practical examples are included to demonstrate the method.
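The first step, reducing the point cloud under a given tolerance, is often done by binning points into cells whose size matches the tolerance and keeping one representative per cell; the sketch below implements that voxel-grid style reduction in numpy. This is a generic simplification scheme offered for illustration, not the paper's specific algorithm.

```python
import numpy as np

def simplify_point_cloud(points, tolerance):
    """Reduce a point cloud by keeping one centroid per cubic cell of edge
    length `tolerance`, so removed points lie within a cell diagonal of a
    kept representative."""
    points = np.asarray(points, dtype=float)
    cells = np.floor(points / tolerance).astype(np.int64)        # cell index per point
    _, inverse = np.unique(cells, axis=0, return_inverse=True)
    n_cells = inverse.max() + 1
    # Average the points falling into each occupied cell.
    sums = np.zeros((n_cells, 3))
    np.add.at(sums, inverse, points)
    counts = np.bincount(inverse, minlength=n_cells)[:, None]
    return sums / counts

# Toy example: 100k points sampled on a unit sphere, simplified with a 0.05 grid.
rng = np.random.default_rng(7)
raw = rng.standard_normal((100_000, 3))
raw /= np.linalg.norm(raw, axis=1, keepdims=True)
print(simplify_point_cloud(raw, tolerance=0.05).shape)
```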
Traditional seismic data sampling follows the Nyquist sampling theorem. In this paper, we introduce the theory of compressive sensing (CS), breaking through the limitations of the traditional Nyquist sampling theorem: random undersampling renders the coherent aliases of regular undersampling as harmless incoherent random noise, effectively turning the reconstruction problem into a much simpler denoising problem. We introduce the projections onto convex sets (POCS) algorithm into the data reconstruction process, apply an exponentially decaying threshold parameter in the iterations, and modify the traditional reconstruction process, which performs forward and inverse transforms in the time and space domains; the proposed method instead uses forward and inverse transforms in the space domain only. The proposed method uses less computer memory and improves computational speed. We also analyze the anti-noise and anti-aliasing ability of the proposed method, and compare 2D and 3D data reconstruction. Theoretical models and real data show that the proposed method is effective and of practical importance, as it can reconstruct missing traces and reduce the exploration cost of complex data acquisition. Funding: sponsored by the National Natural Science Foundation of China (No. 41174107) and the National Science and Technology Projects of Oil and Gas (No. 2011ZX05023-005).
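The claim that random undersampling turns coherent aliases into incoherent noise is easy to verify numerically: decimate a single-frequency signal regularly and randomly at the same rate and compare the amplitude spectra. The signal length, frequency, and decimation factor below are illustrative.

```python
import numpy as np

n = 1024
t = np.arange(n)
signal = np.sin(2 * np.pi * 60 * t / n)            # energy concentrated in bin 60

# Keep one sample in four: regularly spaced vs. randomly chosen positions.
regular_mask = np.zeros(n)
regular_mask[::4] = 1.0
rng = np.random.default_rng(8)
random_mask = np.zeros(n)
random_mask[rng.choice(n, n // 4, replace=False)] = 1.0

spec_regular = np.abs(np.fft.rfft(signal * regular_mask))
spec_random = np.abs(np.fft.rfft(signal * random_mask))

def alias_to_peak_ratio(spec, true_bin=60):
    """Strongest spurious spectral line relative to the true peak."""
    return np.delete(spec, true_bin).max() / spec[true_bin]

# Regular decimation: alias spikes rival the true peak (ratio near 1).
# Random decimation: leaked energy spreads into a low, noise-like floor.
print(alias_to_peak_ratio(spec_regular), alias_to_peak_ratio(spec_random))
```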