Journal Articles
715 articles found
1. Minimum MSE Weighted Estimator to Make Inferences for a Common Risk Ratio across Sparse Meta-Analysis Data
Authors: Chukiat Viwatwongkasem, Sutthisak Srisawad, Pichitpong Soontornpipit, Jutatip Sillabutra, Pratana Satitvipawee, Prasong Kitidamrongsuk, Hathaikan Chootrakool. Open Journal of Statistics, 2022, No. 1, pp. 49-69.
The paper aims to discuss three interesting issues of statistical inference for a common risk ratio (RR) in sparse meta-analysis data. Firstly, the conventional log-risk ratio estimator encounters a number of problems when the number of events in the experimental or control group of a 2 × 2 table is zero. An adjusted log-risk ratio estimator is proposed whose continuity correction points are chosen to minimize the Bayes risk with respect to the uniform prior density over (0, 1) and the Euclidean loss function. Secondly, the interest is to find the optimal weights of the pooled estimate that minimize its mean square error (MSE) subject to the constraint that the weights sum to one. Finally, the performance of this minimum MSE weighted estimator, adjusted with various values of the correction points, is compared with other popular estimators, such as the Mantel-Haenszel (MH) estimator and the weighted least squares (WLS) estimator (also known as the inverse-variance weighted estimator), in terms of point estimation and hypothesis testing via simulation studies. The estimation results illustrate that, regardless of the true value of RR, the MH estimator performs best, with the smallest MSE, when the number of studies is rather large and the sample sizes within each study are small. The MSEs of the WLS estimator and the proposed minimum MSE weighted estimator under the different correction points are close together and are the smallest when the sample sizes are moderate to large while the number of studies is rather small.
Keywords: minimum MSE weights; adjusted log-risk ratio estimator; sparse meta-analysis data; continuity correction
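As a concrete reading of the abstract's pipeline, the sketch below pools study-level log risk ratios with a continuity correction and inverse-variance (WLS-type) weights. The fixed c = 0.5 correction and the 2c denominator adjustment are common defaults standing in for the paper's optimized correction points, which the abstract does not give.

```python
import numpy as np

def pooled_log_risk_ratio(x1, n1, x0, n0, c=0.5):
    """Inverse-variance pooling of continuity-corrected log risk ratios.

    x1, x0: event counts in the treatment/control arms (arrays, one per study).
    n1, n0: the corresponding sample sizes.
    c: correction added to the cells of any study with a zero count
       (0.5 is the classic default; the paper optimizes this point in (0, 1)).
    """
    x1, n1, x0, n0 = (np.asarray(a, float) for a in (x1, n1, x0, n0))
    z = (x1 == 0) | (x0 == 0)                      # studies needing correction
    a1, a0 = x1 + c * z, x0 + c * z
    m1, m0 = n1 + 2 * c * z, n0 + 2 * c * z
    log_rr = np.log(a1 / m1) - np.log(a0 / m0)
    var = 1 / a1 - 1 / m1 + 1 / a0 - 1 / m0        # delta-method variance
    w = (1 / var) / (1 / var).sum()                # inverse-variance weights (sum to 1)
    est = (w * log_rr).sum()
    se = np.sqrt(1 / (1 / var).sum())
    return np.exp(est), (np.exp(est - 1.96 * se), np.exp(est + 1.96 * se))

# Example: three sparse studies, one with a zero event count.
rr, ci = pooled_log_risk_ratio([0, 3, 5], [25, 40, 60], [2, 4, 4], [25, 40, 60])
```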
2. Geophysical data sparse reconstruction based on L0-norm minimization (cited 6 times)
Authors: Chen Guoxin, Chen Shengchang, Wang Hanchuang, Zhang Bo. Applied Geophysics (SCIE, CSCD), 2013, No. 2, pp. 181-190, 236.
Missing data are a problem in geophysical surveys, and interpolation and reconstruction of missing data are part of data processing and interpretation. Based on the sparseness of geophysical data in either the data domain or a transform domain, we can improve the accuracy and stability of reconstruction by casting it as a sparse optimization problem. In this paper, we propose a mathematical model for the sparse reconstruction of data based on L0-norm minimization. Furthermore, we discuss two approximation algorithms for L0-norm minimization, chosen according to the size and characteristics of the geophysical data: the iteratively reweighted least-squares algorithm and the fast iterative hard thresholding algorithm. Theoretical and numerical analysis shows that the iteratively reweighted least-squares algorithm suits the reconstruction of potential-field data thanks to its fast convergence rate, short calculation time, and high precision, whereas the fast iterative hard thresholding algorithm is more suitable for processing seismic data; moreover, its computational efficiency exceeds that of the traditional iterative hard thresholding algorithm.
Keywords: geophysical data; sparse reconstruction; L0-norm minimization; iteratively reweighted least squares; fast iterative hard thresholding
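The iterative hard thresholding family mentioned above is easy to state in a few lines; the sketch below is a generic (non-accelerated) version for a linear measurement operator, with an illustrative step size and a toy recovery check.

```python
import numpy as np

def iterative_hard_thresholding(A, y, k, n_iter=200, step=None):
    """Recover a k-sparse x from y ≈ A x by L0-constrained gradient steps.

    Each iteration takes a gradient step on ||y - Ax||^2 and keeps only
    the k largest-magnitude entries (the hard threshold).
    """
    m, n = A.shape
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/sigma_max^2, a safe step size
    x = np.zeros(n)
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)           # gradient step
        idx = np.argsort(np.abs(x))[:-k]           # all but the k largest entries
        x[idx] = 0.0                               # hard threshold
    return x

# Toy check: recover a 5-sparse signal from 60 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 128)) / np.sqrt(60)
x_true = np.zeros(128)
x_true[rng.choice(128, 5, replace=False)] = rng.standard_normal(5)
x_hat = iterative_hard_thresholding(A, A @ x_true, k=5)
```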
3. CABOSFV algorithm for high dimensional sparse data clustering (cited 7 times)
Authors: Sen Wu, Xuedong Gao (Management School, University of Science and Technology Beijing, Beijing 100083, China). Journal of University of Science and Technology Beijing (CSCD), 2004, No. 3, pp. 283-288.
An algorithm, the Clustering Algorithm Based On Sparse Feature Vector (CABOSFV), is proposed for the high-dimensional clustering of binary sparse data. The algorithm compresses the data effectively by using a "sparse feature vector", thus reducing the data scale enormously, and can obtain the clustering result with only one data scan. Both theoretical analysis and empirical tests show that CABOSFV has low computational complexity. The algorithm finds clusters in high-dimensional large datasets efficiently and handles noise effectively.
Keywords: clustering; data mining; sparse data; high dimensionality
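The abstract only names the "sparse feature vector" compression tool, so the following single-scan sketch should be read as the general idea rather than the paper's algorithm: each cluster keeps a compressed summary, and an object joins a cluster only if an assumed sparse-feature dissimilarity stays below a threshold b.

```python
def cabosfv_like(objects, b=0.2):
    """Single-scan clustering of binary sparse objects, CABOSFV-style.

    Each object is a set of active feature indices.  A cluster keeps a
    compressed summary (count, S = features shared by all members,
    NS = features in some but not all).  The dissimilarity used below,
    SFD = |NS| / (count * |S|), is an assumed definition for illustration.
    """
    clusters = []                                   # each: [count, S, NS]
    labels = []
    for obj in objects:
        best, best_sfd = None, None
        for ci, (cnt, S, NS) in enumerate(clusters):
            S2 = S & obj                            # shared features if obj joins
            NS2 = (NS | S | obj) - S2               # partially shared features
            if not S2:
                continue                            # nothing shared: skip cluster
            sfd = len(NS2) / ((cnt + 1) * len(S2))
            if sfd <= b and (best_sfd is None or sfd < best_sfd):
                best, best_sfd = ci, sfd
        if best is None:
            clusters.append([1, set(obj), set()])   # start a new cluster
            labels.append(len(clusters) - 1)
        else:
            cnt, S, NS = clusters[best]
            S2 = S & obj
            clusters[best] = [cnt + 1, S2, (NS | S | obj) - S2]
            labels.append(best)
    return labels
```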
4. A generative deep learning framework for airfoil flow field prediction with sparse data (cited 10 times)
Authors: Haizhou Wu, Xuejun Liu, Wei An, Hongqiang Lyu. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2022, No. 1, pp. 470-484.
Deep learning has been probed for airfoil performance prediction in recent years. Compared with expensive CFD simulations and wind tunnel experiments, deep learning models can mitigate such expenses with proper means. Nevertheless, effective training of data-driven deep learning models hinges severely on the diversity and quantity of the data. In this paper, we present a novel data-augmented Generative Adversarial Network (GAN), daGAN, for rapid and accurate flow field prediction, allowing adaptation to tasks with sparse data. The presented approach consists of two modules: a pre-training module and a fine-tuning module. The pre-training module utilizes a conditional GAN (cGAN) to preliminarily estimate the distribution of the training data. In the fine-tuning module, we propose a novel adversarial architecture with two generators, one of which performs a data augmentation operation so that complementary data are adequately incorporated to boost the generalization of the model. We use numerical simulation data to verify the generalization of daGAN on airfoils and flow conditions with sparse training data. The results show that daGAN is a promising tool for rapid and accurate evaluation of detailed flow fields without the requirement for big training data.
Keywords: CFD; flow field; generative adversarial networks (GANs); sparse data; supercritical airfoil
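For orientation, a minimal conditional-GAN skeleton in the spirit of the pre-training module is sketched below; the layer sizes, the flattened 64 × 64 field, and the three-parameter flow condition are illustrative assumptions, not daGAN's architecture.

```python
import torch
import torch.nn as nn

COND, LATENT, FIELD = 3, 64, 64 * 64   # assumed sizes for illustration

G = nn.Sequential(nn.Linear(COND + LATENT, 256), nn.ReLU(),
                  nn.Linear(256, 512), nn.ReLU(),
                  nn.Linear(512, FIELD), nn.Tanh())
D = nn.Sequential(nn.Linear(COND + FIELD, 512), nn.LeakyReLU(0.2),
                  nn.Linear(512, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(cond, field):
    """One cGAN step: D learns real vs. generated fields, G tries to fool D."""
    n = cond.size(0)
    z = torch.randn(n, LATENT)
    fake = G(torch.cat([cond, z], dim=1))
    # discriminator update on real and detached fake fields
    d_loss = (bce(D(torch.cat([cond, field], 1)), torch.ones(n, 1)) +
              bce(D(torch.cat([cond, fake.detach()], 1)), torch.zeros(n, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator update: make D call the fake field real
    g_loss = bce(D(torch.cat([cond, fake], 1)), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```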
5. Fast Computation of Sparse Data Cubes with Constraints (cited 2 times)
Authors: Feng Yu-cai, Chen Chang-qing, Feng Jian-lin, Xiang Long-gang. Wuhan University Journal of Natural Sciences (EI, CAS), 2004, No. 2, pp. 167-172.
For a data cube there are always constraints between dimensions or among attributes in a dimension, such as functional dependencies. We introduce the problem of how to use functional dependencies, when they are present, to speed up the computation of sparse data cubes. A new algorithm, CFD (Computation by Functional Dependencies), is presented to meet this demand. CFD determines the order of dimensions by jointly considering the cardinalities of dimensions and the functional dependencies between them, thus reducing the number of partitions for such dimensions. CFD also combines bottom-up partitioning with top-down aggregate computation to speed up the computation further. CFD can efficiently compute a data cube with hierarchies in a dimension, from the smallest granularity to the coarsest one.
Keywords: sparse data cube; functional dependency; dimension; partition; CFD
6. Physics-informed neural network-based petroleum reservoir simulation with sparse data using domain decomposition (cited 4 times)
Authors: Jiang-Xia Han, Liang Xue, Yun-Sheng Wei, Ya-Dong Qi, Jun-Lei Wang, Yue-Tian Liu, Yu-Qi Zhang. Petroleum Science (SCIE, EI, CAS, CSCD), 2023, No. 6, pp. 3450-3460.
Recent advances in deep learning have opened new possibilities for fluid flow simulation in petroleum reservoirs. However, the predominant approach in existing research is to train neural networks using high-fidelity numerical simulation data. This presents a significant challenge because the sole source of authentic wellbore production data for training is sparse. In response to this challenge, this work introduces a novel architecture called the physics-informed neural network based on domain decomposition (PINN-DD), which aims to effectively utilize the sparse production data of wells for reservoir simulation of large-scale systems. To harness the capability of physics-informed neural networks (PINNs) on small-scale spatial-temporal domains while addressing the challenges of large-scale systems with sparse labeled data, the computational domain is divided into two distinct sub-domains: a well-containing and a well-free sub-domain. The two sub-domains and their interface are rigorously constrained by the governing equations, data matching, and boundary conditions. The accuracy of the proposed method is evaluated on two problems, and its performance is compared against state-of-the-art PINNs as a benchmark. The results demonstrate the superiority of PINN-DD in handling large-scale reservoir simulation with limited data and show its potential to outperform conventional PINNs in such scenarios.
Keywords: physics-informed neural networks; fluid flow simulation; sparse data; domain decomposition
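A toy version of the decomposed loss can make the architecture concrete. The sketch below uses two small networks on a 1D diffusion equation as a stand-in for the reservoir flow equations, with a PDE residual per sub-domain, an interface continuity term, and a sparse data-matching term.

```python
import torch
import torch.nn as nn

def mlp():
    return nn.Sequential(nn.Linear(2, 32), nn.Tanh(),
                         nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))

net_well, net_far = mlp(), mlp()       # well-containing / well-free sub-domains

def pde_residual(net, x, t):
    """Residual of u_t = u_xx, an assumed stand-in PDE, at points (x, t)."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = net(torch.cat([x, t], dim=1))
    u_t = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return u_t - u_xx

def pinn_dd_loss(x1, t1, x2, t2, x_if, t_if, x_obs, t_obs, u_obs):
    r1 = pde_residual(net_well, x1, t1).pow(2).mean()        # physics, sub-domain 1
    r2 = pde_residual(net_far, x2, t2).pow(2).mean()         # physics, sub-domain 2
    xi = torch.cat([x_if, t_if], dim=1)
    interface = (net_well(xi) - net_far(xi)).pow(2).mean()   # continuity at interface
    data = (net_well(torch.cat([x_obs, t_obs], 1)) - u_obs).pow(2).mean()  # sparse well data
    return r1 + r2 + interface + data
```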
7. Reconstruction method of irregular seismic data with adaptive thresholds based on different sparse transform bases (cited 4 times)
Authors: Zhao Hu, Yang Tun, Ni Yu-Dong, Liu Xing-Gang, Xu Yin-Po, Zhang Yi-Lei, Zhang Guang-Rong. Applied Geophysics (SCIE, CSCD), 2021, No. 3, pp. 345-360, 432.
Oil and gas seismic exploration has to adopt irregular acquisition under increasingly complex exploration conditions to adapt to complex geological settings and environments. However, irregular acquisition leaves gaps in the acquired data, which demand high-precision regularization. In this paper, the sparsity of seismic signals in a transform domain, as exploited in compressed sensing theory, is used to recover the missing signal; this involves optimizing the sparse transform basis and modeling the threshold. First, we analyze and compare the effects of six sparse transform bases on the reconstruction accuracy and efficiency of irregular seismic data and establish the quantitative relationship between the sparse transform and reconstruction accuracy and efficiency. Second, an adaptive threshold modeling method based on the sparse coefficients is provided to improve reconstruction accuracy. Test results show that the method adapts well to different seismic data and sparse transform bases. An f-x domain reconstruction method using the effective frequency samples is studied to address the problem of low computational efficiency. A parallel computing strategy combining the curvelet transform with OpenMP is further proposed, which substantially improves computational efficiency while preserving reconstruction accuracy. Finally, field acquisition data are used to verify the proposed method. The results indicate that the proposed strategy can solve the regularization problem of irregular seismic data in production and improve the imaging quality of the target layer economically and efficiently.
Keywords: irregular acquisition; seismic data reconstruction; adaptive threshold; f-x domain; OpenMP parallel optimization; sparse transformation
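A minimal POCS-style loop illustrates the threshold modeling idea: transform, shrink coefficients under a decaying threshold, inverse-transform, and re-insert the acquired traces. The Fourier basis and the cubic decay schedule below are stand-ins for the paper's six transform bases and its sparse-coefficient-based adaptive threshold.

```python
import numpy as np

def pocs_reconstruct(data, mask, n_iter=100):
    """POCS-style reconstruction of irregularly sampled 2D data.

    data: array with zeros at the missing traces; mask: 1 where acquired.
    """
    rec = data.copy()
    cmax = np.abs(np.fft.fft2(data)).max()
    for i in range(n_iter):
        coef = np.fft.fft2(rec)
        tau = cmax * (1 - i / n_iter) ** 3       # decaying threshold schedule
        coef[np.abs(coef) < tau] = 0             # hard threshold in the transform
        rec = np.real(np.fft.ifft2(coef))
        rec = data * mask + rec * (1 - mask)     # re-insert acquired samples
    return rec
```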
8. Interpolation technique for sparse data based on information diffusion principle: ellipse model (cited 1 time)
Authors: Zhang Ren, Huang Zhisong, Li Jiaxun, Liu Wei. Journal of Tropical Meteorology (SCIE), 2013, No. 1, pp. 59-66.
Addressing the difficulties of scattered and sparse observational data in ocean science, a new interpolation technique based on information diffusion is proposed in this paper. Based on a fuzzy mapping idea, sparse data samples are diffused and mapped into corresponding fuzzy sets, in the form of probabilities, within an interpolation ellipse model. To overcome the shortcoming of the normal diffusion function on asymmetric structures, an asymmetric information diffusion function is developed, and a corresponding algorithm, the ellipse model for the diffusion of asymmetric information, is established. Through interpolation experiments on sea surface temperature data and contrast analysis with ARGO data, the rationality and validity of the ellipse model are assessed.
Keywords: information diffusion; interpolation algorithm; sparse data; ellipse model
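The symmetric (normal) diffusion that the paper generalizes can be sketched in a few lines: each sample spreads its information to the grid through a Gaussian kernel, and grid values are the diffusion-weighted averages. The 1D form below omits the ellipse model's 2D asymmetric diffusion function.

```python
import numpy as np

def diffusion_interpolate(x_obs, v_obs, x_grid, h):
    """Interpolate sparse samples by normal information diffusion.

    Each observation diffuses its information to the grid with a Gaussian
    kernel of width h; grid values are the diffusion-weighted averages.
    """
    w = np.exp(-(x_grid[:, None] - x_obs[None, :]) ** 2 / (2 * h ** 2))
    return (w @ v_obs) / w.sum(axis=1)

# Example: fill a fine grid from five scattered SST-like samples.
x = np.array([0.5, 2.0, 3.1, 6.4, 9.0])
v = np.array([26.1, 26.8, 27.0, 25.4, 24.9])
grid = np.linspace(0, 10, 101)
v_grid = diffusion_interpolate(x, v, grid, h=1.0)
```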
9. Probabilistic outlier detection for sparse multivariate geotechnical site investigation data using Bayesian learning (cited 3 times)
Authors: Shuo Zheng, Yu-Xin Zhu, Dian-Qing Li, Zi-Jun Cao, Qin-Xuan Deng, Kok-Kwang Phoon. Geoscience Frontiers (SCIE, CAS, CSCD), 2021, No. 1, pp. 425-439.
Various uncertainties arising during the acquisition of geoscience data may result in anomalous data instances (i.e., outliers) that do not conform to the expected pattern of regular data instances. With sparse multivariate data obtained from geotechnical site investigation, it is impossible to identify outliers with certainty, because outliers distort the statistics of geotechnical parameters and data sparsity adds statistical uncertainty. This paper develops a probabilistic outlier detection method for sparse multivariate data obtained from geotechnical site investigation. The proposed approach quantifies the outlying probability of each data instance based on the Mahalanobis distance and labels as outliers those instances with outlying probabilities greater than 0.5. It tackles the distortion of statistics estimated from a dataset with outliers by a re-sampling technique and accounts rationally for the statistical uncertainty through Bayesian machine learning. Moreover, the proposed approach also provides a method to determine the outlying components of each outlier. The approach is illustrated and verified using simulated and real-life datasets. It properly identifies outliers among sparse multivariate data, together with their outlying components, in a probabilistic manner, and it can significantly reduce the masking effect (i.e., missing some actual outliers owing to the distortion of statistics by the outliers and to statistical uncertainty). It is also found that outliers among sparse multivariate data significantly affect the construction of the multivariate distribution of geotechnical parameters for uncertainty quantification, which emphasizes the necessity of a data cleaning process (e.g., outlier detection) for uncertainty quantification based on geoscience data.
Keywords: outlier detection; site investigation; sparse multivariate data; Mahalanobis distance; resampling by half-means; Bayesian machine learning
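The resampling core of the approach can be sketched as follows: statistics are estimated from many random half-samples so that outliers cannot distort all of them, and the outlying probability is the fraction of resamples flagging an instance. The chi-square cutoff is an illustrative choice, and the paper's full Bayesian treatment is omitted.

```python
import numpy as np
from scipy import stats

def outlying_probability(X, n_resamples=500, seed=0):
    """Outlying probability of each row of X via resampling by half-means."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    cut = stats.chi2.ppf(0.95, df=d)               # illustrative distance cutoff
    flags = np.zeros(n)
    for _ in range(n_resamples):
        half = rng.choice(n, n // 2, replace=False)
        mu = X[half].mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X[half], rowvar=False))
        diff = X - mu
        d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # squared Mahalanobis
        flags += d2 > cut
    p = flags / n_resamples
    return p, p > 0.5      # instances with outlying probability > 0.5 are outliers
```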
10. Sparse Seismic Data Reconstruction Based on a Convolutional Neural Network Algorithm (cited 1 time)
Authors: Hou Xinwei, Tong Siyou, Wang Zhongcheng, Xu Xiugang, Peng Yin, Wang Kai. Journal of Ocean University of China (SCIE, CAS, CSCD), 2023, No. 2, pp. 410-418.
At present, the acquisition of seismic data is developing toward high-precision and high-density methods. However, complex natural environments and cultural factors in many exploration areas cause difficulties in achieving uniform and dense acquisition, which makes complete seismic data collection impossible. Therefore, data reconstruction is required during processing to ensure imaging accuracy. Deep learning, a rapidly developing field, presents clear advantages in feature extraction and modeling. In this study, a convolutional neural network deep learning algorithm is applied to seismic data reconstruction. Based on the convolutional neural network algorithm and the characteristics of seismic data acquisition, two training strategies, supervised and unsupervised, are designed to reconstruct sparsely acquired seismic records. First, a supervised learning strategy is proposed for labeled data, wherein complete seismic data are segmented as the training set and randomly sampled before each training pass, thereby increasing the number of samples and the richness of the features. Second, an unsupervised learning strategy based on large samples is proposed for unlabeled data, and a rolling segmentation method is used to update (pseudo) labels and training parameters during training. Reconstruction tests on simulated and field data show that the convolutional neural network algorithm delivers better reconstruction quality and higher accuracy than compressed sensing based on the curvelet transform.
Keywords: deep learning; convolutional neural network; seismic data reconstruction; compressed sensing; sparse acquisition; supervised learning; unsupervised learning
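The supervised strategy's data preparation reduces to cutting random patches from complete records and decimating traces anew on every pass; a minimal sketch, with an assumed retention fraction, follows.

```python
import numpy as np

def make_training_pair(record, keep=0.5, patch=64, rng=np.random.default_rng()):
    """Cut a random patch from a complete record and decimate its traces.

    record: complete 2D seismic section (time samples x traces).
    keep: fraction of traces retained in the sparse input (illustrative).
    Returns (sparse input, full label) for supervised training.
    """
    nt, nx = record.shape
    i = rng.integers(0, nt - patch)
    j = rng.integers(0, nx - patch)
    target = record[i:i + patch, j:j + patch]
    mask = (rng.random(patch) < keep).astype(float)   # per-trace decimation
    return target * mask[None, :], target
```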
11. Meta-analysis with zero-event studies: a comparative study with application to COVID-19 data
Authors: Jia-Jin Wei, En-Xuan Lin, Jian-Dong Shi, Ke Yang, Zong-Liang Hu, Xian-Tao Zeng, Tie-Jun Tong. Military Medical Research (SCIE, CSCD), 2022, No. 1, pp. 126-137.
Background: Meta-analysis is a statistical method to synthesize evidence from a number of independent studies, including clinical studies with binary outcomes. In practice, zero events in one or both groups may cause statistical problems in the subsequent analysis. Methods: In this paper, taking the relative risk as the effect size, we conduct a comparative study of four continuity correction methods and a state-of-the-art method without continuity correction, namely generalized linear mixed models (GLMMs). To further advance the literature, we also introduce a new continuity correction method for estimating the relative risk. Results: In simulation studies, the new method performs well in terms of mean squared error when there are few studies. In contrast, the generalized linear mixed model performs best when the number of studies is large. In addition, a reanalysis of recent coronavirus disease 2019 (COVID-19) data shows clearly that double-zero-event studies impact the estimate of the mean effect size. Conclusions: We recommend the new method for handling zero-event studies when a meta-analysis contains few studies, and the GLMM when the number of studies is large. Double-zero-event studies may be informative, so we suggest not excluding them.
Keywords: continuity correction; coronavirus disease 2019 data; meta-analysis; relative risk; zero-event studies
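A toy computation shows why double-zero studies matter: with a 0.5 continuity correction applied to zero-event studies, the pooled relative risk changes when the double-zero study is dropped. The counts below are invented for illustration, and the 0.5 correction is only one of the four methods the paper compares.

```python
import numpy as np

studies = [(0, 50, 0, 50),    # a double-zero study
           (2, 60, 5, 60),
           (1, 40, 4, 40)]    # (events_1, n_1, events_0, n_0)

def pooled_rr(studies):
    """Inverse-variance pooled RR with a 0.5 correction for zero-event studies."""
    lrr, w = [], []
    for x1, n1, x0, n0 in studies:
        c = 0.5 if (x1 == 0 or x0 == 0) else 0.0
        a1, a0 = x1 + c, x0 + c
        m1, m0 = n1 + 2 * c, n0 + 2 * c
        lrr.append(np.log((a1 / m1) / (a0 / m0)))
        w.append(1 / (1 / a1 - 1 / m1 + 1 / a0 - 1 / m0))
    return np.exp(np.average(lrr, weights=w))

# Including vs. excluding the double-zero study shifts the pooled estimate.
print(pooled_rr(studies), pooled_rr(studies[1:]))
```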
12. Data Aggregation: A Proposed Psychometric IPD Meta-Analysis
Author: Esther Kaufmann. Open Journal of Statistics, 2018, No. 1, pp. 38-48.
Individual participant data (IPD) meta-analysis was developed to overcome several meta-analytical pitfalls of classical meta-analysis. One advantage of classical psychometric meta-analysis over IPD meta-analysis is its correction, at the aggregated unit of studies, for study differences, i.e., artifacts such as measurement error. Without these corrections at the study level, meta-analysts may mistake artifacts between studies for moderator variables. The analogous psychometric correction at the aggregation unit of individuals has so far been neglected by IPD meta-analysts. In this paper, we present an adaptation of the psychometric approach for IPD meta-analysis that accounts for differences at the aggregation unit of individuals. We introduce the reader to this approach using the aggregation of lens model studies on individual data as an example, and we lay out different application possibilities for the future (e.g., big data analysis). The suggested psychometric IPD meta-analysis supplements existing meta-analysis approaches and is a suitable alternative for future analyses.
Keywords: data aggregation; meta-analysis; bias; IPD meta-analysis; psychometric meta-analysis; big data
13. A novel sparse feature extraction method based on sparse signal via dual-channel self-adaptive TQWT (cited 4 times)
Authors: Junlin Li, Huaqing Wang, Liuyang Song. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2021, No. 7, pp. 157-169.
A sparse signal is a kind of sparse matrix that can carry fault information and simplify the signal at the same time. This can effectively reduce the cost of signal storage, improve the efficiency of data transmission, and ultimately save the cost of equipment fault diagnosis in the aviation field. At present, existing sparse decomposition methods generally extract sparse fault-characteristic signals based on orthogonal basis atoms, which limits the adaptability of sparse decomposition. In this paper, a self-adaptive atom is extracted by an improved dual-channel tunable Q-factor wavelet transform (TQWT) method to construct a self-adaptive complete dictionary. Finally, the sparse signal is obtained by the orthogonal matching pursuit (OMP) algorithm. The atoms obtained by this method are more flexible and are no longer constrained to an orthogonal basis, so they better reflect the oscillation characteristics of the signals, and the sparse signal can better capture the fault characteristics. Simulation and experimental results show that the self-adaptive dictionary with atoms extracted by the dual-channel TQWT has greater decomposition freedom and signal-matching ability than orthogonal basis dictionaries such as the discrete cosine transform (DCT), discrete Hartley transform (DHT), and discrete wavelet transform (DWT). In addition, the sparse signal extracted with the self-adaptive complete dictionary reflects the time-domain characteristics of the vibration signals and extracts the bearing fault feature frequency more accurately.
Keywords: complete dictionary; data transmission; fault diagnosis; sparse matrices; sparse signal; wavelet transform
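Once a dictionary is in hand, the final OMP step of the pipeline is standard; the sketch below uses a random unit-norm dictionary in place of the TQWT-derived atoms.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, n_atoms, k = 256, 512, 8
D = rng.standard_normal((n, n_atoms))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms

# Build a synthetic signal that truly is a k-atom combination.
coef_true = np.zeros(n_atoms)
coef_true[rng.choice(n_atoms, k, replace=False)] = rng.standard_normal(k)
signal = D @ coef_true

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
omp.fit(D, signal)                               # columns of D are the features
sparse_signal = D @ omp.coef_                    # k-atom sparse reconstruction
```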
14. Randomized Latent Factor Model for High-dimensional and Sparse Matrices from Industrial Applications (cited 14 times)
Authors: Mingsheng Shang, Xin Luo, Zhigang Liu, Jia Chen, Ye Yuan, MengChu Zhou. IEEE/CAA Journal of Automatica Sinica (EI, CSCD), 2019, No. 1, pp. 131-141.
Latent factor (LF) models are highly effective in extracting useful knowledge from high-dimensional and sparse (HiDS) matrices, which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers that may consume many iterations to reach a local optimum, resulting in considerable time cost. Hence, how to accelerate the training process of LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks to the context of LF analysis, so that the resulting model represents an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that, compared with state-of-the-art LF models, RLF achieves significantly higher computational efficiency and comparable prediction accuracy for missing data. It provides an important alternative approach to the LF analysis of HiDS matrices, which is especially desirable for industrial applications demanding highly efficient models.
Keywords: big data; high-dimensional and sparse matrix; latent factor analysis; latent factor model; randomized learning
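The gist of randomized learning carried over to LF analysis is that one factor can be drawn at random and frozen, leaving a closed-form fit for the other. The single-solve scheme below is an illustrative reading, not RLF's exact procedure.

```python
import numpy as np

def randomized_lf(R, mask, rank=20, lam=0.1, seed=0):
    """Randomized latent-factor sketch for a sparse matrix R.

    R: (m, n) matrix with arbitrary values at unobserved entries.
    mask: boolean (m, n) array, True where R is observed.
    The row factor P is random and untrained; only Q is fit, by a
    per-column ridge regression on the observed entries.
    """
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.uniform(-1, 1, (m, rank))            # random, frozen factor
    Q = np.zeros((rank, n))
    for j in range(n):                           # closed-form solve per column
        rows = np.flatnonzero(mask[:, j])
        if rows.size == 0:
            continue
        A = P[rows]
        Q[:, j] = np.linalg.solve(A.T @ A + lam * np.eye(rank), A.T @ R[rows, j])
    return P @ Q                                 # dense estimate incl. missing entries
```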
15. Unbalanced classification method using least squares support vector machine with sparse strategy for steel surface defects with label noise (cited 1 time)
Authors: Li-ming Liu, Mao-xiang Chu, Rong-fen Gong, Xin-yu Qi. Journal of Iron and Steel Research International (SCIE, EI, CAS, CSCD), 2020, No. 12, pp. 1407-1419.
The least squares support vector machine (LS-SVM) plays an important role in steel surface defect classification because of its high speed. However, the defect samples obtained from a real production line may be noisy, and LS-SVM suffers from poor classification performance when noise samples are present. Thus, in the classification stage, it is necessary to design an effective algorithm to process the defect dataset obtained from the real production line. To this end, an adaptive weight function is employed to reduce the adverse effect of noise samples. Moreover, although LS-SVM offers high speed, its computational complexity remains high when the number of training samples is large, while the time for steel surface defect classification should be as short as possible. Therefore, a sparse strategy is adopted to prune the training samples. Finally, since steel surface defect classification is an unbalanced-data classification problem, to which the plain LS-SVM algorithm is not well suited, unbalanced-data information is introduced to improve the classification performance. Considering all of the above factors, an improved LS-SVM classification model, termed ILS-SVM, is proposed. Experimental results show that the new algorithm offers high speed and strong anti-noise ability.
Keywords: steel surface defect; least squares support vector machine; anti-noise; sparseness; unbalanced data
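The speed the abstract leans on comes from the LS-SVM dual reducing to a single linear system; a plain (unweighted, unpruned) binary version looks like this, with the weighting, pruning, and unbalance terms of ILS-SVM omitted.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train a binary LS-SVM (labels in {-1, +1}) by one linear solve.

    The dual system is
        [ 0   1^T         ] [b]   [0]
        [ 1   K + I/gamma ] [a] = [y]
    with an RBF kernel K; no iterative QP is needed.
    """
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y.astype(float)]))
    b, alpha = sol[0], sol[1:]

    def predict(Xnew):
        d2n = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.sign(np.exp(-d2n / (2 * sigma ** 2)) @ alpha + b)
    return predict
```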
16. Pseudo Zernike Moment and Deep Stacked Sparse Autoencoder for COVID-19 Diagnosis (cited 1 time)
Authors: Yu-Dong Zhang, Muhammad Attique Khan, Ziquan Zhu, Shui-Hua Wang. Computers, Materials & Continua (SCIE, EI), 2021, No. 12, pp. 3145-3162.
(Aim) COVID-19 is an ongoing infectious disease. It had caused more than 107.45 million confirmed cases and 2.35 million deaths as of 11 February 2021. Traditional computer vision methods have achieved promising results in automatic smart diagnosis. (Method) This study proposes a novel deep learning method to obtain better performance. We use the pseudo-Zernike moment (PZM), derived from the Zernike moment, as the extracted features. Two settings are introduced: (i) the image plane over the unit circle; and (ii) the image plane inside the unit circle. Afterward, we use a deep stacked sparse autoencoder (DSSAE) as the classifier. Besides, multiple-way data augmentation is chosen to overcome overfitting; it is based on Gaussian noise, salt-and-pepper noise, speckle noise, horizontal and vertical shear, rotation, gamma correction, random translation, and scaling. (Results) 10 runs of 10-fold cross validation show that our PZM-DSSAE method achieves a sensitivity of 92.06% ± 1.54%, a specificity of 92.56% ± 1.06%, a precision of 92.53% ± 1.03%, and an accuracy of 92.31% ± 1.08%. Its F1 score, MCC, and FMI reach 92.29% ± 1.10%, 84.64% ± 2.15%, and 92.29% ± 1.10%, respectively. The AUC of our model is 0.9576. (Conclusion) We demonstrate that "image plane over unit circle" obtains better results than "image plane inside the unit circle". Besides, the proposed PZM-DSSAE model outperforms eight state-of-the-art approaches.
Keywords: pseudo-Zernike moment; stacked sparse autoencoder; deep learning; COVID-19; multiple-way data augmentation; medical image analysis
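A few of the listed augmentation ways are easy to sketch for a grayscale image in [0, 1]; the parameter ranges below are illustrative, not the paper's.

```python
import numpy as np
from scipy import ndimage

def augment(img, rng=np.random.default_rng()):
    """Apply one randomly chosen augmentation from a multiple-way pool.

    Covers Gaussian noise, salt-and-pepper noise, rotation, translation,
    and gamma correction; shear, speckle, and scaling are omitted here.
    """
    def salt_pepper(x):
        m = rng.random(x.shape)
        x = x.copy()
        x[m < 0.01] = 0.0        # pepper
        x[m > 0.99] = 1.0        # salt
        return x

    ops = [
        lambda x: np.clip(x + rng.normal(0, 0.02, x.shape), 0, 1),           # Gaussian noise
        salt_pepper,                                                          # salt-and-pepper
        lambda x: ndimage.rotate(x, rng.uniform(-15, 15),
                                 reshape=False, mode='nearest'),              # rotation
        lambda x: ndimage.shift(x, rng.uniform(-5, 5, 2), mode='nearest'),    # translation
        lambda x: np.clip(x, 1e-6, 1) ** rng.uniform(0.7, 1.4),               # gamma correction
    ]
    return ops[rng.integers(len(ops))](img)
```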
17. Metasample-Based Robust Sparse Representation for Tumor Classification (cited 1 time)
Authors: Bin Gan, Chun-Hou Zheng, Jin-Xing Liu. Engineering, 2013, No. 5, pp. 78-83.
In this paper, based on sparse representation classification and robustness ideas, we propose a new classifier, named MRSRC (Metasample-Based Robust Sparse Representation Classifier), for DNA microarray data classification. First, we extract metasamples from the training samples. Second, a weight matrix W is added when solving an l1-regularized least squares problem. Finally, each test sample is classified according to its sparse coefficient vector. Experimental results on DNA microarray data classification show that the proposed algorithm is efficient.
Keywords: DNA microarray data; sparse representation; classification; MRSRC; robust
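Stripped of the metasample extraction and the robust weight matrix W, the sparse-representation classification step reduces to an l1 coding followed by a per-class residual comparison, roughly as follows.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(A, labels, x, alpha=0.01):
    """Sparse-representation classification of one test sample x.

    A: (n_features, n_columns) matrix whose columns are (meta)samples.
    labels: class label of each column.
    The sample is coded as a sparse combination of all columns and
    assigned to the class with the smallest reconstruction residual.
    """
    lasso = Lasso(alpha=alpha, fit_intercept=False, max_iter=10000)
    lasso.fit(A, x)                                  # l1-regularized least squares
    coef = lasso.coef_
    best, best_res = None, np.inf
    for c in np.unique(labels):
        part = np.where(labels == c, coef, 0.0)      # keep only class-c coefficients
        res = np.linalg.norm(x - A @ part)
        if res < best_res:
            best, best_res = c, res
    return best
```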
18. Meta-analysis of bivariate P values
Author: Mehmet Kocak. World Journal of Meta-Analysis, 2014, No. 4, pp. 179-185.
AIM: To propose a new meta-analysis method for bivariate P values that accounts for their paired structure.
METHODS: Studies that test two different features on the same sample give rise to bivariate P values. A relevant example is testing for periodicity as well as expression in time-course gene expression studies. Kocak et al (2010) use George and Mudholkar's (1983) "difference of two logit-sums" method to pool bivariate P values across independent experiments, assuming independence within a pair. As bivariate P values need not be independent within a given study, we propose a new meta-analysis approach for pooling bivariate P values across independent experiments that accounts for the potential correlation between paired P values. We compare the "difference of two logit-sums" method with our novel approach in terms of sensitivity and specificity through extensive simulations, generating P-value samples from the most commonly used tests, namely the Z test, t test, chi-square test, and F test, with varying sample sizes and correlation structures.
RESULTS: The simulation results show that our new meta-analysis approach for correlated and uncorrelated bivariate P values has much more desirable sensitivity and specificity than the existing method, which treats each member of the paired P values as independent. We also compare these meta-analysis approaches on bivariate P values from periodicity and expression tests of 4936 S. pombe genes from 10 independent time-course experiments and show that our new approach ranks the periodic, conserved, and cycling genes significantly higher and detects many more periodic, "conserved", and "cycling" genes among the top 100 genes than the "difference of two logit-sums" method. Finally, we use our meta-analytic approach to compare the relative evidence for the association of pre-term birth with preschool wheezing versus preschool asthma.
CONCLUSION: The new meta-analysis method has much better sensitivity and specificity characteristics than the "difference of two logit-sums" method, and it is not computationally more expensive.
Keywords: meta-analysis; bivariate P values; independent experiments; cell cycle data
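The baseline being improved on, the difference of two logit-sums, can be written down directly; the normal approximation below assumes independence within each pair, which is exactly the assumption the paper's new method drops.

```python
import numpy as np
from scipy import stats

def difference_of_logit_sums(p1, p2):
    """Pool paired P values by the difference of two logit sums.

    Under H0, each logit(p) is standard logistic (variance pi^2 / 3);
    treating the two members of a pair as independent, the difference of
    the two sums over m studies is approximately N(0, 2 * m * pi^2 / 3).
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    m = len(p1)
    d = np.sum(np.log(p1 / (1 - p1))) - np.sum(np.log(p2 / (1 - p2)))
    z = d / np.sqrt(2 * m * np.pi ** 2 / 3)
    return 2 * stats.norm.sf(abs(z))       # two-sided pooled P value
```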
19. Robust low frequency seismic bandwidth extension with a U-net and synthetic training data
Authors: P. Zwartjes, J. Yoo. Artificial Intelligence in Geosciences, 2025, No. 1, pp. 33-45.
This work focuses on enhancing low frequency seismic data using a convolutional neural network trained on synthetic data. Traditional seismic data often lack both high and low frequencies, which are essential for detailed geological interpretation and various geophysical applications; low frequency data are particularly valuable for reducing wavelet sidelobes and improving full waveform inversion (FWI). Conventional methods for bandwidth extension include seismic deconvolution and sparse inversion, which have limitations in recovering low frequencies. The study explores the potential of the U-net, which has been successful in other geophysical applications such as noise attenuation and seismic resolution enhancement. The novelty of our approach is that we do not rely on computationally expensive finite difference modelling to create training data. Instead, our synthetic training data are created from individual, randomly perturbed events with variations in bandwidth, making the method more adaptable to different datasets than previous deep learning methods. The method was tested on both synthetic and real seismic data, demonstrating effective low frequency reconstruction and sidelobe reduction. A synthetic full waveform inversion to recover a velocity model and a seismic amplitude inversion to estimate acoustic impedance demonstrate the validity and benefit of the proposed method. Overall, the study presents a robust approach to seismic bandwidth extension using deep learning, emphasizing the importance of diverse, well-designed, yet computationally inexpensive synthetic training data.
Keywords: low frequency seismic data enhancement; bandwidth extension; convolutional neural network; seismic deconvolution; sparse inversion; synthetic training data; wavelet sidelobes
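The synthetic-data strategy is cheap to reproduce in outline: sum random Ricker events with per-event bandwidths, keep the broadband trace as the target, and suppress its low frequencies to form the input. The differencing used as a low-cut below is a crude stand-in for a proper filter, and all parameter ranges are assumptions.

```python
import numpy as np

def synthetic_pair(nt=256, dt=0.004, n_events=8, rng=np.random.default_rng()):
    """Build one (band-limited input, broadband target) training trace.

    Random events with random times, amplitudes, and Ricker peak
    frequencies; no finite-difference modelling is involved.
    """
    t = np.arange(nt) * dt
    target = np.zeros(nt)
    for _ in range(n_events):
        t0 = rng.uniform(0.1, t[-1] - 0.1)
        f = rng.uniform(15, 60)                        # peak frequency per event
        amp = rng.uniform(-1, 1)
        arg = (np.pi * f * (t - t0)) ** 2
        target += amp * (1 - 2 * arg) * np.exp(-arg)   # Ricker wavelet
    low_cut = np.gradient(target)                      # crude low-frequency suppression
    low_cut /= np.abs(low_cut).max() + 1e-12
    return low_cut, target                             # (network input, label)
```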
20. Airborne electromagnetic data denoising based on dictionary learning (cited 7 times)
Authors: Xue Shu-yang, Yin Chang-chun, Su Yang, Liu Yun-he, Wang Yong, Liu Cai-hua, Xiong Bin, Sun Huai-feng. Applied Geophysics (SCIE, CSCD), 2020, No. 2, pp. 306-313, 317.
Time-domain airborne electromagnetic (AEM) data are frequently subject to interference from various types of noise, which can reduce data quality and affect data inversion and interpretation. Traditional denoising methods primarily deal with the data directly, without analyzing them in detail; thus, the results are not always satisfactory. In this paper, we propose a method based on dictionary learning for EM data denoising. The method uses dictionary learning to perform feature analysis and to extract and reconstruct the true signal; in the process, the random noise is filtered out as residuals. To verify the effectiveness of this dictionary learning approach, we use a fixed overcomplete discrete cosine transform (ODCT) dictionary algorithm, the method-of-optimal-directions (MOD) dictionary learning algorithm, and the K-singular value decomposition (K-SVD) dictionary learning algorithm to denoise decay curves at single points and profile data for different time channels in time-domain AEM. The results show clear differences among the three dictionaries for denoising AEM data, with the K-SVD dictionary achieving the best performance.
Keywords: time-domain AEM; data processing; denoising; dictionary learning; sparse representation
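In outline, single-curve denoising by dictionary learning looks as follows; scikit-learn's mini-batch learner stands in for the MOD/K-SVD updates compared in the paper, and the patch and atom counts are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def dictionary_denoise(curve, patch=32, n_atoms=64, k=4):
    """Denoise a 1D decay curve by learned-dictionary sparse coding.

    Overlapping patches are sparse-coded on a dictionary learned from the
    noisy data itself; whatever the k-atom code cannot represent is
    treated as noise and discarded as residual.
    """
    idx = np.arange(len(curve) - patch + 1)
    P = np.stack([curve[i:i + patch] for i in idx])          # overlapping patches
    dl = MiniBatchDictionaryLearning(n_components=n_atoms,
                                     transform_algorithm='omp',
                                     transform_n_nonzero_coefs=k,
                                     random_state=0)
    code = dl.fit(P).transform(P)                            # sparse codes per patch
    P_hat = code @ dl.components_                            # sparse reconstruction
    out = np.zeros(len(curve))
    cnt = np.zeros(len(curve))
    for row, i in zip(P_hat, idx):                           # average the overlaps
        out[i:i + patch] += row
        cnt[i:i + patch] += 1
    return out / cnt
```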