Journal Articles
10,894 articles found
Research on the Data-Driven Fast Calculation Method of the Temperature Field Distribution of Valve-Side RIP Bushing Used in UHV DC Converter Transformer
1
Authors: Zehua Wu, Luming Xin, Jianwei Cheng, Baoying Wu, Zeng Qiang, Qingyu Wang, Linjie Zhao, Zhiye Du. High Voltage, 2025, No. 5, pp. 1210-1220 (11 pages)
Improving the computational efficiency of multi-physics simulation and constructing a real-time online simulation method is an important way to realise the virtual-real fusion of entities and data of power equipment with digital twin. In this paper, a data-driven fast calculation method for the temperature field of resin impregnated paper (RIP) bushing used in converter transformer valve-side is proposed, which combines data dimensionality reduction technology and a surrogate model. After applying the finite element algorithm to obtain the temperature field distribution of the RIP bushing under different operation conditions as the input dataset, the proper orthogonal decomposition (POD) algorithm is adopted to reduce the order and obtain the low-dimensional projection of the temperature data. On this basis, the surrogate model is used to construct the mapping relationship between the sensor monitoring data and the low-dimensional projection, so that it can achieve the fast calculation and reconstruction of the temperature field distribution. The results show that this method can effectively and quickly calculate the overall temperature field distribution of the RIP bushing. The maximum relative error and the average relative error are less than 4.5% and 0.25%, respectively. The calculation speed is at the millisecond level, meeting the needs of digitalisation of power equipment.
Keywords: surrogate model; data-driven fast calculation method; resin impregnated paper; digital twin; power equipment; computational efficiency; finite element
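The abstract above outlines a two-stage workflow: POD compresses precomputed temperature fields into a few modal coefficients, and a surrogate model maps sensor readings to those coefficients so the full field can be reconstructed quickly online. The following is a minimal sketch of that idea, assuming a hypothetical snapshot matrix of simulated fields and synthetic sensor data; the surrogate choice (kernel ridge regression) and all sizes are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Hypothetical training data: each column of `snapshots` is one simulated
# temperature field (n_nodes values); `sensors` holds the matching sensor readings.
rng = np.random.default_rng(0)
n_nodes, n_cases, n_sensors = 5000, 200, 6
snapshots = rng.normal(size=(n_nodes, n_cases))       # placeholder for FEM results
sensors = rng.normal(size=(n_cases, n_sensors))       # placeholder monitoring data

# --- POD via SVD: keep the first r modes ---
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(snapshots - mean_field, full_matrices=False)
r = 10
modes = U[:, :r]                                      # spatial POD modes
coeffs = modes.T @ (snapshots - mean_field)           # low-dimensional projections (r x n_cases)

# --- Surrogate: sensor readings -> POD coefficients ---
surrogate = KernelRidge(kernel="rbf", alpha=1e-3)
surrogate.fit(sensors, coeffs.T)

# --- Online stage: reconstruct the full field from new sensor data ---
new_sensor = sensors[:1]                              # pretend this is a live measurement
new_coeffs = surrogate.predict(new_sensor).T          # (r, 1)
reconstructed_field = mean_field + modes @ new_coeffs # fast, millisecond-scale evaluation
print(reconstructed_field.shape)
```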
Data credibility evaluation method for formation water in oil and gas fields and its influencing factors
2
Authors: LI Wei, XIE Wuren, WU Saijun, SHUAI Yanhua, MA Xingzhi. Petroleum Exploration and Development, 2025, No. 2, pp. 361-376 (16 pages)
The formation water sample in oil and gas fields may be polluted in the processes of testing, trial production, collection, storage, transportation and analysis, so that the properties of the formation water are not reflected truly. This paper discusses identification methods and a data credibility evaluation method for formation water in oil and gas fields of petroliferous basins within China. The results of the study show that: (1) the identification methods of formation water include the basic single-factor methods based on physical characteristics, water composition characteristics, water type characteristics, and characteristic coefficients, as well as the comprehensive evaluation method of data credibility proposed on this basis, which mainly relies on the correlation analysis of the sodium chloride coefficient and the desulfurization coefficient combined with geological background evaluation; (2) the basic identification methods enable the preliminary identification of hydrochemical data and the preliminary screening of data on site, while the proposed comprehensive method realizes the evaluation by classifying CaCl2-type water into types A-I to A-VI and NaHCO3-type water into types B-I to B-IV, so that researchers can make an in-depth evaluation of the credibility of hydrochemical data and an analysis of influencing factors; (3) when the basic methods are used, formation water containing anions such as CO₃²⁻, OH⁻ and NO₃⁻, or formation water whose sodium chloride coefficient and desulphurization coefficient do not match the geological setting, is regarded as invaded by surface water or polluted by working fluid; (4) when the comprehensive method is used, although A-I, A-II, B-I and B-II formation water is believed to have high credibility, its data credibility can be evaluated effectively and accurately only when combined with geological setting analysis of factors such as formation environment, sampling conditions, condensate water, acid fluid, leaching of ancient weathering crust, and ancient atmospheric fresh water.
Keywords: oil and gas field hydrogeology; formation water; hydrochemical data; data credibility evaluation method; hydrochemical characteristic indicator; influencing factor
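The sodium chloride coefficient and desulfurization coefficient mentioned above are hydrochemical characteristic coefficients computed from ion concentrations expressed in milliequivalents. The abstract does not give the exact definitions used by the authors, so the small arithmetic sketch below uses the conventional Sulin-style forms (rNa⁺/rCl⁻ and 100·rSO₄²⁻/rCl⁻) as an assumption.

```python
# Conventional hydrochemical characteristic coefficients (assumed Sulin-style
# definitions; the paper may use variants). Concentrations in mg/L are first
# converted to milliequivalents per litre (meq/L).
EQUIV_WEIGHT = {"Na+": 22.99, "Cl-": 35.45, "SO4^2-": 96.06 / 2}  # g per equivalent

def meq(conc_mg_per_l: float, ion: str) -> float:
    """Convert a concentration in mg/L to meq/L."""
    return conc_mg_per_l / EQUIV_WEIGHT[ion]

def sodium_chloride_coefficient(na_mg_l: float, cl_mg_l: float) -> float:
    """rNa+/rCl-: unusually high values often hint at surface-water or working-fluid influence."""
    return meq(na_mg_l, "Na+") / meq(cl_mg_l, "Cl-")

def desulfurization_coefficient(so4_mg_l: float, cl_mg_l: float) -> float:
    """100 * rSO4^2-/rCl-: low values indicate a well-sealed, reducing environment."""
    return 100.0 * meq(so4_mg_l, "SO4^2-") / meq(cl_mg_l, "Cl-")

# Example: a hypothetical CaCl2-type brine sample
print(sodium_chloride_coefficient(na_mg_l=45000.0, cl_mg_l=90000.0))
print(desulfurization_coefficient(so4_mg_l=300.0, cl_mg_l=90000.0))
```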
Optimized air-ground data fusion method for mine slope modeling
3
Authors: LIU Dan, HUANG Man, TAO Zhigang, HONG Chenjie, WU Yuewei, FAN En, YANG Fei. Journal of Mountain Science (SCIE, CSCD), 2024, No. 6, pp. 2130-2139 (10 pages)
Refined 3D modeling of mine slopes is pivotal for precise prediction of geological hazards. Aiming at the inadequacy of existing single modeling methods in comprehensively representing the overall and localized characteristics of mining slopes, this study introduces a new method that fuses model data from unmanned aerial vehicle (UAV) tilt photogrammetry and 3D laser scanning through a data alignment algorithm based on control points. First, the mini batch K-Medoids algorithm is utilized to cluster the point cloud data from ground 3D laser scanning. Then, the elbow rule is applied to determine the optimal cluster number (K0), and the feature points are extracted. Next, the nearest neighbor point algorithm is employed to match the feature points obtained from UAV tilt photogrammetry, and the internal point coordinates are adjusted through the distance-weighted average to construct a 3D model. Finally, by integrating an engineering case study, the K0 value is determined to be 8, with a matching accuracy between the two model datasets ranging from 0.0669 to 1.0373 mm. Therefore, compared with the modeling method utilizing the K-medoids clustering algorithm, the new modeling method significantly enhances the computational efficiency, the accuracy of selecting the optimal number of feature points in 3D laser scanning, and the precision of the 3D model derived from UAV tilt photogrammetry. This method provides a research foundation for constructing mine slope models.
Keywords: air-ground data fusion method; mini batch K-Medoids algorithm; elbow rule; optimal cluster number; 3D laser scanning; UAV tilt photogrammetry
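The clustering step described above (mini batch K-Medoids plus the elbow rule for choosing K0) can be illustrated with a compact sketch. The code below uses a plain K-Medoids loop rather than the mini-batch variant and random 3D points rather than a real point cloud, so it only demonstrates the selection logic, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist

def kmedoids(points, k, n_iter=50, seed=0):
    """Plain K-Medoids (PAM-like alternation); returns medoid indices and total cost."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(n_iter):
        labels = cdist(points, points[medoids]).argmin(axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members) == 0:
                continue
            intra = cdist(points[members], points[members]).sum(axis=1)
            new_medoids[j] = members[intra.argmin()]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    cost = cdist(points, points[medoids]).min(axis=1).sum()
    return medoids, cost

# Synthetic stand-in for a terrestrial laser-scanning point cloud (8 loose clusters)
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 3))
                   for c in rng.uniform(0, 10, size=(8, 3))])

# Elbow rule: pick the K where the decreasing cost curve bends most sharply
ks = range(2, 13)
costs = np.array([kmedoids(cloud, k)[1] for k in ks])
elbow_k = ks[int(np.argmax(np.diff(costs, 2))) + 1]  # largest second difference
print("suggested K0:", elbow_k)
```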
PSRDP:A Parallel Processing Method for Pulsar Baseband Data
4
Authors: Ya-Zhou Zhang, Hai-Long Zhang, Jie Wang, Xin-Chen Ye, Shuang-Qiang Wang, Xu Du, Han Wu, Ting Zhang, Shao-Cong Guo, Meng Zhang. Research in Astronomy and Astrophysics (SCIE, CAS, CSCD), 2024, No. 1, pp. 300-310 (11 pages)
To address the problem of real-time processing of ultra-wide bandwidth pulsar baseband data, we designed and implemented a pulsar baseband data processing algorithm (PSRDP) based on GPU parallel computing technology. PSRDP can perform operations such as baseband data unpacking, channel separation, coherent dedispersion, Stokes detection, phase and folding period prediction, and folding integration in GPU clusters. We tested the algorithm using the J0437-4715 pulsar baseband data generated by the CASPSR and Medusa backends of the Parkes telescope, and the J0332+5434 pulsar baseband data generated by the self-developed backend of the Nan Shan Radio Telescope, and obtained the pulse profiles of each baseband data set. Through experimental analysis, we found that the pulse profiles generated by the PSRDP algorithm in this paper are essentially consistent with the processing results of the Digital Signal Processing Software for Pulsar Astronomy (DSPSR), which verified the effectiveness of the PSRDP algorithm. Furthermore, using the same baseband data, we compared the processing speed of PSRDP with DSPSR, and the results showed that PSRDP was not slower than DSPSR in terms of speed. The theoretical and technical experience gained from the PSRDP algorithm research in this article lays a technical foundation for the real-time processing of QTT (Qi Tai radio Telescope) ultra-wide bandwidth pulsar baseband data.
Keywords: (stars:) pulsars: general; methods: data analysis; techniques: miscellaneous
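Among the operations listed in the abstract (unpacking, channel separation, coherent dedispersion, Stokes detection, folding), the folding-integration step is the simplest to illustrate. The sketch below folds a synthetic time series at an assumed pulse period on the CPU with NumPy; the real PSRDP pipeline runs these stages on GPUs and also handles dedispersion and polarisation, which are omitted here.

```python
import numpy as np

def fold_profile(samples, t_sample, period, n_bins=128):
    """Fold a 1-D intensity time series at a fixed period into a pulse profile."""
    t = np.arange(samples.size) * t_sample
    phase_bin = ((t % period) / period * n_bins).astype(int)
    profile = np.bincount(phase_bin, weights=samples, minlength=n_bins)
    counts = np.bincount(phase_bin, minlength=n_bins)
    return profile / np.maximum(counts, 1)

# Synthetic data: a weak periodic pulse buried in noise (stand-in for detected power)
rng = np.random.default_rng(0)
t_sample, period = 64e-6, 5.757e-3          # assumed sampling time and pulse period (s)
t = np.arange(2_000_000) * t_sample
signal = 0.2 * np.exp(-0.5 * (((t % period) / period - 0.5) / 0.02) ** 2)
data = signal + rng.normal(size=t.size)

profile = fold_profile(data, t_sample, period)
print("peak bin:", int(profile.argmax()))   # the folded profile shows the pulse near phase 0.5
```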
Basic methods and applications of a multiphase-flow solver in fluid-body interaction problems
5
Authors: Hou-sheng Zhang, Biao Huang, Xin Zhao, Jie Chen, Qing-chen Dong. Journal of Hydrodynamics, 2025, No. 3, pp. 514-526 (13 pages)
This paper introduces MultiPHydro, an in-house computational solver developed for simulating hydrodynamic and multiphase fluid-body interaction problems, with a specialized focus on multiphase flow dynamics. The solver employs the boundary data immersion method (BDIM) as its core numerical framework for handling fluid-solid interfaces. We briefly outline the governing equations and physical models integrated within MultiPHydro, including weakly-compressible flows, cavitation modeling, and the volume of fluid (VOF) method with piecewise-linear interface reconstruction. The solver's accuracy and versatility are demonstrated through several numerical benchmarks: single-phase flow past a cylinder shows less than 10% error in vortex shedding frequency and under 4% error in hydrodynamic resistance; cavitating flows around a hydrofoil yield errors below 7% in maximum cavity length; water-entry cases exhibit under 5% error in displacement and velocity; and water-exit simulations predict cavity length within 7.2% deviation. These results confirm the solver's capability to reliably model complex fluid-body interactions across various regimes. Future developments will focus on refining mathematical models, improving the modeling of phase-interaction mechanisms, and implementing GPU-accelerated parallel algorithms to enhance compatibility with domestically-developed operating systems and deep computing units (DCUs).
Keywords: MultiPHydro; boundary data immersion method; in-house code; multiphase flows; fluid-solid interaction
Data-Enhanced Low-Cycle Fatigue Life Prediction Model Based on Nickel-Based Superalloys
6
Authors: Luopeng Xu, Lei Xiong, Rulun Zhang, Jiajun Zheng, Huawei Zou, Zhixin Li, Xiaopeng Wang, Qingyuan Wang. Acta Mechanica Solida Sinica, 2025, No. 4, pp. 612-623 (12 pages)
To overcome the challenges of limited experimental data and improve the accuracy of empirical formulas, we propose a low-cycle fatigue (LCF) life prediction model for nickel-based superalloys using a data augmentation method. This method utilizes a variational autoencoder (VAE) to generate low-cycle fatigue data and form an augmented dataset. The Pearson correlation coefficient (PCC) is employed to verify the similarity of feature distributions between the original and augmented datasets. Six machine learning models, namely random forest (RF), artificial neural network (ANN), support vector machine (SVM), gradient-boosted decision tree (GBDT), eXtreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost), are utilized to predict the LCF life of nickel-based superalloys. Results indicate that the proposed data augmentation method based on VAE can effectively expand the dataset, and the mean absolute error (MAE), root mean square error (RMSE), and R-squared (R²) values achieved using the CatBoost model, with respective values of 0.0242, 0.0391, and 0.9538, are superior to those of the other models. The proposed method reduces the cost and time associated with LCF experiments and accurately establishes the relationship between fatigue characteristics and LCF life of nickel-based superalloys.
Keywords: nickel-based superalloy; low-cycle fatigue (LCF); fatigue life prediction; data augmentation method; machine learning model; variational autoencoder (VAE)
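The core of the data augmentation step described above is a variational autoencoder trained on tabular fatigue features. Below is a minimal PyTorch sketch of such a tabular VAE on synthetic data; the layer sizes, latent dimension, KL weight and feature count are illustrative assumptions, and the PCC check and CatBoost prediction stage from the paper are not reproduced.

```python
import torch
import torch.nn as nn

class TabularVAE(nn.Module):
    def __init__(self, n_features, latent_dim=4, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, n_features))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterisation trick
        return self.decoder(z), mu, logvar

# Synthetic stand-in for normalised LCF features (strain amplitude, temperature, etc.)
x = torch.randn(150, 5)
vae = TabularVAE(n_features=5)
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)

for epoch in range(500):
    recon, mu, logvar = vae(x)
    recon_loss = ((recon - x) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss = recon_loss + 0.1 * kl       # KL weight is a tunable assumption
    opt.zero_grad(); loss.backward(); opt.step()

# Generate augmented samples by decoding draws from the latent prior
with torch.no_grad():
    augmented = vae.decoder(torch.randn(300, 4))
print(augmented.shape)   # (300, 5): synthetic rows to append to the training set
```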
Determining the Observational Epoch of Shi's Star Catalog Using the Generalized Hough Transform Method
7
Authors: Boliang He, Yongheng Zhao. Research in Astronomy and Astrophysics, 2025, No. 7, pp. 70-78 (9 pages)
Ancient stellar observations are a valuable cultural heritage, profoundly influencing both cultural domains and modern astronomical research. Shi's Star Catalog (石氏星经), the oldest extant star catalog in China, faces controversy regarding its observational epoch. Determining this epoch via precession assumes accurate ancient coordinates and correspondence with contemporary stars, posing significant challenges. This study introduces a novel method using the Generalized Hough Transform to ascertain the catalog's observational epoch. This approach statistically accommodates errors in ancient coordinates and discrepancies between ancient and modern stars, addressing limitations in prior methods. Our findings date Shi's Star Catalog to the 4th century BCE, with 2nd-century CE adjustments. In comparison, the Western tradition's oldest known catalog, the Ptolemaic Star Catalog (2nd century CE), likely derives from the Hipparchus Star Catalog (2nd century BCE). Thus, Shi's Star Catalog is identified as the world's oldest known star catalog. Beyond establishing its observation period, this study aims to consolidate and digitize these cultural artifacts.
Keywords: history and philosophy of astronomy; catalogs; methods: data analysis
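The Generalized Hough Transform approach described above essentially votes, over candidate observation epochs, for the epoch at which precessed modern star positions best line up with the ancient coordinates. The toy sketch below illustrates that voting idea with a crude precession-in-ecliptic-longitude approximation (about 50.3 arcseconds per year) and randomly generated catalogs; the authors' method works with properly modelled coordinates and error statistics, which this simplified version does not attempt.

```python
import numpy as np

ARCSEC_PER_YEAR = 50.3 / 3600.0   # approximate general precession in ecliptic longitude (deg/yr)

def votes_for_epoch(ancient_lon, ancient_lat, modern_lon, modern_lat, year, tol_deg=1.0):
    """Count ancient stars that match some precessed modern star within a tolerance."""
    shift = ARCSEC_PER_YEAR * (2000 - year)            # precess modern longitudes back to `year`
    lon_at_epoch = (modern_lon - shift) % 360.0
    n_votes = 0
    for lo, la in zip(ancient_lon, ancient_lat):
        dlon = np.abs(lon_at_epoch - lo)
        dlon = np.minimum(dlon, 360.0 - dlon)
        if np.any((dlon < tol_deg) & (np.abs(modern_lat - la) < tol_deg)):
            n_votes += 1
    return n_votes

# Toy data: a "modern" catalog and an "ancient" one observed around 350 BCE with noise
rng = np.random.default_rng(2)
modern_lon, modern_lat = rng.uniform(0, 360, 120), rng.uniform(-30, 30, 120)
true_year = -350
ancient_lon = (modern_lon - ARCSEC_PER_YEAR * (2000 - true_year) + rng.normal(0, 0.3, 120)) % 360
ancient_lat = modern_lat + rng.normal(0, 0.3, 120)

candidate_years = np.arange(-800, 400, 25)
votes = [votes_for_epoch(ancient_lon, ancient_lat, modern_lon, modern_lat, y) for y in candidate_years]
print("best-fitting epoch (toy example):", candidate_years[int(np.argmax(votes))])
```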
A Combined Smoothing Method of Ensemble Pulsar Timescale and Application for Pulsar Atomic Clock Combined Timescale
8
Authors: Tinggao Yang, Minglei Tong, Bian Li, Zhehao Zhang, Xingzhi Zhu, Yuping Gao. Research in Astronomy and Astrophysics, 2025, No. 2, pp. 80-92 (13 pages)
The clock difference between the ensemble pulsar timescale (PT) and International Atomic Time (TAI), PT-TAI, derived from the International Pulsar Timing Array (IPTA) data set shows a variation trend very similar to that of Terrestrial Time, TT(BIPMXXXX)-TAI, but PT has a larger measurement error. In this paper, we discuss the smoothing of PT using a combined smoothing filter and compare the results with those from other filters. The clock difference sequence PT-TAI and the first time derivative series of TT(BIPMXXXX)-TAI can be combined by a combined smoothing filter to yield two smooth curves tied by constraints assuring that the latter is the derivative of the former. The ensemble pulsar time IPTA2016 with respect to TAI published by G. Hobbs et al. and the first time derivative series of TT(BIPM2017)-TAI with quadratic polynomial terms removed are processed by the combined smoothing filter in order to demonstrate the properties of the smoothed results. How to correctly estimate the two smoothing coefficients is described and the output results of the combined smoothing filter are analyzed. The results show that the combined smoothing method efficiently removes high-frequency noise from the two input data series, and the smoothed PT-TAI data combine the long-term fractional frequency stability of the pulsar time with the frequency accuracy of the terrestrial time. Fractional frequency stability analysis indicates that both the short and medium time interval stability of the smoothed PT-TAI is improved while its original long-term frequency stability level is kept. The combined smoothing filter is more suitable for smoothing observational pulsar timescale data than any filter that only smooths a single pulsar time series. The smoothed pulsar time produced by the combined smoothing filter is a pulsar atomic time combined timescale, and this kind of combined timescale can also be used as terrestrial time.
Keywords: methods: data analysis; time; (stars:) pulsars: general
BYSpec:An Automatic Data Reduction Package for BFOSC and YFOSC Spectroscopic Data
9
Authors: Zi-Chong Zhang, Jun-Bo Zhang, Ju-Jia Zhang, De-Yang Song, Jing Chen, Ming-Yi Ding, Nan Zhou, Liang Wang, Kai Zhang. Research in Astronomy and Astrophysics, 2025, No. 2, pp. 182-196 (15 pages)
BFOSC and YFOSC are the most frequently used instruments on the Xinglong 2.16 m telescope and the Lijiang 2.4 m telescope, respectively. We developed a software package named "BYSpec" (BFOSC and YFOSC Spectra Reduction Package) dedicated to automatically reducing the long-slit and echelle spectra obtained by these two instruments. The package supports bias and flat-fielding correction, order location, background subtraction, automatic wavelength calibration, and absolute flux calibration. The optimal extraction method maximizes the signal-to-noise ratio and removes most of the cosmic rays imprinted in the spectra. A comparison with the 1D spectra reduced with IRAF verifies the reliability of the results. This open-source software is publicly available to the community.
Keywords: methods: data analysis; instrumentation: spectrographs; techniques: image processing
Detecting the Lunar Wrinkle Ridges Through Deep Learning Based on DEM and Aspect Data
10
Authors: Xin Lu, Jiacheng Sun, Gaofeng Shu, Jianhui Zhao, Ning Li. Research in Astronomy and Astrophysics, 2025, No. 8, pp. 167-179 (13 pages)
Lunar wrinkle ridges are an important stress geological structure on the Moon, which reflect the stress state and geological activity on the Moon. They provide important insights into the evolution of the Moon and are key factors influencing future lunar activity, such as the choice of landing sites. However, automatic extraction of lunar wrinkle ridges is a challenging task due to their complex morphology and ambiguous features. Traditional manual extraction methods are time-consuming and labor-intensive. To achieve automated and detailed detection of lunar wrinkle ridges, we have constructed a lunar wrinkle ridge data set, incorporating previously unused aspect data to provide edge information, and proposed a Dual-Branch Ridge Detection Network (DBR-Net) based on deep learning technology. This method employs a dual-branch architecture and an Attention Complementary Feature Fusion module to address the issue of insufficient lunar wrinkle ridge features. Through comparisons with the results of various deep learning approaches, it is demonstrated that the proposed method exhibits superior detection performance. Furthermore, the trained model was applied to lunar mare regions, generating a distribution map of lunar mare wrinkle ridges; a significant linear relationship between the length and area of the lunar wrinkle ridges was obtained through statistical analysis, and six previously unrecorded potential lunar wrinkle ridges were detected. The proposed method upgrades the automated extraction of lunar wrinkle ridges to pixel-level precision and verifies the effectiveness of DBR-Net in lunar wrinkle ridge detection.
Keywords: Moon; methods: data analysis; planets and satellites: surfaces; techniques: image processing
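The dual-branch design summarised above feeds DEM and aspect rasters through separate encoders and merges them with an attention-based fusion module before predicting a ridge mask. The PyTorch sketch below shows one generic way to wire such a fusion (channel attention over concatenated features); the actual DBR-Net architecture and its Attention Complementary Feature Fusion module are not detailed in the abstract, so this is an assumed, simplified stand-in.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU())

class DualBranchRidgeNet(nn.Module):
    """Toy dual-branch segmenter: one branch for DEM, one for aspect, fused by channel attention."""
    def __init__(self, channels=16):
        super().__init__()
        self.dem_branch = conv_block(1, channels)
        self.aspect_branch = conv_block(1, channels)
        self.attention = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                       nn.Conv2d(2 * channels, 2 * channels, 1),
                                       nn.Sigmoid())
        self.head = nn.Conv2d(2 * channels, 1, 1)        # per-pixel ridge / background logit

    def forward(self, dem, aspect):
        fused = torch.cat([self.dem_branch(dem), self.aspect_branch(aspect)], dim=1)
        fused = fused * self.attention(fused)             # reweight channels (assumed fusion scheme)
        return self.head(fused)

# Hypothetical mini-batch of 64x64 DEM and aspect tiles
dem, aspect = torch.randn(4, 1, 64, 64), torch.randn(4, 1, 64, 64)
logits = DualBranchRidgeNet()(dem, aspect)
print(logits.shape)      # (4, 1, 64, 64): pixel-level wrinkle-ridge predictions
```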
Noise Reduction Method for Radio Astronomy Single Station Observation Based on Wavelet Transform and Mathematical Morphology
11
Authors: Ming-Wei Qin, Rui Tang, Ying-Hui Zhou, Chang-Jun Lan, Wen-Hao Fu, Huan Wang, Bao-Lin Hou, Zamri Bin Zainal Abidin, Jin-Song Ping, Wen-Jun Yang, Liang Dong. Research in Astronomy and Astrophysics, 2025, No. 7, pp. 148-166 (19 pages)
The 21 cm radiation of neutral hydrogen provides crucial information for studying the early universe and its evolution. To advance this research, countries have made significant investments in constructing large low-frequency radio telescope arrays, such as the Low Frequency Array and the Square Kilometre Array Phase 1 Low Frequency. These instruments are pivotal for radio astronomy research. However, challenges such as ionospheric plasma interference, ambient radio noise, and instrument-related effects have become increasingly prominent, posing major obstacles in cosmology research. To address these issues, this paper proposes an efficient signal processing method that combines wavelet transform and mathematical morphology. The method involves the following steps. Background subtraction: background interference in radio observation signals is eliminated. Wavelet transform: the signal, after removing background noise, undergoes a two-dimensional discrete wavelet transform, and threshold processing is then applied to the wavelet coefficients to effectively remove interference components. Wavelet inversion: the processed signal is reconstructed using wavelet inversion. Mathematical morphology: the reconstructed signal is further optimized using mathematical morphology to refine the results. Experimental verification was conducted using solar observation data from the Xinjiang Observatory and the Yunnan Observatory. The results demonstrate that this method successfully removes interference signals while preserving useful signals, thus improving the accuracy of radio astronomy observations and reducing the impact of radio frequency interference.
Keywords: methods: data analysis; techniques: image processing; techniques: spectroscopic; Sun: radio radiation; methods: numerical
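The wavelet-thresholding and morphological-refinement steps summarised above can be sketched with standard libraries (PyWavelets and SciPy). The example below applies soft thresholding to the detail coefficients of a 2D DWT and then a grey-scale morphological opening; the wavelet family, decomposition level, threshold rule and structuring-element size are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np
import pywt
from scipy import ndimage

rng = np.random.default_rng(0)
# Synthetic dynamic spectrum: a smooth solar-like drifting feature plus noise and RFI lines
freq, time = np.meshgrid(np.linspace(0, 1, 256), np.linspace(0, 1, 256), indexing="ij")
clean = np.exp(-((freq - 0.5 - 0.2 * time) ** 2) / 0.01)
observed = clean + 0.3 * rng.normal(size=clean.shape)
observed[::37, :] += 2.0                               # narrow-band interference lines

# 1) Background subtraction (here: remove the per-channel median)
signal = observed - np.median(observed, axis=1, keepdims=True)

# 2) 2-D discrete wavelet transform + soft thresholding of the detail coefficients
coeffs = pywt.wavedec2(signal, "db4", level=3)
sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745     # noise estimate from the finest diagonal band
thresh = sigma * np.sqrt(2 * np.log(signal.size))      # universal threshold (an assumed rule)
coeffs = [coeffs[0]] + [tuple(pywt.threshold(c, thresh, mode="soft") for c in level)
                        for level in coeffs[1:]]

# 3) Wavelet inversion, then 4) morphological opening to suppress residual spikes
denoised = pywt.waverec2(coeffs, "db4")[:signal.shape[0], :signal.shape[1]]  # crop any padding
refined = ndimage.grey_opening(denoised, size=(3, 3))
print(refined.shape)
```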
A Study of EM Algorithm as an Imputation Method: A Model-Based Simulation Study with Application to a Synthetic Compositional Data
12
Authors: Yisa Adeniyi Abolade, Yichuan Zhao. Open Journal of Modelling and Simulation, 2024, No. 2, pp. 33-42 (10 pages)
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data or sums to a constant, like 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a commonly used statistical modeling technique used in various applications to find relationships between variables of interest. When estimating linear regression parameters which are useful for things like future prediction and partial effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations including missing data. The EM algorithm repeatedly finds the best estimates of parameters in statistical models that depend on variables or data that have not been observed. This is called maximum likelihood or maximum a posteriori (MAP). Using the present estimate as input, the expectation (E) step constructs a log-likelihood function. Finding the parameters that maximize the anticipated log-likelihood, as determined in the E step, is the job of the maximization (M) phase. This study looked at how well the EM algorithm worked on a made-up compositional dataset with missing observations. It used both the robust least square version and ordinary least square regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
Keywords: compositional data; linear regression model; least square method; robust least square method; synthetic data; Aitchison distance; maximum likelihood estimation; expectation-maximization algorithm; k-nearest neighbor and mean imputation
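The E-step/M-step cycle the abstract describes can be made concrete with a small EM routine for a multivariate normal model with values missing at random: the E-step fills each missing entry with its conditional mean given the observed entries (and accumulates the conditional covariance), and the M-step re-estimates the mean and covariance. The sketch below uses synthetic data rather than the paper's compositional dataset and omits the log-ratio transforms that compositional data normally require.

```python
import numpy as np

def em_impute(X, n_iter=50):
    """EM for a multivariate normal with missing values (NaN); returns completed data."""
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[miss] = np.take(col_means, np.where(miss)[1])      # initialise with column means
    n, p = X.shape
    mu, cov = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        cov_extra = np.zeros((p, p))
        # E-step: replace missing entries by their conditional expectation
        for i in range(n):
            m = miss[i]
            if not m.any():
                continue
            o = ~m
            c_oo_inv = np.linalg.inv(cov[np.ix_(o, o)])
            c_mo = cov[np.ix_(m, o)]
            X[i, m] = mu[m] + c_mo @ c_oo_inv @ (X[i, o] - mu[o])
            cov_extra[np.ix_(m, m)] += cov[np.ix_(m, m)] - c_mo @ c_oo_inv @ c_mo.T
        # M-step: update mean and covariance (adding the conditional covariance term)
        mu = X.mean(axis=0)
        centered = X - mu
        cov = (centered.T @ centered + cov_extra) / n
    return X

# Synthetic correlated data with roughly 15% of entries knocked out
rng = np.random.default_rng(0)
true = rng.multivariate_normal([0, 1, 2], [[1, .6, .3], [.6, 1, .5], [.3, .5, 1]], size=300)
X = true.copy()
X[rng.random(X.shape) < 0.15] = np.nan
completed = em_impute(X)
print(np.abs(completed - true).mean())   # average imputation error on this toy example
```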
Test for random in electrical signals time series of CO₂ short circuit transition welding process by the method of surrogate data (Cited: 1)
13
Authors: Wang Ying, Lü Xiaoqing, Wang Lijun. China Welding (EI, CAS), 2016, No. 1, pp. 21-29 (9 pages)
This paper introduced the basic theory and algorithm of the surrogate data method, which provides a rigorous way to detect random and seemingly stochastic characteristics in a system. Gaussian data and Rossler data were used to show the availability and effectiveness of this method. According to the analysis by this method of the short-circuiting current signals obtained under the same voltage and different wire feed speeds, it is demonstrated that the electrical signal time series exhibit apparent randomness when the welding parameters do not match. However, the electrical signal time series are deterministic when a match is found. The stability of the short-circuiting transfer process can therefore be judged exactly by the method of surrogate data.
Keywords: CO₂ welding; surrogate data method; deterministic and stochastic analysis; short-circuiting transfer; stability
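The surrogate data method referred to above generates artificial series that share the linear properties (amplitude spectrum) of the measured signal but have randomised phases; if a nonlinear statistic of the original series falls outside the surrogate distribution, deterministic structure is indicated. A minimal phase-randomisation sketch follows, using a synthetic, time-asymmetric current-like signal and time-reversal asymmetry as the discriminating statistic; both choices are illustrative, not the paper's.

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Fourier-transform surrogate: keep the amplitude spectrum, randomise the phases."""
    spectrum = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, size=spectrum.size)
    phases[0] = 0.0                      # keep the DC component real
    if x.size % 2 == 0:
        phases[-1] = 0.0                 # keep the Nyquist component real
    return np.fft.irfft(np.abs(spectrum) * np.exp(1j * phases), n=x.size)

def time_reversal_asymmetry(x, lag=1):
    """A simple nonlinearity statistic; close to zero for linear Gaussian processes."""
    return np.mean((x[lag:] - x[:-lag]) ** 3)

rng = np.random.default_rng(0)
# Stand-in for a short-circuiting current signal: a sawtooth (slow rise, fast drop) plus noise
t = np.arange(4096) * 1e-3
signal = (7 * t) % 1.0 + 0.05 * rng.normal(size=t.size)

original_stat = time_reversal_asymmetry(signal)
surrogate_stats = [time_reversal_asymmetry(phase_randomized_surrogate(signal, rng))
                   for _ in range(99)]
lo, hi = np.percentile(surrogate_stats, [2.5, 97.5])
print("deterministic structure indicated:", not (lo <= original_stat <= hi))
```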
STUDY ON THE ADJOINT METHOD IN DATA ASSIMILATION AND THE RELATED PROBLEMS (Cited: 8)
14
Authors: Lü Xianqing, Wu Ziku, Gu Yi, Tian Jiwei. Applied Mathematics and Mechanics (EI, CSCD, Peking University core journal), 2004, No. 6, pp. 581-590 (10 pages)
It is not reasonable that one can only use the adjoint of the model in data assimilation. A simulated numerical experiment shows that, for the tidal model, the result of the adjoint of the equation is almost the same as that of the adjoint of the model: the averaged absolute difference of the amplitude between observations and simulation is less than 5.0 cm and that of the phase lag is less than 5.0°. Both results are in good agreement with the observed M2 tide in the Bohai Sea and the Yellow Sea. For comparison, traditional methods have also been used to simulate the M2 tide in the Bohai Sea and the Yellow Sea: initial guess values of the boundary conditions are given first and then adjusted to obtain simulated results as close as possible to the observations. As the boundary conditions contain 72 values, which values should be adjusted and how to adjust them can only be partially resolved by adjusting them many times, and satisfactory results are hard to obtain even with enormous effort. Here, the treatment of the open boundary conditions is automated; the method is unique and superior to the traditional methods. It is emphasized that if the adjoint of the equation is used, tedious and complicated mathematical deduction can be avoided. Therefore the adjoint of the equation deserves much attention.
Keywords: data assimilation; variational analysis; adjoint method; tide; open boundary conditions
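The adjoint technique at the heart of this paper computes the gradient of a misfit cost function with a single backward sweep of the adjoint model, which is what makes variational adjustment of many control variables (here, 72 open-boundary values) tractable. The toy example below shows the idea for a small linear dynamical model, recovering the gradient of the cost with respect to the initial state and checking it against a finite difference; it is a generic illustration, not the tidal model of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_steps = 4, 20
M = np.eye(n) + 0.01 * rng.normal(size=(n, n))   # toy linear forward model x_{k+1} = M x_k
H = np.array([[1.0, 0, 0, 0], [0, 0, 1.0, 0]])   # observe two of the four state components

def forward(x0):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(M @ xs[-1])
    return xs

x_true = rng.normal(size=n)
obs = [H @ x + 0.01 * rng.normal(size=2) for x in forward(x_true)]

def cost_and_adjoint_gradient(x0):
    """J(x0) = 1/2 * sum_k ||H x_k - y_k||^2; gradient obtained by a backward adjoint sweep."""
    xs = forward(x0)
    residuals = [H @ x - y for x, y in zip(xs, obs)]
    J = 0.5 * sum(r @ r for r in residuals)
    lam = H.T @ residuals[-1]                    # adjoint variable at the final step
    for k in range(n_steps - 1, -1, -1):
        lam = M.T @ lam + H.T @ residuals[k]     # backward (adjoint) recursion
    return J, lam                                # lam equals dJ/dx0

# Check the adjoint gradient against a finite-difference estimate
x0 = np.zeros(n)
J, grad = cost_and_adjoint_gradient(x0)
eps, e0 = 1e-6, np.eye(n)[0]
fd = (cost_and_adjoint_gradient(x0 + eps * e0)[0] - J) / eps
print(grad[0], fd)                               # the two values should agree closely
```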
Research on Component Law of Chinese Patent Medicine for Anti-influenza and Development of New Recipes for Anti-influenza by Unsupervised Data Mining Methods (Cited: 17)
15
Authors: Tang Shihuan, Chen Jianxin, Li Geng, Wu Hongwei, Chen Chang, Zhang Na, Gao Na, Yang Hongjun, Huang Luqi. Journal of Traditional Chinese Medicine (SCIE, CAS, CSCD), 2010, No. 4, pp. 288-293 (6 pages)
Objective: To analyze the component law of Chinese patent medicines for anti-influenza and develop new prescriptions for anti-influenza by unsupervised data mining methods. Methods: Chinese patent medicine recipes for anti-influenza were collected and recorded in a database, and then the correlation coefficients between herbs, core combinations of herbs and new prescriptions were analyzed by using modified mutual information, complex system entropy clustering and unsupervised hierarchical clustering, respectively. Results: Based on the analysis of 126 Chinese patent medicine recipes, the frequency of each herb's occurrence in these recipes, 54 frequently-used herb pairs and 34 core combinations were determined, and 4 new recipes for influenza were developed. Conclusion: Unsupervised data mining methods are able to mine the component law quickly and develop new prescriptions.
Keywords: influenza; unsupervised data mining methods; swine influenza; new prescription discovery
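The first stage of the analysis described above, counting how often individual herbs and herb pairs occur across the collected recipes, is straightforward to illustrate. The sketch below does that with collections.Counter on a tiny made-up recipe list; the modified mutual information and entropy-clustering steps the authors use for core combinations and new prescriptions are not reproduced.

```python
from collections import Counter
from itertools import combinations

# Hypothetical anti-influenza recipes, each a set of herb names (illustrative only)
recipes = [
    {"jinyinhua", "lianqiao", "bohe", "jiegeng"},
    {"jinyinhua", "lianqiao", "niubangzi", "gancao"},
    {"mahuang", "xingren", "shigao", "gancao"},
    {"jinyinhua", "lianqiao", "jingjie", "bohe"},
]

herb_freq = Counter(h for r in recipes for h in r)
pair_freq = Counter(frozenset(p) for r in recipes for p in combinations(sorted(r), 2))

print(herb_freq.most_common(3))                                 # most frequently used single herbs
print([(tuple(p), c) for p, c in pair_freq.most_common(3)])     # most frequent herb pairs
```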
Application of a Bayesian method to data-poor stock assessment by using Indian Ocean albacore (Thunnus alalunga) stock assessment as an example (Cited: 15)
16
Authors: GUAN Wenjiang, TANG Lin, ZHU Jiangfeng, TIAN Siquan, XU Liuxiong. Acta Oceanologica Sinica (SCIE, CAS, CSCD), 2016, No. 2, pp. 117-125 (9 pages)
It is widely recognized that assessments of the status of data-poor fish stocks are challenging and that Bayesian analysis is one of the methods which can be used to improve the reliability of stock assessments in data-poor situations by borrowing strength from prior information deduced from species with good-quality data or other known information. Because considerable uncertainty remains in the stock assessment of albacore tuna (Thunnus alalunga) in the Indian Ocean due to the limited and low-quality data, we investigate the advantages of a Bayesian method in data-poor stock assessment by using the Indian Ocean albacore stock assessment as an example. Eight Bayesian biomass dynamics models with different prior assumptions and catch data series were developed to assess the stock. The results show that (1) the rationality of the choice of catch data series and of the assumptions on parameters can be enhanced by analyzing the posterior distribution of the parameters; (2) the reliability of the stock assessment can be improved by using demographic methods to construct a prior for the intrinsic rate of increase (r). Compared with traditional statistical methods, incorporating any available knowledge into informative priors and analyzing the posterior distribution within a Bayesian framework makes use of more information to improve the rationality of parameter estimation and the reliability of the stock assessment in data-poor situations, so we suggest that the Bayesian method should be an alternative method applied in data-poor stock assessments, such as that of Indian Ocean albacore.
Keywords: data-poor stock assessment; Bayesian method; catch data series; demographic method; Indian Ocean; Thunnus alalunga
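Bayesian biomass dynamics models of the kind mentioned above are typically built on a Schaefer surplus-production equation, B_{t+1} = B_t + r·B_t·(1 − B_t/K) − C_t, with an informative prior on the intrinsic rate of increase r and a likelihood linking CPUE to biomass. The sketch below evaluates a coarse grid posterior for (r, K) on synthetic catch and CPUE series; the priors, catchability handling and data are all illustrative assumptions, not the Indian Ocean albacore assessment.

```python
import numpy as np

def schaefer_biomass(r, K, catches, b0_frac=0.9):
    """Project biomass forward with the Schaefer surplus-production model."""
    B = [b0_frac * K]
    for C in catches:
        B.append(max(B[-1] + r * B[-1] * (1 - B[-1] / K) - C, 1e-6))
    return np.array(B[:-1])

rng = np.random.default_rng(0)
catches = np.linspace(5, 25, 30)                       # synthetic catch series
true_B = schaefer_biomass(0.3, 400.0, catches)
cpue = 0.01 * true_B * np.exp(0.1 * rng.normal(size=true_B.size))   # lognormal observation noise

r_grid = np.linspace(0.05, 0.8, 60)
K_grid = np.linspace(150, 800, 60)
log_post = np.full((r_grid.size, K_grid.size), -np.inf)
for i, r in enumerate(r_grid):
    for j, K in enumerate(K_grid):
        B = schaefer_biomass(r, K, catches)
        q = np.exp(np.mean(np.log(cpue) - np.log(B)))  # analytic estimate of catchability q
        resid = np.log(cpue) - np.log(q * B)
        loglik = -0.5 * np.sum(resid ** 2) / 0.1 ** 2
        logprior = -0.5 * ((r - 0.3) / 0.15) ** 2      # informative prior on r (e.g. demographic)
        log_post[i, j] = loglik + logprior

post = np.exp(log_post - log_post.max())
post /= post.sum()
r_mean = (post.sum(axis=1) * r_grid).sum()
print("posterior mean of r:", round(r_mean, 3))
```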
An improved data space inversion method to predict reservoir state fields via observed production data (Cited: 2)
17
Authors: Deng Liu, Xiang Rao, Hui Zhao, Yun-Feng Xu, Ru-Xiang Gong. Petroleum Science (SCIE, CAS, CSCD), 2021, No. 4, pp. 1127-1142 (16 pages)
A data-space inversion (DSI) method has recently been proposed and successfully applied to the history matching and production prediction of reservoirs. Based on Bayesian theory, DSI can directly and effectively obtain good posterior flow predictions without inversion of the geological parameters of the reservoir model. This paper presents an improved DSI method to quickly predict reservoir state fields (e.g., saturation and pressure profiles) from observed production data. Firstly, a large number of production curves and state data are generated by reservoir model simulation to expand the data space of the original DSI. Then, efficient history matching only on the observed production data is carried out via the original DSI to obtain related parameters which reflect the weight of the real reservoir model relative to the prior reservoir models. Finally, those parameters are used to predict the oil saturation and pressure profiles of the real reservoir model by combining large amounts of state data of the prior reservoir models. Two examples, including a conventional heterogeneous reservoir and an unconventional fractured reservoir, are implemented to test the performance of this improved DSI method in predicting saturation and pressure profiles. Besides, this method is also tested on a real field, and the obtained results show the high computational efficiency and high accuracy of the practical application of this method.
Keywords: fossil fuels; oil and gas reservoirs; reservoir state fields; production data; data inversion method
Comparative evaluation of data mining methods in predicting the water vapor permeability of cement-based materials (Cited: 1)
18
Authors: Xianqi Huang, Ruijin Ma, Hanyu Yang, Chi Feng, Kun Li. Building Simulation (SCIE, EI, CSCD), 2023, No. 6, pp. 853-867 (15 pages)
Water vapor permeability of building materials is a crucial parameter for analysing and optimizing the hygrothermal performance of building envelopes and built environments. Its measurement is accurate but time-consuming, while data mining methods have the potential to predict water vapor permeability efficiently. In this study, six data mining methods, namely support vector regression (SVR), decision tree regression (DT), random forest regression (RF), K-nearest neighbor (KNN), multi-layer perceptron (MLP), and adaptive boosting regression (AdaBoost), were compared to predict the water vapor permeability of cement-based materials. A total of 143 datasets of material properties were collected to build prediction models, and five materials were experimentally determined for model validation. The results show that RF has excellent generalization, stability, and precision. AdaBoost has great generalization and precision, only slightly inferior to the former, and its stability is excellent. DT has good precision and acceptable generalization, but its stability is poor. SVR and KNN have superior stability, but their generalization and precision are inadequate. MLP lacks generalization, and its stability and precision are unacceptable. In short, RF has the best comprehensive performance, demonstrated by a limited prediction deviation of 26.3% from the experimental results, better than AdaBoost (38.0%) and DT (38.3%) and far better than the other remaining methods. It is also found that data mining methods provide better predictions when cement-based materials' water vapor permeability is high.
Keywords: data mining method; cement-based material; water vapor permeability; cross-validation; experimental determination
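A comparison of the six regressors named above can be set up in a few lines with scikit-learn's cross-validation utilities. The sketch below scores each model by cross-validated mean absolute error on synthetic data standing in for the 143 collected material-property records; hyperparameters are library defaults, whereas the study tuned and validated the models against five measured materials.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor, AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: material properties (e.g., density, porosity, w/c ratio) -> permeability
rng = np.random.default_rng(0)
X = rng.uniform(size=(143, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] ** 2 + 0.5 * X[:, 2] * X[:, 3] + 0.1 * rng.normal(size=143)

models = {
    "RF": RandomForestRegressor(random_state=0),
    "AdaBoost": AdaBoostRegressor(random_state=0),
    "DT": DecisionTreeRegressor(random_state=0),
    "SVR": make_pipeline(StandardScaler(), SVR()),
    "KNN": make_pipeline(StandardScaler(), KNeighborsRegressor()),
    "MLP": make_pipeline(StandardScaler(), MLPRegressor(max_iter=2000, random_state=0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
    print(f"{name:8s} MAE = {-scores.mean():.3f} +/- {scores.std():.3f}")
```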
Power Data Preprocessing Method of Mountain Wind Farm Based on POT-DBSCAN (Cited: 1)
19
Authors: Anfeng Zhu, Zhao Xiao, Qiancheng Zhao. Energy Engineering (EI), 2021, No. 3, pp. 549-563 (15 pages)
Due to the frequent changes of wind speed and wind direction, the accuracy of wind turbine (WT) power prediction using traditional data preprocessing methods is low. This paper proposes a data preprocessing method which combines POT with DBSCAN (POT-DBSCAN) to improve the prediction efficiency of the wind power prediction model. Firstly, according to the data of the WT in the normal operation condition, the power prediction model of the WT is established based on the Particle Swarm Optimization (PSO) algorithm combined with the BP Neural Network (PSO-BP). Secondly, the wind power data obtained from the supervisory control and data acquisition (SCADA) system are preprocessed by the POT-DBSCAN method. Then, the power prediction of the preprocessed data is carried out by the PSO-BP model. Finally, the necessity of preprocessing is verified by the indexes. This case analysis shows that the prediction result with POT-DBSCAN preprocessing is better than that with the Quartile method. Therefore, the accuracy of the data and of the prediction model can be improved by using this method.
Keywords: wind turbine; SCADA data; data preprocessing method; power prediction
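The DBSCAN half of the POT-DBSCAN preprocessing, flagging wind speed/power pairs that do not belong to the dense normal-operation power curve, can be sketched with scikit-learn as below. The synthetic SCADA-like data, the scaling and the eps/min_samples values are illustrative, and the peaks-over-threshold (POT) segmentation and PSO-BP prediction model from the paper are not included.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

# Synthetic SCADA-like records: an idealised power curve plus scattered anomalies
rng = np.random.default_rng(0)
wind = rng.uniform(3, 20, 2000)
power = np.clip(0.6 * np.minimum(wind, 12.0) ** 3, 0, 1000) + 20 * rng.normal(size=wind.size)
anom_idx = rng.choice(wind.size, 100, replace=False)
power[anom_idx] *= rng.uniform(0.0, 0.5, anom_idx.size)      # curtailment / sensor faults

X = StandardScaler().fit_transform(np.column_stack([wind, power]))
labels = DBSCAN(eps=0.12, min_samples=15).fit_predict(X)      # label -1 marks low-density outliers

clean_wind, clean_power = wind[labels != -1], power[labels != -1]
print(f"kept {clean_wind.size} of {wind.size} records after DBSCAN cleaning")
```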
Fuzzy norm method for evaluating random vibration of airborne platform from limited PSD data (Cited: 7)
20
Authors: Wang Zhongyu, Wang Yanqing, Wang Qian, Zhang Jianjun. Chinese Journal of Aeronautics (SCIE, EI, CAS, CSCD), 2014, No. 6, pp. 1442-1450 (9 pages)
For random vibration of an airborne platform, accurate evaluation is a key indicator for ensuring normal operation of airborne equipment in flight. However, only limited power spectral density (PSD) data can be obtained at the stage of flight test, so conventional evaluation methods cannot be employed when the distribution characteristics and prior information are unknown. In this paper, the fuzzy norm method (FNM) is proposed, which combines the advantages of fuzzy theory and norm theory. The proposed method can deeply extract system information from limited data without taking the probability distribution into account. Firstly, the FNM is employed to evaluate the variable interval and expanded uncertainty from limited PSD data, and the performance of FNM is demonstrated by the confidence level, reliability and computing accuracy of the expanded uncertainty. In addition, the optimal fuzzy parameters are discussed to meet the requirements of aviation standards and metrological practice. Finally, computer simulation is used to prove the adaptability of FNM. Compared with statistical methods, FNM is superior for evaluating expanded uncertainty from limited data. The results show that the reliability of calculation and evaluation is better than 95%.
Keywords: expanded uncertainty; fuzzy norm method; limited PSD data; random vibration; reliability; variable interval