Improving the computational efficiency of multi-physics simulation and constructing a real-time online simulation method is an important way to realise the virtual-real fusion of entities and data of power equipment with digital twins. In this paper, a data-driven fast calculation method for the temperature field of the resin-impregnated paper (RIP) bushing used on the valve side of converter transformers is proposed, which combines data dimensionality reduction technology with a surrogate model. After applying the finite element algorithm to obtain the temperature field distribution of the RIP bushing under different operating conditions as the input dataset, the proper orthogonal decomposition (POD) algorithm is adopted to reduce the order and obtain a low-dimensional projection of the temperature data. On this basis, a surrogate model is used to construct the mapping relationship between the sensor monitoring data and the low-dimensional projection, so that fast calculation and reconstruction of the temperature field distribution can be achieved. The results show that this method can effectively and quickly calculate the overall temperature field distribution of the RIP bushing: the maximum relative error and the average relative error are less than 4.5% and 0.25%, respectively, and the calculation speed is at the millisecond level, meeting the needs of the digitalisation of power equipment.
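The POD step described here amounts to an SVD of a snapshot matrix. A minimal sketch, using a synthetic low-rank matrix in place of the paper's FEM temperature fields (all sizes and the energy threshold are illustrative assumptions, not the study's values):

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one temperature field
# (n_nodes FEM values) computed under one operating condition.
rng = np.random.default_rng(0)
n_nodes, n_cases = 500, 40
snapshots = rng.normal(size=(n_nodes, 5)) @ rng.normal(size=(5, n_cases))

# POD via SVD of the centred snapshots: left singular vectors = spatial modes.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

# Keep r modes capturing ~99.9% of the snapshot energy.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.999)) + 1

# Low-dimensional projection (modal coefficients) of each snapshot;
# a surrogate model would map sensor readings to these coefficients.
coeffs = U[:, :r].T @ (snapshots - mean_field)      # shape (r, n_cases)

# Reconstruction of the full field from the reduced basis.
recon = mean_field + U[:, :r] @ coeffs
print(r, float(np.max(np.abs(recon - snapshots))))
```

Reconstruction is then a cheap matrix-vector product, which is what makes millisecond-level online evaluation plausible once the surrogate supplies the coefficients.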
Formation water samples in oil and gas fields may be polluted during testing, trial production, collection, storage, transportation and analysis, so that the measured properties do not truly reflect the formation water. This paper discusses identification methods and a data credibility evaluation method for formation water in oil and gas fields of petroliferous basins within China. The results of the study show that: (1) The identification methods for formation water include basic single-factor methods based on physical characteristics, water composition characteristics, water type characteristics and characteristic coefficients, as well as a comprehensive data credibility evaluation method proposed on this basis, which mainly relies on correlated analysis of the sodium chloride coefficient and the desulfurization coefficient combined with geological background evaluation. (2) The basic identification methods enable preliminary identification of hydrochemical data and preliminary screening of data on site; the proposed comprehensive method performs the evaluation by classifying CaCl2-type water into types A-I to A-VI and NaHCO3-type water into types B-I to B-IV, so that researchers can evaluate the credibility of hydrochemical data in depth and analyze the influencing factors. (3) When the basic methods are used, formation water containing anions such as CO3^2-, OH- and NO3-, or formation water whose sodium chloride coefficient and desulfurization coefficient do not match the geological setting, has been invaded by surface water or polluted by working fluid. (4) When the comprehensive method is used, although A-I, A-II, B-I and B-II formation water is believed to have high credibility, its data credibility can be evaluated effectively and accurately only in combination with geological setting analysis of factors such as formation environment, sampling conditions, condensate water, acid fluid, leaching of ancient weathering crusts, and ancient atmospheric fresh water.
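The two coefficients the evaluation leans on are standard hydrochemical ratios computed from milliequivalent (meq/L) concentrations. A minimal sketch, with illustrative ion concentrations that are not from the study's data (the interpretation comments reflect the usual Sulin-style reading, stated here as general background rather than this paper's thresholds):

```python
# Equivalent weights in mg per meq, from ion charge and atomic mass.
EQ_WT = {"Na": 22.99, "Cl": 35.45, "SO4": 48.03}  # SO4: 96.06 / 2

def meq(conc_mg_l, ion):
    """Convert a concentration in mg/L to meq/L."""
    return conc_mg_l / EQ_WT[ion]

def sodium_chloride_coefficient(na_mg_l, cl_mg_l):
    """rNa+/rCl-: values below 1 are commonly read as well-sealed,
    concentrated (CaCl2-type) formation water."""
    return meq(na_mg_l, "Na") / meq(cl_mg_l, "Cl")

def desulfurization_coefficient(so4_mg_l, cl_mg_l):
    """100 * rSO4^2- / rCl-: low values suggest a reducing, well-sealed
    setting; high values can flag surface-water invasion."""
    return 100.0 * meq(so4_mg_l, "SO4") / meq(cl_mg_l, "Cl")

# Example: a hypothetical CaCl2-type brine (concentrations in mg/L).
scc = sodium_chloride_coefficient(60000.0, 110000.0)
dsc = desulfurization_coefficient(300.0, 110000.0)
print(round(scc, 3), round(dsc, 3))
```

Coefficients like these are what get cross-checked against the geological setting in step (3) above.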
Refined 3D modeling of mine slopes is pivotal for precise prediction of geological hazards. Aiming at the inadequacy of existing single modeling methods in comprehensively representing the overall and localized characteristics of mining slopes, this study introduces a new method that fuses model data from unmanned aerial vehicle (UAV) tilt photogrammetry and 3D laser scanning through a data alignment algorithm based on control points. First, the mini-batch K-Medoids algorithm is used to cluster the point cloud data from ground 3D laser scanning. Then, the elbow rule is applied to determine the optimal cluster number (K0), and the feature points are extracted. Next, the nearest neighbor point algorithm is employed to match the feature points obtained from UAV tilt photogrammetry, and the internal point coordinates are adjusted through distance-weighted averaging to construct a 3D model. Finally, in an engineering case study, the K0 value is determined to be 8, with a matching accuracy between the two model datasets ranging from 0.0669 to 1.0373 mm. Compared with the modeling method using the K-Medoids clustering algorithm alone, the new method significantly enhances the computational efficiency, the accuracy of selecting the optimal number of feature points in 3D laser scanning, and the precision of the 3D model derived from UAV tilt photogrammetry. This method provides a research foundation for constructing mine slope models.
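The elbow rule mentioned above can be sketched as picking the bend of a decreasing clustering-cost curve; one common heuristic is the point farthest from the chord joining the curve's endpoints. The cost values below are invented for illustration and are not the study's clustering results:

```python
import numpy as np

def elbow_index(costs):
    """Pick the elbow of a decreasing cost curve: the point farthest from
    the chord joining its first and last values (a common heuristic)."""
    costs = np.asarray(costs, dtype=float)
    k = np.arange(len(costs))
    # Normalise both axes so distances are comparable.
    x = (k - k[0]) / (k[-1] - k[0])
    y = (costs - costs[-1]) / (costs[0] - costs[-1])
    # Distance of each point to the line through (0, 1) and (1, 0).
    d = np.abs(x + y - 1.0) / np.sqrt(2.0)
    return int(np.argmax(d))

# Hypothetical within-cluster costs for K = 1..12; the curve bends
# where adding more clusters stops paying off.
costs = [100, 85, 70, 55, 40, 25, 15, 10, 9.5, 9.2, 9.1, 9.0]
best_k = elbow_index(costs) + 1   # convert 0-based index back to K
print(best_k)
```

In practice the costs would come from running mini-batch K-Medoids at each candidate K on the scanned point cloud.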
To address the problem of real-time processing of ultra-wide-bandwidth pulsar baseband data, we designed and implemented a pulsar baseband data processing algorithm (PSRDP) based on GPU parallel computing technology. PSRDP can perform operations such as baseband data unpacking, channel separation, coherent dedispersion, Stokes detection, phase and folding period prediction, and folding integration on GPU clusters. We tested the algorithm using the J0437-4715 pulsar baseband data generated by the CASPSR and Medusa backends of the Parkes telescope, and the J0332+5434 pulsar baseband data generated by the self-developed backend of the Nan Shan Radio Telescope, and obtained the pulse profiles of each baseband dataset. Experimental analysis shows that the pulse profiles generated by the PSRDP algorithm are essentially consistent with the processing results of the Digital Signal Processing Software for Pulsar Astronomy (DSPSR), which verifies the effectiveness of PSRDP. Furthermore, using the same baseband data, we compared the processing speed of PSRDP with that of DSPSR, and the results showed that PSRDP was not slower than DSPSR. The theoretical and technical experience gained from this research lays a foundation for the real-time processing of ultra-wide-bandwidth pulsar baseband data from the QTT (Qi Tai radio Telescope).
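Coherent dedispersion, one of the PSRDP stages, can be sketched as dividing out the interstellar dispersion chirp in the Fourier domain. This is a generic single-channel sketch; the centre frequency, bandwidth, and DM are illustrative and not tied to PSRDP's actual implementation:

```python
import numpy as np

D_CONST = 4.148808e3      # dispersion constant, s * MHz^2 * pc^-1 * cm^3
F_C, BW, DM = 1400.0, 16.0, 30.0   # centre freq (MHz), bandwidth (MHz), pc cm^-3
N = 1 << 14

def chirp(n):
    """Phase rotation the cold interstellar plasma applies to each Fourier
    component at baseband offset f from the channel centre (f in MHz; the
    factor 1e6 converts the s*MHz product to a dimensionless phase)."""
    f = np.fft.fftfreq(n, d=1.0 / BW)
    phase = 2e6 * np.pi * D_CONST * DM * f**2 / (F_C**2 * (F_C + f))
    return np.exp(1j * phase)

def dedisperse(voltages):
    """Undo dispersion by dividing out the chirp in the Fourier domain."""
    return np.fft.ifft(np.fft.fft(voltages) / chirp(len(voltages)))

# Simulate: a sharp voltage impulse smeared by dispersion, then recovered.
pulse = np.zeros(N, dtype=complex)
pulse[N // 2] = 1.0
dispersed = np.fft.ifft(np.fft.fft(pulse) * chirp(N))
recovered = dedisperse(dispersed)
print(int(np.argmax(np.abs(recovered))))
```

The GPU version of this stage is essentially the same multiply in the frequency domain, batched across channels, which is why it parallelises well.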
This paper introduces MultiPHydro, an in-house computational solver developed for simulating hydrodynamic and multiphase fluid-body interaction problems, with a specialized focus on multiphase flow dynamics. The solver employs the boundary data immersion method (BDIM) as its core numerical framework for handling fluid-solid interfaces. We briefly outline the governing equations and physical models integrated within MultiPHydro, including weakly compressible flows, cavitation modeling, and the volume of fluid (VOF) method with piecewise-linear interface reconstruction. The solver's accuracy and versatility are demonstrated through several numerical benchmarks: single-phase flow past a cylinder shows less than 10% error in vortex shedding frequency and under 4% error in hydrodynamic resistance; cavitating flows around a hydrofoil yield errors below 7% in maximum cavity length; water-entry cases exhibit under 5% error in displacement and velocity; and water-exit simulations predict cavity length within 7.2% deviation. These results confirm the solver's capability to reliably model complex fluid-body interactions across various regimes. Future developments will focus on refining the mathematical models, improving the modeling of phase-interaction mechanisms, and implementing GPU-accelerated parallel algorithms to enhance compatibility with domestically developed operating systems and deep computing units (DCUs).
To overcome the challenge of limited experimental data and improve the accuracy of empirical formulas, we propose a low-cycle fatigue (LCF) life prediction model for nickel-based superalloys using a data augmentation method. This method uses a variational autoencoder (VAE) to generate low-cycle fatigue data and form an augmented dataset. The Pearson correlation coefficient (PCC) is employed to verify the similarity of feature distributions between the original and augmented datasets. Six machine learning models, namely random forest (RF), artificial neural network (ANN), support vector machine (SVM), gradient-boosted decision tree (GBDT), eXtreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost), are used to predict the LCF life of nickel-based superalloys. Results indicate that the proposed VAE-based data augmentation method can effectively expand the dataset, and that the mean absolute error (MAE), root mean square error (RMSE), and R-squared (R^2) values achieved by the CatBoost model, 0.0242, 0.0391, and 0.9538 respectively, are superior to those of the other models. The proposed method reduces the cost and time associated with LCF experiments and accurately establishes the relationship between fatigue characteristics and the LCF life of nickel-based superalloys.
Ancient stellar observations are a valuable cultural heritage, profoundly influencing both cultural domains and modern astronomical research. Shi's Star Catalog (石氏星经), the oldest extant star catalog in China, faces controversy regarding its observational epoch. Determining this epoch via precession assumes accurate ancient coordinates and correct correspondence with contemporary stars, posing significant challenges. This study introduces a novel method using the Generalized Hough Transform to ascertain the catalog's observational epoch. This approach statistically accommodates errors in the ancient coordinates and discrepancies between ancient and modern stars, addressing limitations of prior methods. Our findings date Shi's Star Catalog to the 4th century BCE, with 2nd-century CE adjustments. In comparison, the Western tradition's oldest known catalog, the Ptolemaic Star Catalog (2nd century CE), likely derives from the Hipparchus Star Catalog (2nd century BCE). Thus, Shi's Star Catalog is identified as the world's oldest known star catalog. Beyond establishing its observation period, this study aims to consolidate and digitize these cultural artifacts.
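The voting idea behind a Hough-style epoch determination can be illustrated with a one-dimensional toy: precession shifts ecliptic longitudes at a known rate, so each (ancient, modern) star pair votes for the epoch that explains its offset, and noisy or misidentified pairs merely spread their votes. Everything below (star count, noise level, the epoch itself) is synthetic and much simpler than the paper's actual two-dimensional transform:

```python
import numpy as np

RATE = 50.29 / 3600.0            # precession in ecliptic longitude, deg/yr

rng = np.random.default_rng(2)
true_epoch = -350.0              # 4th century BCE (illustrative)
n_stars = 120
modern = rng.uniform(0.0, 360.0, n_stars)                 # J2000 longitudes, deg
noise = rng.normal(0.0, 0.5, n_stars)                     # ancient measurement error
ancient = modern - RATE * (2000.0 - true_epoch) + noise   # recorded longitudes

# Each candidate epoch collects a vote from every star pair whose offset
# it explains to within a tolerance; the histogram peaks at the true epoch.
epochs = np.arange(-800.0, 400.0, 10.0)
votes = np.array([
    np.sum(np.abs(ancient + RATE * (2000.0 - e) - modern) < 1.0)
    for e in epochs
])
print(epochs[np.argmax(votes)])
```

The statistical robustness comes from the voting itself: outliers never concentrate on a single wrong epoch.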
The clock difference PT-TAI between the ensemble pulsar timescale (PT) and International Atomic Time (TAI), derived from the International Pulsar Timing Array (IPTA) data set, shows a variation trend very similar to that of Terrestrial Time, TT(BIPMXXXX)-TAI, but PT has a larger measurement error. In this paper, we discuss smoothing PT using a combined smoothing filter and compare the results with those from other filters. The PT-TAI clock difference sequence and the first time derivative series of TT(BIPMXXXX)-TAI can be combined by this filter to yield two smooth curves tied by constraints ensuring that the latter is the derivative of the former. To demonstrate the properties of the smoothed results, the ensemble pulsar time IPTA2016 with respect to TAI published by G. Hobbs et al. and the first time derivative series of TT(BIPM2017)-TAI with quadratic polynomial terms removed are processed by the combined smoothing filter. How to correctly estimate the two smoothing coefficients is described, and the output of the filter is analyzed. The results show that the combined smoothing method efficiently removes high-frequency noise from the two input data series, and the smoothed PT-TAI data combine the long-term fractional frequency stability of the pulsar time with the frequency accuracy of the terrestrial time. Fractional frequency stability analysis indicates that both the short- and medium-interval stability of the smoothed PT-TAI are improved while its original long-term frequency stability level is kept. The combined smoothing filter is more suitable for smoothing observational pulsar timescale data than any filter that only smooths a single pulsar time series. The pulsar time smoothed by the combined smoothing filter is a combined pulsar-atomic timescale, which can also be used as terrestrial time.
BFOSC and YFOSC are the most frequently used instruments on the Xinglong 2.16 m telescope and the Lijiang 2.4 m telescope, respectively. We developed a software package named "BYSpec" (BFOSC and YFOSC Spectra Reduction Package) dedicated to automatically reducing the long-slit and echelle spectra obtained by these two instruments. The package supports bias and flat-fielding correction, order location, background subtraction, automatic wavelength calibration, and absolute flux calibration. The optimal extraction method maximizes the signal-to-noise ratio and removes most of the cosmic rays imprinted in the spectra. A comparison with 1D spectra reduced with IRAF verifies the reliability of the results. This open-source software is publicly available to the community.
Lunar wrinkle ridges are important stress-related geological structures on the Moon, reflecting its stress state and geological activity. They provide important insights into the evolution of the Moon and are key factors influencing future lunar activities, such as the choice of landing sites. However, automatic extraction of lunar wrinkle ridges is challenging due to their complex morphology and ambiguous features, and traditional manual extraction methods are time-consuming and labor-intensive. To achieve automated and detailed detection of lunar wrinkle ridges, we constructed a lunar wrinkle ridge data set, incorporating previously unused aspect data to provide edge information, and proposed a Dual-Branch Ridge Detection Network (DBR-Net) based on deep learning. This method employs a dual-branch architecture and an Attention Complementary Feature Fusion module to address the issue of insufficient lunar wrinkle ridge features. Comparisons with various deep learning approaches demonstrate that the proposed method exhibits superior detection performance. Furthermore, the trained model was applied to lunar mare regions, generating a distribution map of lunar mare wrinkle ridges; a significant linear relationship between the length and area of the wrinkle ridges was obtained through statistical analysis, and six previously unrecorded potential wrinkle ridges were detected. The proposed method upgrades the automated extraction of lunar wrinkle ridges to pixel-level precision and verifies the effectiveness of DBR-Net in lunar wrinkle ridge detection.
The 21 cm radiation of neutral hydrogen provides crucial information for studying the early universe and its evolution. To advance this research, countries have made significant investments in constructing large low-frequency radio telescope arrays, such as the Low Frequency Array and the Square Kilometre Array Phase 1 Low Frequency. These instruments are pivotal for radio astronomy research. However, challenges such as ionospheric plasma interference, ambient radio noise, and instrument-related effects have become increasingly prominent, posing major obstacles in cosmology research. To address these issues, this paper proposes an efficient signal processing method that combines the wavelet transform with mathematical morphology. The method involves the following steps: (1) background subtraction, in which background interference in the radio observation signal is eliminated; (2) wavelet transform, in which the background-subtracted signal undergoes a two-dimensional discrete wavelet transform and the wavelet coefficients are thresholded to remove interference components; (3) wavelet inversion, in which the processed signal is reconstructed; and (4) mathematical morphology, in which the reconstructed signal is further refined. Experimental verification was conducted using solar observation data from the Xinjiang Observatory and the Yunnan Observatory. The results demonstrate that this method successfully removes interference signals while preserving useful signals, improving the accuracy of radio astronomy observations and reducing the impact of radio frequency interference.
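The wavelet-threshold step can be sketched in one dimension with a hand-rolled Haar transform and soft thresholding (the paper uses a two-dimensional discrete wavelet transform; this is a simplified stand-in, and the signal and threshold below are invented):

```python
import numpy as np

def haar_1d(x):
    """One-level orthonormal 1D Haar transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def inv_haar_1d(a, d):
    """Exact inverse of haar_1d."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def soft(c, t):
    """Soft-threshold: shrink coefficients toward zero by t."""
    return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)

# A smooth 'solar' trend plus broadband noise standing in for interference.
rng = np.random.default_rng(3)
n = 1024
t_axis = np.linspace(0.0, 1.0, n)
clean = np.sin(2.0 * np.pi * 3.0 * t_axis)
noisy = clean + 0.4 * rng.normal(size=n)

a, d = haar_1d(noisy)
denoised = inv_haar_1d(a, soft(d, 0.4))   # keep trend, suppress detail noise
print(np.mean((noisy - clean) ** 2) > np.mean((denoised - clean) ** 2))
```

Morphological opening/closing would then act on the reconstructed signal to clean up residual spikes.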
Compositional data, i.e., relative information that sums to a constant such as 100%, is a crucial aspect of machine learning and related fields. The linear regression model is the most widely used statistical technique for identifying relationships between underlying variables of interest, and maximum likelihood estimation (MLE) is the method of choice for estimating its parameters, which are useful for prediction and for analyzing the partial effects of independent variables. However, data quality is a significant challenge in machine learning, especially when observations are missing, and recovering them can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested. The EM algorithm iteratively finds maximum likelihood (or maximum a posteriori, MAP) estimates of parameters in statistical models that depend on unobserved variables: using the current parameter estimate, the expectation (E) step constructs the expected log-likelihood function, and the maximization (M) step finds the parameters that maximize it. This study examined how well the EM algorithm performs on a simulated compositional dataset with missing observations, using both ordinary least squares and robust least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) imputation and mean imputation, in terms of Aitchison distances and covariance.
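The E- and M-steps described above can be sketched for a bivariate normal with x fully observed and y partly missing; the regression slope and intercept then follow from the estimated mean and covariance. The data here is simulated, not the study's compositional dataset:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 400
x = rng.normal(0.0, 1.0, n)
y = 2.0 + 1.5 * x + rng.normal(0.0, 0.5, n)
miss = rng.random(n) < 0.3               # 30% of y unobserved (MCAR)
y_obs = np.where(miss, np.nan, y)

mu = np.array([x.mean(), np.nanmean(y_obs)])
cov = np.cov(x[~miss], y_obs[~miss])     # initial guess from complete cases
for _ in range(50):
    # E-step: conditional mean and variance of each missing y given x.
    beta = cov[0, 1] / cov[0, 0]
    y_fill = np.where(miss, mu[1] + beta * (x - mu[0]), y_obs)
    v = np.where(miss, cov[1, 1] - beta * cov[0, 1], 0.0)
    # M-step: update mean and covariance from expected sufficient statistics
    # (the conditional variance v corrects the second moment of filled y).
    mu = np.array([x.mean(), y_fill.mean()])
    dx, dy = x - mu[0], y_fill - mu[1]
    cov = np.array([[np.mean(dx * dx), np.mean(dx * dy)],
                    [np.mean(dx * dy), np.mean(dy * dy + v)]])

slope = cov[0, 1] / cov[0, 0]
intercept = mu[1] - slope * mu[0]
print(round(slope, 2), round(intercept, 2))
```

Unlike single imputation with the mean, the variance correction in the M-step keeps the covariance estimate unbiased, which is the point of using EM here.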
This paper introduces the basic theory and algorithm of the surrogate data method, which provides a rigorous way to detect deterministic structure in seemingly stochastic signals. Gaussian data and Rössler data were used to show the availability and effectiveness of this method. Applying the method to short-circuiting current signals recorded at the same voltage and different wire feed speeds demonstrates that the electrical signal time series are apparently random when the welding parameters do not match, whereas they are deterministic when a match is found. The stability of the short-circuiting transfer process can thus be judged exactly by the surrogate data method.
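A standard way to generate surrogate series, phase randomization, keeps a signal's amplitude spectrum (hence its linear correlations) while destroying any deterministic structure; the original is then compared against an ensemble of such surrogates with a nonlinear statistic. A sketch, with a sine series standing in for a short-circuiting current signal:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Fourier-transform surrogate: keep the amplitude spectrum but
    randomize the phases, destroying nonlinear/deterministic structure."""
    n = len(x)
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    phases[0] = 0.0                       # keep the DC component real
    if n % 2 == 0:
        phases[-1] = 0.0                  # keep the Nyquist bin real
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n)

rng = np.random.default_rng(5)
signal = np.sin(np.linspace(0.0, 40.0 * np.pi, 1024))  # stand-in current series
surr = phase_randomized_surrogate(signal, rng)

# The surrogate preserves the power spectrum of the original by construction.
orig_power = np.abs(np.fft.rfft(signal))
surr_power = np.abs(np.fft.rfft(surr))
print(bool(np.allclose(orig_power, surr_power)))
```

If a nonlinear statistic computed on the original falls outside the surrogate ensemble's range, the null hypothesis of a linear stochastic process is rejected, i.e. the series is judged deterministic.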
It is not reasonable that only the adjoint of a model can be used in data assimilation. A simulated numerical experiment shows that, for the tidal model, the result of the adjoint of the equation is almost the same as that of the adjoint of the model: the averaged absolute difference between observed and simulated amplitude is less than 5.0 cm, and that of the phase-lag is less than 5.0°. Both results are in good agreement with the observed M2 tide in the Bohai Sea and the Yellow Sea. For comparison, traditional methods were also used to simulate the M2 tide in these seas: initial guesses of the boundary conditions are given first and then adjusted to bring the simulation as close as possible to the observations. As the boundary conditions contain 72 values, determining which values to adjust and how to adjust them can only be partially resolved through repeated trials, and satisfactory results are hard to obtain even with enormous effort. Here, the treatment of the open boundary conditions is automated. The method is unique and superior to the traditional methods. It is emphasized that using the adjoint of the equation avoids tedious and complicated mathematical deduction; the adjoint of the equation therefore deserves much attention.
Objective: To analyze the component law of Chinese patent medicines for anti-influenza and to develop new anti-influenza prescriptions by unsupervised data mining methods. Methods: Chinese patent medicine recipes for anti-influenza were collected and recorded in a database, and then the correlation coefficients between herbs, the core herb combinations, and new prescriptions were analyzed using modified mutual information, complex system entropy clustering, and unsupervised hierarchical clustering, respectively. Results: Based on the analysis of 126 Chinese patent medicine recipes, the occurrence frequency of each herb, 54 frequently used herb pairs, and 34 core combinations were determined, and 4 new recipes for influenza were developed. Conclusion: Unsupervised data mining methods can mine the component law quickly and support the development of new prescriptions.
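The herb-pair mining step can be illustrated with a toy co-occurrence score; plain pointwise mutual information is used here as a simple stand-in for the paper's modified mutual information, and the recipes and herb names are invented placeholders:

```python
import numpy as np
from itertools import combinations

# Toy recipe database: each recipe is a set of herb names (illustrative).
recipes = [
    {"jinyinhua", "lianqiao", "bohe"},
    {"jinyinhua", "lianqiao", "gancao"},
    {"mahuang", "xingren", "gancao"},
    {"jinyinhua", "lianqiao", "niubangzi"},
    {"mahuang", "guizhi", "gancao"},
]
herbs = sorted(set().union(*recipes))
n = len(recipes)

def pmi(a, b):
    """Pointwise mutual information of herbs a and b across recipes:
    high values mean the pair co-occurs more than chance predicts."""
    pa = sum(a in r for r in recipes) / n
    pb = sum(b in r for r in recipes) / n
    pab = sum(a in r and b in r for r in recipes) / n
    return np.log(pab / (pa * pb)) if pab > 0 else -np.inf

pairs = sorted(combinations(herbs, 2), key=lambda p: pmi(*p), reverse=True)
print(pairs[0])
```

Ranking pairs this way surfaces candidate "core combinations"; clustering those pairs is what assembles them into new candidate prescriptions.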
It is widely recognized that assessing the status of data-poor fish stocks is challenging, and that Bayesian analysis is one method that can improve the reliability of stock assessments in data-poor situations by borrowing strength from prior information deduced from species with good-quality data or from other known information. Because considerable uncertainty remains in the stock assessment of albacore tuna (Thunnus alalunga) in the Indian Ocean due to limited, low-quality data, we investigate the advantages of a Bayesian method in data-poor stock assessment using the Indian Ocean albacore stock as an example. Eight Bayesian biomass dynamics models with different prior assumptions and catch data series were developed to assess the stock. The results show that (1) the rationality of the choice of catch data series and of the parameter assumptions can be enhanced by analyzing the posterior distributions of the parameters, and (2) the reliability of the stock assessment can be improved by using demographic methods to construct a prior for the intrinsic rate of increase (r). Compared with traditional statistical methods, the Bayesian framework makes use of more information by incorporating any available knowledge into informative priors and by analyzing posterior distributions, thereby improving the rationality of parameter estimation and the reliability of the assessment in data-poor situations. We therefore suggest the Bayesian method as an alternative for the stock assessment of data-poor species such as Indian Ocean albacore.
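A Bayesian biomass dynamics assessment of the kind described can be sketched with a Schaefer model and a grid posterior over (r, K), with an informative lognormal prior on r standing in for the demographically derived prior. All numbers below are illustrative, and the simulated abundance index replaces real catch-per-unit-effort data:

```python
import numpy as np

rng = np.random.default_rng(6)
T, q, sigma = 25, 0.001, 0.1            # years, catchability, obs. error
r_true, k_true = 0.25, 200_000.0
catch = rng.uniform(8_000.0, 15_000.0, T)

def biomass_series(r, k):
    """Schaefer dynamics: B[t+1] = B[t] + r*B[t]*(1 - B[t]/k) - C[t]."""
    b = np.empty(T)
    b[0] = k
    for t in range(T - 1):
        b[t + 1] = max(b[t] + r * b[t] * (1.0 - b[t] / k) - catch[t], 1.0)
    return b

# Simulated abundance index with lognormal observation error.
index = q * biomass_series(r_true, k_true) * rng.lognormal(0.0, sigma, T)

r_grid = np.linspace(0.05, 0.6, 80)
k_grid = np.linspace(100_000.0, 400_000.0, 80)
logpost = np.full((80, 80), -np.inf)
for a, r in enumerate(r_grid):
    for b, k in enumerate(k_grid):
        resid = np.log(index) - np.log(q * biomass_series(r, k))
        loglik = -0.5 * np.sum(resid**2) / sigma**2
        # Informative lognormal prior on r (demographic stand-in).
        logprior = -0.5 * ((np.log(r) - np.log(0.25)) / 0.3) ** 2
        logpost[a, b] = loglik + logprior

i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
r_map, k_map = r_grid[i], k_grid[j]
print(round(float(r_map), 3), round(float(k_map), 1))
```

Inspecting the full `logpost` surface, rather than just its maximum, is what lets an analyst judge whether the catch series and priors are mutually consistent.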
A data-space inversion (DSI) method has recently been proposed and successfully applied to the history matching and production prediction of reservoirs. Based on Bayesian theory, DSI can directly and effectively obtain good posterior flow predictions without inverting the geological parameters of the reservoir model. This paper presents an improved DSI method to rapidly predict reservoir state fields (e.g., saturation and pressure profiles) from observed production data. First, a large number of production curves and state data are generated by reservoir simulation to expand the data space of the original DSI. Then, efficient history matching on only the observed production data is carried out via the original DSI to obtain parameters reflecting the weight of the real reservoir model relative to the prior reservoir models. Finally, those parameters are used to predict the oil saturation and pressure profiles of the real reservoir model by combining large amounts of state data from the prior reservoir models. Two examples, a conventional heterogeneous reservoir and an unconventional fractured reservoir, are implemented to test the performance of the improved DSI method in predicting saturation and pressure profiles. The method is also tested in a real field, and the results show the high computational efficiency and high accuracy of its practical application.
Water vapor permeability of building materials is a crucial parameter for analysing and optimizing the hygrothermal performance of building envelopes and built environments. Its measurement is accurate but time-consuming, while data mining methods have the potential to predict water vapor permeability efficiently. In this study, six data mining methods, namely support vector regression (SVR), decision tree regression (DT), random forest regression (RF), K-nearest neighbor (KNN), multi-layer perceptron (MLP), and adaptive boosting regression (AdaBoost), were compared for predicting the water vapor permeability of cement-based materials. A total of 143 datasets of material properties were collected to build the prediction models, and five materials were experimentally measured for model validation. The results show that RF has excellent generalization, stability, and precision. AdaBoost has great generalization and precision, only slightly inferior to RF, and excellent stability. DT has good precision and acceptable generalization, but poor stability. SVR and KNN have superior stability, but inadequate generalization and precision. MLP lacks generalization, and its stability and precision are unacceptable. In short, RF has the best overall performance, with a prediction deviation from the experimental results limited to 26.3%, better than AdaBoost (38.0%) and DT (38.3%) and far better than the remaining methods. It is also found that the data mining methods give better predictions when the water vapor permeability of the cement-based materials is high.
Due to frequent changes of wind speed and wind direction, the accuracy of wind turbine (WT) power prediction using traditional data preprocessing methods is low. This paper proposes a data preprocessing method that combines POT with DBSCAN (POT-DBSCAN) to improve the efficiency of the wind power prediction model. First, based on data from WTs in normal operating condition, the WT power prediction model is established using a BP neural network optimized by particle swarm optimization (PSO-BP). Second, the wind-power data obtained from the supervisory control and data acquisition (SCADA) system are preprocessed by the POT-DBSCAN method. Then, power prediction on the preprocessed data is carried out by the PSO-BP model. Finally, the necessity of preprocessing is verified by evaluation indexes. The case analysis shows that the prediction results with POT-DBSCAN preprocessing are better than those with the quartile method; the accuracy of both the data and the prediction model can therefore be improved by this method.
For random vibration of an airborne platform, accurate evaluation is a key indicator for ensuring normal operation of airborne equipment in flight. However, only limited power spectral density (PSD) data can be obtained at the flight test stage, so conventional evaluation methods cannot be employed when the distribution characteristics and prior information are unknown. In this paper, the fuzzy norm method (FNM) is proposed, which combines the advantages of fuzzy theory and norm theory. The proposed method can extract deep system information from limited data without taking the probability distribution into account. First, the FNM is employed to evaluate the variable interval and expanded uncertainty from limited PSD data, and the performance of the FNM is demonstrated in terms of confidence level, reliability, and the computing accuracy of the expanded uncertainty. In addition, the optimal fuzzy parameters are discussed to meet the requirements of aviation standards and metrological practice. Finally, computer simulation is used to prove the adaptability of the FNM. Compared with statistical methods, the FNM is superior for evaluating expanded uncertainty from limited data; the results show that the reliability of the calculation and evaluation exceeds 95%.
Funding: supported by the China Postdoctoral Science Foundation (Grant 2024M753544) and the Science and Technology Project of CSG (Grant GDKJXM2022106).
Abstract: Improving the computational efficiency of multi-physics simulation and constructing a real-time online simulation method is an important way to realise the virtual-real fusion of entities and data of power equipment with a digital twin. In this paper, a data-driven fast calculation method for the temperature field of the resin-impregnated paper (RIP) bushing used on the valve side of converter transformers is proposed, which combines data dimensionality reduction technology with a surrogate model. After applying the finite element algorithm to obtain the temperature field distribution of the RIP bushing under different operating conditions as the input dataset, the proper orthogonal decomposition (POD) algorithm is adopted to reduce the order and obtain a low-dimensional projection of the temperature data. On this basis, the surrogate model is used to construct the mapping relationship between the sensor monitoring data and the low-dimensional projection, so as to achieve fast calculation and reconstruction of the temperature field distribution. The results show that this method can effectively and quickly calculate the overall temperature field distribution of the RIP bushing. The maximum relative error and the average relative error are less than 4.5% and 0.25%, respectively. The calculation speed is at the millisecond level, meeting the needs of the digitalisation of power equipment.
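As a rough illustration of the POD-plus-surrogate pipeline described above, the sketch below reduces a set of simulated fields with a thin SVD and fits a simple linear surrogate from a few "sensor" readings to the POD coefficients. All data are synthetic, and the linear least-squares surrogate is only a stand-in for whatever regressor the paper actually uses:

```python
import numpy as np

# Hypothetical snapshot matrix: each column is one simulated temperature
# field (n_nodes values) for one operating condition.
rng = np.random.default_rng(0)
n_nodes, n_snapshots = 500, 40
snapshots = rng.normal(size=(n_nodes, n_snapshots))

# POD via thin SVD of the centered snapshots: keep the first r modes.
mean_field = snapshots.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)
r = 5
modes = U[:, :r]                            # spatial POD modes

# Low-dimensional projection (POD coefficients) of one field,
# and its reconstruction from the reduced coordinates.
field = snapshots[:, 0:1]
coeffs = modes.T @ (field - mean_field)
reconstructed = mean_field + modes @ coeffs

# Linear surrogate (stand-in for e.g. a neural network): map readings at a
# few "sensor" nodes to the POD coefficients, fitted by least squares.
sensor_idx = [10, 100, 250, 400]
X = snapshots[sensor_idx, :].T              # (n_snapshots, 4) sensor readings
Y = (modes.T @ (snapshots - mean_field)).T  # (n_snapshots, r) coefficients
W, *_ = np.linalg.lstsq(X, Y, rcond=None)
pred_coeffs = snapshots[sensor_idx, 0] @ W  # fast online step: sensors -> coeffs
```

The online step is just a small matrix product followed by the mode expansion, which is why this kind of reconstruction can run at millisecond timescales.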
Funding: supported by the PetroChina Science and Technology Project (2023ZZ0202).
Abstract: Formation water samples in oil and gas fields may be polluted during testing, trial production, collection, storage, transportation and analysis, so that the properties of the formation water are not truly reflected. This paper discusses identification methods and a data credibility evaluation method for formation water in oil and gas fields of petroliferous basins within China. The results of the study show that: (1) the identification methods for formation water include basic single-factor methods based on physical characteristics, water composition characteristics, water type characteristics and characteristic coefficients, as well as a comprehensive evaluation method of data credibility proposed on this basis, which mainly relies on correlation analysis of the sodium chloride coefficient and the desulfurization coefficient combined with evaluation of the geological background; (2) the basic identification methods enable preliminary identification of hydrochemical data and preliminary screening of data on site, while the proposed comprehensive method realizes the evaluation by classifying CaCl2-type water into types A-I to A-VI and NaHCO3-type water into types B-I to B-IV, so that researchers can evaluate the credibility of hydrochemical data in depth and analyze the influencing factors; (3) when the basic methods are used, formation water containing anions such as CO3^2-, OH- and NO3-, or formation water whose sodium chloride coefficient and desulfurization coefficient do not match the geological setting, has been invaded by surface water or polluted by working fluid; (4) when the comprehensive method is used, the data credibility of A-I, A-II, B-I and B-II formation water can be evaluated effectively and accurately only if analysis of the geological setting, covering factors such as formation environment, sampling conditions, condensate water, acid fluid, leaching of ancient weathering crust, and ancient atmospheric fresh water, is combined, although such formation water is believed to have high credibility.
Funding: funded by the National Natural Science Foundation of China (Grant Nos. 42272333 and 42277147).
Abstract: Refined 3D modeling of mine slopes is pivotal for precise prediction of geological hazards. Aiming at the inadequacy of existing single modeling methods in comprehensively representing both the overall and the localized characteristics of mining slopes, this study introduces a new method that fuses model data from unmanned aerial vehicle (UAV) tilt photogrammetry and 3D laser scanning through a data alignment algorithm based on control points. First, the mini-batch K-Medoids algorithm is utilized to cluster the point cloud data from ground 3D laser scanning. Then, the elbow rule is applied to determine the optimal cluster number (K0), and the feature points are extracted. Next, the nearest-neighbor point algorithm is employed to match the feature points obtained from UAV tilt photogrammetry, and the internal point coordinates are adjusted through the distance-weighted average to construct a 3D model. Finally, in an engineering case study, the K0 value is determined to be 8, with a matching accuracy between the two model datasets ranging from 0.0669 to 1.0373 mm. Compared with the modeling method using the K-Medoids clustering algorithm alone, the new method significantly enhances the computational efficiency, the accuracy of selecting the optimal number of feature points in 3D laser scanning, and the precision of the 3D model derived from UAV tilt photogrammetry. This method provides a research foundation for constructing mine slope models.
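The elbow rule mentioned above can be sketched independently of the clustering algorithm. A common geometric variant (an assumption here, not necessarily the paper's exact criterion) picks the K whose point on the inertia curve lies farthest from the chord joining the curve's endpoints:

```python
import numpy as np

def elbow_k(inertias, ks):
    """Pick K by the elbow rule: the point on the inertia curve farthest
    (perpendicular distance) from the straight line joining its endpoints."""
    ks = np.asarray(ks, float)
    y = np.asarray(inertias, float)
    # Normalise both axes so the distance is scale-free.
    x = (ks - ks[0]) / (ks[-1] - ks[0])
    yn = (y - y[-1]) / (y[0] - y[-1])
    # Distance from each normalised point to the line through (0,1) and (1,0).
    d = np.abs(x + yn - 1.0) / np.sqrt(2.0)
    return int(ks[np.argmax(d)])

# A synthetic inertia curve that flattens after K = 4.
ks = [2, 3, 4, 5, 6, 7, 8]
inertias = [100.0, 60.0, 30.0, 27.0, 25.0, 24.0, 23.5]
best = elbow_k(inertias, ks)
```

In practice the inertia values would come from running the clustering (e.g. K-Medoids) once per candidate K.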
Funding: supported by the National Key R&D Program of China (Nos. 2021YFC2203502 and 2022YFF0711502), the National Natural Science Foundation of China (NSFC) (12173077 and 12003062), the Tianshan Innovation Team Plan of Xinjiang Uygur Autonomous Region (2022D14020), the Tianshan Talent Project of Xinjiang Uygur Autonomous Region (2022TSYCCX0095), the Scientific Instrument Developing Project of the Chinese Academy of Sciences (grant No. PTYQ2022YZZD01), the China National Astronomical Data Center (NADC), the Operation, Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments, budgeted from the Ministry of Finance of China (MOF) and administrated by the Chinese Academy of Sciences (CAS), and the Natural Science Foundation of Xinjiang Uygur Autonomous Region (2022D01A360).
Abstract: To address the problem of real-time processing of ultra-wide-bandwidth pulsar baseband data, we designed and implemented a pulsar baseband data processing algorithm (PSRDP) based on GPU parallel computing technology. PSRDP can perform operations such as baseband data unpacking, channel separation, coherent dedispersion, Stokes detection, phase and folding period prediction, and folding integration on GPU clusters. We tested the algorithm using the J0437-4715 pulsar baseband data generated by the CASPSR and Medusa backends of the Parkes telescope, and the J0332+5434 pulsar baseband data generated by the self-developed backend of the Nan Shan Radio Telescope, and obtained the pulse profiles of each baseband data set. Experimental analysis shows that the pulse profiles generated by the PSRDP algorithm are essentially consistent with the processing results of the Digital Signal Processing Software for Pulsar Astronomy (DSPSR), which verifies the effectiveness of PSRDP. Furthermore, using the same baseband data, we compared the processing speed of PSRDP with that of DSPSR, and the results showed that PSRDP was not slower than DSPSR. The theoretical and technical experience gained from this research lays a technical foundation for the real-time processing of ultra-wide-bandwidth pulsar baseband data from the QTT (Qi Tai radio Telescope).
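Of the operations listed above, folding is the easiest to illustrate on synthetic data. The following minimal CPU sketch (not the paper's GPU implementation) bins samples by rotational phase at a known period and averages:

```python
import numpy as np

def fold(times, values, period, n_bins=32):
    """Fold a time series at a known period: average samples into phase bins."""
    phases = (times / period) % 1.0
    bins = np.minimum((phases * n_bins).astype(int), n_bins - 1)
    profile = np.zeros(n_bins)
    counts = np.zeros(n_bins)
    np.add.at(profile, bins, values)   # accumulate samples per phase bin
    np.add.at(counts, bins, 1)
    return profile / np.maximum(counts, 1)

# Synthetic pulsar-like signal: a narrow pulse repeating every 0.5 s.
t = np.arange(0.0, 100.0, 0.01)
signal = np.where((t % 0.5) < 0.05, 1.0, 0.0)
profile = fold(t, signal, period=0.5)
```

Folding many rotations into one phase profile is what builds up the signal-to-noise ratio of the pulse profiles mentioned in the abstract.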
Abstract: This paper introduces MultiPHydro, an in-house computational solver developed for simulating hydrodynamic and multiphase fluid-body interaction problems, with a specialized focus on multiphase flow dynamics. The solver employs the boundary data immersion method (BDIM) as its core numerical framework for handling fluid-solid interfaces. We briefly outline the governing equations and physical models integrated within MultiPHydro, including weakly compressible flows, cavitation modeling, and the volume of fluid (VOF) method with piecewise-linear interface reconstruction. The solver's accuracy and versatility are demonstrated through several numerical benchmarks: single-phase flow past a cylinder shows less than 10% error in vortex shedding frequency and under 4% error in hydrodynamic resistance; cavitating flows around a hydrofoil yield errors below 7% in maximum cavity length; water-entry cases exhibit under 5% error in displacement and velocity; and water-exit simulations predict cavity length within 7.2% deviation. These results confirm the solver's capability to reliably model complex fluid-body interactions across various regimes. Future developments will focus on refining the mathematical models, improving the modeling of phase-interaction mechanisms, and implementing GPU-accelerated parallel algorithms to enhance compatibility with domestically developed operating systems and deep computing units (DCUs).
Funding: financial support from the Fundamental Research Funds for the Central Universities (ZJ2022-003, JG2022-27, J2020-060, and J2021-060), the Sichuan Province Engineering Technology Research Center of General Aircraft Maintenance (GAMRC2021YB08), and the Young Scientists Fund of the National Natural Science Foundation of China (No. 52105417) is acknowledged.
Abstract: To overcome the challenge of limited experimental data and improve the accuracy of empirical formulas, we propose a low-cycle fatigue (LCF) life prediction model for nickel-based superalloys using a data augmentation method. The method utilizes a variational autoencoder (VAE) to generate low-cycle fatigue data and form an augmented dataset. The Pearson correlation coefficient (PCC) is employed to verify the similarity of feature distributions between the original and augmented datasets. Six machine learning models, namely random forest (RF), artificial neural network (ANN), support vector machine (SVM), gradient-boosted decision tree (GBDT), eXtreme Gradient Boosting (XGBoost), and Categorical Boosting (CatBoost), are used to predict the LCF life of nickel-based superalloys. Results indicate that the proposed VAE-based data augmentation method can effectively expand the dataset, and the mean absolute error (MAE), root mean square error (RMSE), and R-squared (R^2) values achieved with the CatBoost model, 0.0242, 0.0391, and 0.9538 respectively, are superior to those of the other models. The proposed method reduces the cost and time associated with LCF experiments and accurately establishes the relationship between fatigue characteristics and the LCF life of nickel-based superalloys.
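The PCC-based similarity check can be sketched as follows. Comparing sorted quantiles of the two samples is one plausible way to turn a distribution comparison into a single correlation value; it is an assumption here, not necessarily the paper's exact procedure, and the data are synthetic:

```python
import numpy as np

# Hypothetical original vs. augmented samples of one fatigue feature
# (e.g. strain amplitude); the augmented set mimics the original distribution.
rng = np.random.default_rng(1)
original = rng.normal(loc=0.8, scale=0.1, size=200)
augmented = rng.normal(loc=0.8, scale=0.1, size=200)

# Compare the two empirical distributions through their sorted quantiles;
# a Pearson correlation near 1 indicates closely matching distributions.
q = np.linspace(0.01, 0.99, 50)
pcc = np.corrcoef(np.quantile(original, q), np.quantile(augmented, q))[0, 1]
```

A PCC far below 1 on such a quantile-quantile comparison would flag augmented samples whose distribution drifted away from the measured data.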
Funding: supported by the China National Astronomical Data Center (NADC), the CAS Astronomical Data Center and the Chinese Virtual Observatory (China-VO), and by the Astronomical Big Data Joint Research Center, co-founded by the National Astronomical Observatories, Chinese Academy of Sciences and Alibaba Cloud.
Abstract: Ancient stellar observations are a valuable cultural heritage, profoundly influencing both cultural domains and modern astronomical research. Shi's Star Catalog (石氏星经), the oldest extant star catalog in China, faces controversy regarding its observational epoch. Determining this epoch via precession assumes accurate ancient coordinates and correspondence with contemporary stars, posing significant challenges. This study introduces a novel method using the Generalized Hough Transform to ascertain the catalog's observational epoch. This approach statistically accommodates errors in ancient coordinates and discrepancies between ancient and modern stars, addressing limitations of prior methods. Our findings date Shi's Star Catalog to the 4th century BCE, with 2nd-century CE adjustments. In comparison, the oldest known catalog in the Western tradition, the Ptolemaic Star Catalog (2nd century CE), likely derives from the Hipparchus Star Catalog (2nd century BCE). Thus, Shi's Star Catalog is identified as the world's oldest known star catalog. Beyond establishing its observation period, this study aims to consolidate and digitize these cultural artifacts.
Funding: supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (grant No. XDA0350502), the National SKA Program of China (grant No. 2020SKA0120103), and the National Natural Science Foundation of China (NSFC, Grant Nos. U1831130 and 11973046).
Abstract: The clock difference PT-TAI between an ensemble pulsar timescale (PT) and International Atomic Time (TAI), derived from the International Pulsar Timing Array (IPTA) data set, shows a variation trend very similar to that of Terrestrial Time, TT(BIPMXXXX)-TAI, but PT has a larger measurement error. In this paper, we discuss smoothing PT with a combined smoothing filter and compare the results with those from other filters. The clock difference sequence PT-TAI and the first time derivative series of TT(BIPMXXXX)-TAI can be processed jointly by a combined smoothing filter to yield two smooth curves tied by constraints ensuring that the latter is the derivative of the former. The ensemble pulsar time IPTA2016 with respect to TAI published by G. Hobbs et al. and the first time derivative series of TT(BIPM2017)-TAI, with quadratic polynomial terms removed, are processed by the combined smoothing filter to demonstrate the properties of the smoothed results. How to correctly estimate the two smoothing coefficients is described, and the output of the combined smoothing filter is analyzed. The results show that the combined smoothing method efficiently removes high-frequency noise from the two input data series, and the smoothed PT-TAI data combine the long-term fractional frequency stability of the pulsar time with the frequency accuracy of the terrestrial time. Fractional frequency stability analysis indicates that both the short- and medium-interval stability of the smoothed PT-TAI is improved while its original long-term frequency stability level is kept. The combined smoothing filter is more suitable for smoothing observational pulsar timescale data than any filter that only smooths a single pulsar time series. The pulsar time smoothed by the combined filter is a combined pulsar-atomic timescale, which can also be used as a terrestrial time.
Funding: supported by the National Natural Science Foundation of China under grant No. U2031144, partially supported by the Open Project Program of the Key Laboratory of Optical Astronomy, National Astronomical Observatories, Chinese Academy of Sciences, the National Key R&D Program of China (No. 2021YFA1600404), the National Natural Science Foundation of China (12173082), the Yunnan Fundamental Research Projects (grant 202201AT070069), the Top-notch Young Talents Program of Yunnan Province, the Light of West China Program provided by the Chinese Academy of Sciences, and the International Centre of Supernovae, Yunnan Key Laboratory (No. 202302AN360001).
Abstract: BFOSC and YFOSC are the most frequently used instruments on the Xinglong 2.16 m telescope and the Lijiang 2.4 m telescope, respectively. We developed a software package named “BYSpec” (BFOSC and YFOSC Spectra Reduction Package) dedicated to automatically reducing the long-slit and echelle spectra obtained by these two instruments. The package supports bias and flat-fielding correction, order location, background subtraction, automatic wavelength calibration, and absolute flux calibration. The optimal extraction method maximizes the signal-to-noise ratio and removes most of the cosmic rays imprinted in the spectra. A comparison with 1D spectra reduced with IRAF verifies the reliability of the results. This open-source software is publicly available to the community.
Abstract: Lunar wrinkle ridges are important stress-related geological structures on the Moon, which reflect the stress state and geological activity of the Moon. They provide important insights into lunar evolution and are key factors influencing future lunar activity, such as the choice of landing sites. However, automatic extraction of lunar wrinkle ridges is a challenging task due to their complex morphology and ambiguous features, and traditional manual extraction methods are time-consuming and labor-intensive. To achieve automated and detailed detection of lunar wrinkle ridges, we constructed a lunar wrinkle ridge data set, incorporating previously unused aspect data to provide edge information, and proposed a Dual-Branch Ridge Detection Network (DBR-Net) based on deep learning technology. This method employs a dual-branch architecture and an Attention Complementary Feature Fusion module to address the issue of insufficient lunar wrinkle ridge features. Comparisons with various deep learning approaches demonstrate that the proposed method exhibits superior detection performance. Furthermore, the trained model was applied to lunar mare regions, generating a distribution map of lunar mare wrinkle ridges; a significant linear relationship between the length and area of lunar wrinkle ridges was obtained through statistical analysis, and six previously unrecorded potential lunar wrinkle ridges were detected. The proposed method upgrades the automated extraction of lunar wrinkle ridges to pixel-level precision and verifies the effectiveness of DBR-Net in lunar wrinkle ridge detection.
Funding: funded by the National Key Research and Development Program's intergovernmental International Science and Technology Innovation Cooperation project, titled Remote Sensing and Radio Astronomy Observation of Space Weather in Low and Middle Latitudes (project number: 2022YFE0140000), and supported by the International Partnership Program of the Chinese Academy of Sciences, grant No. 114A11KYSB20200001.
Abstract: The 21 cm radiation of neutral hydrogen provides crucial information for studying the early universe and its evolution. To advance this research, countries have made significant investments in constructing large low-frequency radio telescope arrays, such as the Low Frequency Array and the Square Kilometre Array Phase 1 Low Frequency. These instruments are pivotal for radio astronomy research. However, challenges such as ionospheric plasma interference, ambient radio noise, and instrument-related effects have become increasingly prominent, posing major obstacles in cosmology research. To address these issues, this paper proposes an efficient signal processing method that combines the wavelet transform and mathematical morphology. The method involves the following steps: (1) background subtraction: background interference in the radio observation signal is eliminated; (2) wavelet transform: the signal, after background removal, undergoes a two-dimensional discrete wavelet transform, and threshold processing is applied to the wavelet coefficients to effectively remove interference components; (3) wavelet inversion: the processed signal is reconstructed using the inverse wavelet transform; (4) mathematical morphology: the reconstructed signal is further optimized using mathematical morphology to refine the results. Experimental verification was conducted using solar observation data from the Xinjiang Observatory and the Yunnan Observatory. The results demonstrate that this method successfully removes interference signals while preserving useful signals, thus improving the accuracy of radio astronomy observations and reducing the impact of radio frequency interference.
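The mathematical-morphology step can be illustrated with a minimal 1D grey-scale opening (erosion followed by dilation), which removes narrow impulsive interference while keeping broader features. This is a generic sketch on synthetic data, not the paper's pipeline; the wavelet steps would need a wavelet library such as PyWavelets and are omitted:

```python
import numpy as np

def erode(x, k=3):
    """1D grey-scale erosion: sliding-window minimum (edges padded)."""
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    return np.min(np.lib.stride_tricks.sliding_window_view(xp, k), axis=1)

def dilate(x, k=3):
    """1D grey-scale dilation: sliding-window maximum (edges padded)."""
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    return np.max(np.lib.stride_tricks.sliding_window_view(xp, k), axis=1)

def opening(x, k=3):
    """Morphological opening = erosion then dilation: removes spikes
    narrower than the structuring element while keeping broad features."""
    return dilate(erode(x, k), k)

# Broad solar-like bump plus two one-sample RFI spikes.
t = np.linspace(0, 1, 200)
signal = np.exp(-((t - 0.5) ** 2) / 0.02)
corrupted = signal.copy()
corrupted[[50, 130]] += 5.0
cleaned = opening(corrupted, k=3)
```

The structuring-element width `k` sets the scale: anything narrower than `k` samples is treated as interference and flattened.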
Abstract: Compositional data, i.e., relative information recorded as closed data summing to a constant such as 100%, is a crucial aspect of machine learning and related fields. The linear regression model is the most widely used statistical technique for identifying hidden relationships between underlying random variables of interest, with applications such as future prediction and partial-effects analysis of the independent variables, and maximum likelihood estimation (MLE) is the method of choice for estimating its parameters. However, data quality is a significant challenge in machine learning, especially when observations are missing, and recovering missing data can be costly and time-consuming. To address this issue, the expectation-maximization (EM) algorithm has been suggested for situations involving missing data. The EM algorithm iteratively finds maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models that depend on unobserved variables. Using the current parameter estimate, the expectation (E) step constructs the expected log-likelihood function; the maximization (M) step then finds the parameters that maximize the expected log-likelihood determined in the E step. This study evaluated how well the EM algorithm performs on a simulated compositional dataset with missing observations, using both ordinary least squares and robust least squares regression. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation, in terms of Aitchison distances and covariance.
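The E and M steps described above can be made concrete with a minimal example, assuming a bivariate normal model with values of one variable missing at random. This is deliberately much simpler than the compositional setting of the study, but the alternation is the same: impute expected sufficient statistics given the current parameters, then refit the parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
x = rng.normal(2.0, 1.0, n)
y = 1.0 + 0.5 * x + rng.normal(0.0, 0.3, n)
miss = rng.random(n) < 0.3            # 30% of x unobserved
x_obs = x.copy()
x_obs[miss] = np.nan

# EM for the mean/covariance of a bivariate normal (x, y) with missing x,
# initialised from the complete cases.
mu = np.array([np.nanmean(x_obs), y.mean()])
S = np.cov(x_obs[~miss], y[~miss])
for _ in range(50):
    # E-step: expected x and x^2 given y for the missing entries.
    beta = S[0, 1] / S[1, 1]
    cond_var = S[0, 0] - beta * S[0, 1]
    ex = np.where(miss, mu[0] + beta * (y - mu[1]), x_obs)
    ex2 = np.where(miss, ex ** 2 + cond_var, x_obs ** 2)
    # M-step: refit mean and covariance from the expected statistics.
    mu = np.array([ex.mean(), y.mean()])
    cxy = (ex * y).mean() - mu[0] * mu[1]
    S = np.array([[ex2.mean() - mu[0] ** 2, cxy],
                  [cxy, (y ** 2).mean() - mu[1] ** 2]])

# Regression slope of y on x recovered from the completed statistics.
slope = S[0, 1] / S[0, 0]
```

Despite 30% of the covariate being unobserved, the EM estimates recover the generating slope (0.5) and mean (2.0) closely, which is the behaviour the study probes in the compositional setting.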
Funding: supported by the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 51205283).
Abstract: This paper introduces the basic theory and algorithm of the surrogate data method, which provides a rigorous way to detect random and seemingly stochastic characteristics in a system. Gaussian data and Rössler data are used to show the validity and effectiveness of the method. Analysis by this method of short-circuiting current signals recorded under the same voltage but different wire feed speeds demonstrates that the electrical signal time series exhibit apparent randomness when the welding parameters do not match, whereas the time series are deterministic when a match is found. The stability of the short-circuiting transfer process can thus be judged exactly by the surrogate data method.
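The classic construction of surrogate data is phase randomization: keep a series' amplitude spectrum (hence its linear autocorrelation) while destroying any nonlinear structure, then compare a nonlinearity statistic between the original and the surrogates. A minimal sketch of the construction step:

```python
import numpy as np

def phase_randomized_surrogate(x, rng):
    """Fourier surrogate: keep the amplitude spectrum, randomize the phases.
    Preserves the linear autocorrelation, destroys nonlinear determinism."""
    n = len(x)
    X = np.fft.rfft(x)
    phases = rng.uniform(0, 2 * np.pi, len(X))
    phases[0] = 0.0                    # keep the DC (mean) bin real
    if n % 2 == 0:
        phases[-1] = 0.0               # keep the Nyquist bin real
    return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=n)

rng = np.random.default_rng(3)
x = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.1 * rng.normal(size=1024)
s = phase_randomized_surrogate(x, rng)
```

A test statistic (e.g. a prediction error) computed on `x` and on an ensemble of such surrogates then decides whether the series is consistent with a linear stochastic process.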
Abstract: It is not reasonable to use only the adjoint of the model in data assimilation. A simulated numerical experiment shows that, for the tidal model, the result of the adjoint of the equation is almost the same as that of the adjoint of the model: the averaged absolute difference in amplitude between observations and simulation is less than 5.0 cm, and that of the phase lag is less than 5.0°. Both results are in good agreement with the observed M2 tide in the Bohai Sea and the Yellow Sea. For comparison, traditional methods were also used to simulate the M2 tide in these seas: initial guess values of the boundary conditions are given first and then adjusted to make the simulated results as close as possible to the observations. As the boundary conditions contain 72 values, which values should be adjusted and how to adjust them can only be partially resolved by repeated trial adjustment, and satisfactory results are hard to acquire even with enormous effort. Here, the treatment of the open boundary conditions is automated; the method is distinctive and superior to the traditional ones. It is emphasized that using the adjoint of the equation avoids tedious and complicated mathematical deduction, so the adjoint of the equation deserves much attention.
Funding: supported by the Scientific Research Special Project of the TCM Profession (200907001E) and the Science and Technology Special Major Project for "Significant New Drugs Formulation" (2009ZX09301-005-02).
Abstract: Objective: To analyze the component law of Chinese patent medicines for anti-influenza and develop new anti-influenza prescriptions using unsupervised data mining methods. Methods: Chinese patent medicine recipes for anti-influenza were collected and recorded in a database, and the correlation coefficients between herbs, the core combinations of herbs, and new prescriptions were analyzed using modified mutual information, complex-system entropy clustering, and unsupervised hierarchical clustering, respectively. Results: Based on analysis of 126 Chinese patent medicine recipes, the frequency of occurrence of each herb in these recipes, 54 frequently used herb pairs, and 34 core combinations were determined, and 4 new recipes for influenza were developed. Conclusion: Unsupervised data mining methods are able to mine the component law quickly and develop new prescriptions.
Funding: the Innovation Program of Shanghai Municipal Education Commission under contract No. 14ZZ147, and the Opening Project of the Key Laboratory of Sustainable Exploitation of Oceanic Fisheries Resources (Shanghai Ocean University), Ministry of Education, under contract No. A1-0209-15-0503-1.
Abstract: It is widely recognized that assessments of the status of data-poor fish stocks are challenging and that Bayesian analysis is one method which can improve the reliability of stock assessments in data-poor situations by borrowing strength from prior information deduced from species with good-quality data or other known information. Because considerable uncertainty remains in the stock assessment of albacore tuna (Thunnus alalunga) in the Indian Ocean due to the limited and low-quality data, we investigate the advantages of a Bayesian method in data-poor stock assessment using the Indian Ocean albacore stock assessment as an example. Eight Bayesian biomass dynamics models with different prior assumptions and catch data series were developed to assess the stock. The results show that (1) the rationality of the choice of catch data series and of the parameter assumptions can be enhanced by analyzing the posterior distributions of the parameters; and (2) the reliability of the stock assessment can be improved by using demographic methods to construct a prior for the intrinsic rate of increase (r). Because, compared with traditional statistical methods, more information can be used to improve the rationality of parameter estimation and the reliability of the stock assessment by incorporating available knowledge into informative priors and analyzing the posterior distributions within a Bayesian framework, we suggest the Bayesian method as an alternative for data-poor stock assessments, such as that of Indian Ocean albacore.
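The biomass dynamics family the abstract refers to can be sketched with a minimal discrete Schaefer surplus-production model; the parameter values below are illustrative only, not Indian Ocean albacore estimates:

```python
# Discrete Schaefer surplus-production model:
#   B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t]
# with intrinsic growth rate r, carrying capacity K, and catches C.
def project_biomass(b0, r, k, catches):
    """Project the biomass trajectory forward under a catch series."""
    biomass = [b0]
    for c in catches:
        b = biomass[-1]
        biomass.append(max(b + r * b * (1.0 - b / k) - c, 1e-9))
    return biomass

# Start at carrying capacity with a constant catch below MSY (= r*K/4 = 7.5).
traj = project_biomass(b0=100.0, r=0.3, k=100.0, catches=[5.0] * 20)
```

In the Bayesian setting, priors are placed on r and K (e.g. the demographically derived prior for r mentioned above), and the posterior of the trajectory is obtained by conditioning this dynamics on observed catch and abundance-index data.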
Funding: supported by the Southern Marine Science and Engineering Guangdong Laboratory (Zhanjiang) (No. ZJW-2019-04), the Cooperative Innovation Center of Unconventional Oil and Gas (Ministry of Education & Hubei Province), Yangtze University (No. UOG2020-17), and the National Natural Science Foundation of China (Nos. 51874044 and 51922007).
Abstract: A data-space inversion (DSI) method has recently been proposed and successfully applied to the history matching and production prediction of reservoirs. Based on Bayesian theory, DSI can directly and effectively obtain good posterior flow predictions without inverting the geological parameters of the reservoir model. This paper presents an improved DSI method to rapidly predict reservoir state fields (e.g., saturation and pressure profiles) from observed production data. First, a large number of production curves and state data are generated by reservoir simulation to expand the data space of the original DSI. Then, efficient history matching on only the observed production data is carried out via the original DSI to obtain parameters which reflect the weight of the real reservoir model relative to the prior reservoir models. Finally, those parameters are used to predict the oil saturation and pressure profiles of the real reservoir model by combining large amounts of state data from the prior reservoir models. Two examples, including a conventional heterogeneous reservoir and an unconventional fractured reservoir, are implemented to test the performance of the improved DSI method in predicting saturation and pressure profiles. The method is also tested on a real field, and the obtained results show the high computational efficiency and high accuracy of its practical application.
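The idea of weighting prior-model states by their production-data mismatch can be caricatured with a simple likelihood-weighting scheme. This is a stand-in for illustration only, not the actual DSI algorithm, and all data here are synthetic:

```python
import numpy as np

# Hypothetical ensemble: each prior reservoir model i has simulated
# production data d_i and a state field s_i (e.g. a saturation map).
rng = np.random.default_rng(4)
n_models, n_data, n_cells = 200, 30, 100
d_prior = rng.normal(size=(n_models, n_data))
s_prior = rng.normal(size=(n_models, n_cells)) + d_prior[:, :1]
d_obs = d_prior[7] + 0.05 * rng.normal(size=n_data)   # "observed" production

# Likelihood-style weights from the data mismatch, then a weighted
# combination of prior states as the posterior state prediction.
mis = np.sum((d_prior - d_obs) ** 2, axis=1)
w = np.exp(-0.5 * (mis - mis.min()))
w /= w.sum()
s_post = w @ s_prior            # predicted state field, no parameter inversion
```

The key point mirrored from the abstract is that the state prediction is assembled directly from prior-model states using data-space weights, with no inversion of geological parameters.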
Funding: supported by the National Natural Science Foundation of China (No. 52178065).
Abstract: Water vapor permeability of building materials is a crucial parameter for analysing and optimizing the hygrothermal performance of building envelopes and built environments. Its measurement is accurate but time-consuming, while data mining methods have the potential to predict water vapor permeability efficiently. In this study, six data mining methods were compared for predicting the water vapor permeability of cement-based materials: support vector regression (SVR), decision tree regression (DT), random forest regression (RF), K-nearest neighbor (KNN), multi-layer perceptron (MLP), and adaptive boosting regression (AdaBoost). A total of 143 datasets of material properties were collected to build the prediction models, and five materials were measured experimentally for model validation. The results show that RF has excellent generalization, stability, and precision. AdaBoost has great generalization and precision, only slightly inferior to RF, and its stability is excellent. DT has good precision and acceptable generalization, but its stability is poor. SVR and KNN have superior stability, but their generalization and precision are inadequate. MLP lacks generalization, and its stability and precision are unacceptable. In short, RF has the best comprehensive performance, demonstrated by a prediction deviation limited to 26.3% from the experimental results, better than AdaBoost (38.0%) and DT (38.3%) and far better than the remaining methods. It is also found that the data mining methods give better predictions when the water vapor permeability of cement-based materials is high.
基金National Natural Science Foundation of China(Nos.51875199 and 51905165)Hunan Natural Science Fund Project(2019JJ50186)the Ke7y Research and Development Program of Hunan Province(No.2018GK2073).
Abstract: Due to frequent changes of wind speed and wind direction, the accuracy of wind turbine (WT) power prediction using traditional data preprocessing methods is low. This paper proposes a data preprocessing method which combines POT with DBSCAN (POT-DBSCAN) to improve the prediction efficiency of the wind power prediction model. First, based on data from a WT under normal operating conditions, the power prediction model of the WT is established using the Particle Swarm Optimization algorithm combined with the BP neural network (PSO-BP). Second, the wind power data obtained from the supervisory control and data acquisition (SCADA) system are preprocessed by the POT-DBSCAN method. Then, power prediction on the preprocessed data is carried out by the PSO-BP model. Finally, the necessity of preprocessing is verified by the evaluation indexes. The case analysis shows that the prediction result with POT-DBSCAN preprocessing is better than that with the Quartile method. Therefore, the accuracy of the data and of the prediction model can be improved by using this method.
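Assuming POT here refers to the peaks-over-threshold idea (the abstract does not expand the acronym), the threshold-exceedance step can be sketched as follows; the subsequent DBSCAN outlier-removal step would typically use a library implementation such as scikit-learn's:

```python
import numpy as np

def pot_exceedances(x, quantile=0.95):
    """Peaks-over-threshold: return the threshold and the samples above it."""
    u = np.quantile(x, quantile)
    return u, x[x > u]

# Synthetic wind-power samples (kW); a Weibull shape is a common rough model.
rng = np.random.default_rng(5)
wind_power = rng.weibull(2.0, size=1000) * 500.0
u, peaks = pot_exceedances(wind_power, 0.95)
```

The retained exceedances would then be screened (e.g. by DBSCAN density clustering) to separate genuine high-power operating points from anomalous SCADA records.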
Funding: supported by the Aeronautical Science Foundation of China (No. 20100251006) and the Technological Foundation Project of China (No. J132012C001).
Abstract: For random vibration of an airborne platform, accurate evaluation is a key indicator for ensuring normal operation of airborne equipment in flight. However, only limited power spectral density (PSD) data can be obtained at the flight test stage, so conventional evaluation methods cannot be employed when the distribution characteristics and prior information are unknown. In this paper, the fuzzy norm method (FNM) is proposed, which combines the advantages of fuzzy theory and norm theory. The proposed method can extract deep system information from limited data whose probability distribution is not taken into account. First, the FNM is employed to evaluate the variable interval and expanded uncertainty from limited PSD data, and the performance of the FNM is demonstrated by the confidence level, reliability, and computing accuracy of the expanded uncertainty. In addition, the optimal fuzzy parameters are discussed to meet the requirements of aviation standards and metrological practice. Finally, computer simulation is used to prove the adaptability of the FNM. Compared with statistical methods, the FNM is superior for evaluating expanded uncertainty from limited data. The results show that the reliability of the calculation and evaluation is greater than 95%.