Objective Humans are exposed to complex mixtures of environmental chemicals and other factors that can affect their health. Analysis of these mixture exposures presents several key challenges for environmental epidemiology and risk assessment, including high dimensionality, correlated exposures, and subtle individual effects. Methods We proposed a novel statistical approach, the generalized functional linear model (GFLM), to analyze the health effects of exposure mixtures. The GFLM treats the effect of mixture exposures as a smooth function by reordering exposures based on specific mechanisms and capturing internal correlations, providing meaningful estimation and interpretation. Its robustness and efficiency were evaluated under various scenarios through extensive simulation studies. Results We applied the GFLM to two datasets from the National Health and Nutrition Examination Survey (NHANES). In the first application, we examined the effects of 37 nutrients on BMI (2011–2016 cycles). The GFLM identified a significant mixture effect, with fiber and fat emerging as the nutrients with the greatest negative and positive effects on BMI, respectively. In the second application, we investigated the association between four per- and polyfluoroalkyl substances (PFAS) and gout risk (2007–2018 cycles). Unlike traditional methods, the GFLM indicated no significant association, demonstrating its robustness to multicollinearity. Conclusion The GFLM framework is a powerful tool for mixture exposure analysis, offering improved handling of correlated exposures and interpretable results. It demonstrates robust performance across various scenarios and real-world applications, advancing our understanding of complex environmental exposures and their health impacts in environmental epidemiology and toxicology.
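To make the GFLM idea concrete, the following is a minimal, hypothetical sketch in Python: the reordered exposures are treated as evaluations of a functional covariate, the coefficient function is expanded in a B-spline basis, and the model reduces to an ordinary GLM on the basis scores. All data, basis sizes and variable names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a generalized functional linear model (GFLM):
# exposures reordered along a mechanistic index t, coefficient function
# beta(t) expanded in a B-spline basis, fitted as a GLM on basis scores.
import numpy as np
import statsmodels.api as sm
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)
n, p, n_basis, degree = 500, 37, 8, 3        # subjects, exposures, spline size
X = rng.normal(size=(n, p))                  # exposure matrix (already reordered)
t = np.linspace(0, 1, p)                     # index position of each exposure

# Open-uniform knot vector giving exactly n_basis cubic B-spline elements
knots = np.concatenate([[0.0] * degree,
                        np.linspace(0, 1, n_basis - degree + 1),
                        [1.0] * degree])
basis = np.column_stack([
    BSpline.basis_element(knots[j:j + degree + 2], extrapolate=False)(t)
    for j in range(n_basis)
])
basis = np.nan_to_num(basis)                 # zero outside each element's support

Z = X @ basis                                # functional covariate scores (n, n_basis)
y = rng.normal(size=n)                       # outcome placeholder, e.g. BMI
fit = sm.GLM(y, sm.add_constant(Z), family=sm.families.Gaussian()).fit()

beta_hat = basis @ fit.params[1:]            # estimated smooth effect beta(t_j)
print(np.round(beta_hat, 3))                 # one coefficient per ordered exposure
```

The smoothness of beta_hat is what lets correlated neighboring exposures share strength instead of competing as separate regressors.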
Latent factor (LF) models are highly effective in extracting useful knowledge from High-Dimensional and Sparse (HiDS) matrices, which are commonly seen in various industrial applications. An LF model usually adopts iterative optimizers, which may consume many iterations to reach a local optimum, resulting in considerable time cost. Hence, determining how to accelerate the training process for LF models has become a significant issue. To address this, this work proposes a randomized latent factor (RLF) model. It incorporates the principle of randomized learning techniques from neural networks into the LF analysis of HiDS matrices, thereby greatly alleviating the computational burden. It also extends a standard learning process for randomized neural networks to the context of LF analysis so that the resulting model represents an HiDS matrix correctly. Experimental results on three HiDS matrices from industrial applications demonstrate that, compared with state-of-the-art LF models, RLF achieves significantly higher computational efficiency and comparable prediction accuracy for missing data. It provides an important alternative approach to LF analysis of HiDS matrices, which is especially desirable for industrial applications demanding highly efficient models.
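The randomized-learning principle the RLF model borrows can be sketched as follows: one factor matrix is drawn at random and frozen, as in randomized neural networks, so the other factor has a closed-form least-squares solution over the observed entries. This is an illustration of the principle under assumed shapes and synthetic data, not the paper's exact algorithm.

```python
# Hedged sketch of randomized latent factor analysis on a sparse matrix:
# Q is random and never trained; P is solved row-by-row in closed form.
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)
m, n, k = 200, 150, 10
R = sparse_random(m, n, density=0.05, random_state=0, format="csr")  # HiDS matrix

Q = rng.uniform(-1, 1, size=(n, k))      # randomized factors: generated once, frozen
P = np.zeros((m, k))                     # learned factors: per-row least squares
lam = 0.1                                # Tikhonov regularization

for i in range(m):
    cols = R.indices[R.indptr[i]:R.indptr[i + 1]]    # observed columns in row i
    if cols.size == 0:
        continue
    vals = R.data[R.indptr[i]:R.indptr[i + 1]]
    A = Q[cols]                                      # (|obs|, k) frozen random features
    P[i] = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ vals)

pred = P @ Q.T                           # dense reconstruction for missing-data prediction
print(pred.shape)
```

Because each row solve is a small k-by-k linear system, training needs no iterative optimizer at all, which is the source of the efficiency gain the abstract describes.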
Anti-jamming performance evaluation has recently received significant attention. For Link-16, anti-jamming performance evaluation and selection of the optimal anti-jamming technologies are urgent problems to be solved. A comprehensive evaluation method is proposed, which combines grey relational analysis (GRA) and the cloud model, to evaluate the anti-jamming performance of Link-16. Firstly, on the basis of establishing the anti-jamming performance evaluation indicator system of Link-16, a linear combination of the analytic hierarchy process (AHP) and the entropy weight method (EWM) is used to calculate the combined weights. Secondly, the qualitative-quantitative concept transformation model, i.e., the cloud model, is introduced to evaluate the anti-jamming abilities of Link-16 under each jamming scheme. In addition, GRA calculates the correlation degree between the evaluation indicators and the anti-jamming performance of Link-16 and identifies the best anti-jamming technology. Finally, simulation results show that the proposed evaluation model achieves feasible and practical evaluation, opening up a novel way for research on anti-jamming performance evaluation of Link-16.
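As an illustration of the GRA step, the sketch below computes grey relational coefficients of candidate schemes against a reference (ideal) sequence and aggregates them with indicator weights; the weights stand in for the combined AHP/EWM weights, and all numbers are made up.

```python
# Hedged sketch of grey relational analysis (GRA): relational coefficients
# xi = (d_min + rho*d_max) / (delta + rho*d_max), aggregated by weights.
import numpy as np

def grey_relational_degree(x0, X, weights, rho=0.5):
    """x0: (m,) reference sequence; X: (n, m) alternatives; rho: resolution coefficient."""
    delta = np.abs(X - x0)                              # absolute differences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)  # relational coefficients
    return xi @ weights                                 # weighted relational degree

scores = grey_relational_degree(
    x0=np.array([1.0, 1.0, 1.0]),                       # ideal values of 3 indicators
    X=np.array([[0.9, 0.7, 0.8], [0.6, 0.95, 0.7]]),    # two anti-jamming schemes
    weights=np.array([0.5, 0.3, 0.2]),                  # illustrative AHP/EWM weights
)
print(scores)   # larger degree -> closer to the ideal anti-jamming performance
```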
Well logging technology has accumulated a large amount of historical data through four generations of technological development, which forms the basis of well logging big data and digital assets. However, the value of these data has not been well stored, managed and mined. The development of cloud computing technology provides a rare opportunity for a logging big data private cloud. The traditional petrophysical evaluation and interpretation model has encountered great challenges when facing new evaluation objects, and research on integrating distributed storage, processing and learning functions into a logging big data private cloud has not yet been carried out. The goal is to establish a distributed logging big data private cloud platform centered on a unified learning model, which achieves the distributed storage and processing of logging big data and facilitates the learning of novel knowledge patterns via a unified logging learning model integrating physical simulation and data models in a large-scale function space, thus resolving the geo-engineering evaluation problem of geothermal fields. Based on the research idea of "logging big data cloud platform - unified logging learning model - large function space - knowledge learning & discovery - application", the theoretical foundation of the unified learning model, the cloud platform architecture, data storage and learning algorithms, computing power allocation and platform monitoring, platform stability, and data security are analyzed. The designed logging big data cloud platform realizes parallel distributed storage and processing of data and learning algorithms. The feasibility of constructing a well logging big data cloud platform based on a unified learning model of physics and data is analyzed in terms of the structure, ecology, management and security of the cloud platform. The case study shows that the logging big data cloud platform has obvious technical advantages over traditional logging evaluation methods in terms of knowledge discovery methods, data, software and results sharing, accuracy, speed and complexity.
Because all the known integrable models possess Schwarzian forms with Möbius transformation invariance, one of the best ways to find new integrable models may be to start from suitable Möbius transformation invariant equations. In this paper, we study the Painlevé integrability of some special (3+1)-dimensional Schwarzian models.
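For reference, the Schwarzian derivative underlying these forms, and the Möbius invariance that motivates the search, can be written as

$$\{u;x\} \equiv \frac{u_{xxx}}{u_x}-\frac{3}{2}\left(\frac{u_{xx}}{u_x}\right)^{2},\qquad \left\{\frac{au+b}{cu+d};x\right\}=\{u;x\},\quad ad-bc\neq 0,$$

so any equation built solely from \(\{u;x\}\) and its derivatives is automatically invariant under Möbius transformations of u.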
The objective of reliability-based design optimization (RBDO) is to minimize the optimization objective while satisfying the corresponding reliability requirements. However, the nested-loop characteristic reduces the efficiency of RBDO algorithms, which hinders their application to high-dimensional engineering problems. To address these issues, this paper proposes an efficient decoupled RBDO method combining high-dimensional model representation (HDMR) and the weight-point estimation method (WPEM). First, we decouple the RBDO model using HDMR and WPEM. Second, Lagrange interpolation is used to approximate each univariate function. Finally, based on the results of the first two steps, the original nested-loop reliability optimization model is completely transformed into a deterministic design optimization model that can be solved by a series of mature constrained optimization methods without any additional calculations. Two numerical examples, a planar 10-bar structure and an aviation hydraulic piping system with 28 design variables, are analyzed to illustrate the performance and practicability of the proposed method.
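The decoupling device can be illustrated with a first-order cut-HDMR surrogate whose univariate components are Lagrange-interpolated, the ingredient the method builds on; the cut point, node count and test function below are illustrative assumptions.

```python
# Hedged sketch of a first-order cut-HDMR surrogate with Lagrange
# interpolation: g(x) ~ g(c) + sum_i [ g_i(x_i) - g(c) ], where g_i varies
# only the i-th variable about the cut point c.
import numpy as np
from scipy.interpolate import lagrange

def g(x):                                     # expensive objective / limit-state function
    return x[0] ** 2 + 3 * x[0] * x[1] + np.sin(x[2])

c = np.array([0.5, 0.5, 0.5])                 # cut point (e.g. means of the variables)
g_c = g(c)
nodes = np.linspace(0.0, 1.0, 5)              # interpolation nodes per variable

components = []
for i in range(c.size):
    vals = []
    for xi in nodes:                          # vary one coordinate, freeze the rest
        x = c.copy()
        x[i] = xi
        vals.append(g(x))
    components.append(lagrange(nodes, np.array(vals)))   # univariate polynomial g_i

def g_hdmr(x):                                # cheap surrogate for the optimization loops
    return g_c + sum(components[i](x[i]) - g_c for i in range(len(components)))

x_test = np.array([0.2, 0.8, 0.4])
print(g(x_test), g_hdmr(x_test))              # exact vs. first-order HDMR approximation
```

The interaction term 3·x0·x1 in the toy function is deliberately not captured, showing that the first-order surrogate is an approximation whose accuracy the weight-point estimates must support.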
Cloud diurnal variation is crucial for regulating cloud radiative effects and atmospheric dynamics. However, it is often overlooked in the evaluation and development of climate models. Thus, this study aims to investigate the daily mean (CFR) and diurnal variation (CDV) of cloud fraction across high-, middle-, low-level, and total clouds in the FGOALS-f3-L general circulation model. The bias of total CDV is decomposed into the model biases in the CFRs and CDVs of clouds at all three levels. Results indicate that the model generally underestimates low-level cloud fraction during the daytime and high-/middle-level cloud fraction at nighttime. The simulation biases of low clouds, especially their CDV biases, dominate the bias of total CDV. Compensation effects exist among the bias decompositions, where the negative contributions of underestimated daytime low-level cloud fraction are partially offset by the opposing contributions from biases in high-/middle-level clouds. Meanwhile, the bias contributions have notable land–ocean differences and region-dependent characteristics, consistent with the model biases in these variables. Additionally, the study estimates the influences of CFR and CDV biases on the bias of shortwave cloud radiative effects. It reveals that the impacts of CDV biases can reach half of those from CFR biases, highlighting the importance of accurate CDV representation in climate models.
Existing reverse-engineering methods struggle to directly generate editable, parametric CAD models from scanned data. To address this limitation, this paper proposes a reverse-modeling approach that reconstructs parametric CAD models from multi-view RGB-D point clouds. Multi-frame point-cloud registration and fusion are first employed to obtain a complete 3-D point cloud of the target object. A region-growing algorithm that jointly exploits color and geometric information segments the cloud, while RANSAC robustly detects and fits basic geometric primitives. These primitives serve as nodes in a graph whose edge features are inferred by a graph neural network to capture spatial constraints. From the detected primitives and their constraints, a high-accuracy, fully editable parametric CAD model is finally exported. Experiments show an average parameter error of 0.3 mm for key dimensions and an overall geometric reconstruction accuracy of 0.35 mm. The work offers an effective technical route toward automated, intelligent 3-D reverse modeling.
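The RANSAC primitive-detection step can be sketched in a few lines of self-contained NumPy; the thresholds, iteration counts and synthetic cloud are illustrative, and a full pipeline would iterate over several primitive types rather than a single plane.

```python
# Hedged sketch of RANSAC plane detection on a point cloud: repeatedly fit
# a plane to 3 random points and keep the model with the most inliers.
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.005, seed=0):
    rng = np.random.default_rng(seed)
    best_inliers, best_model = np.array([], dtype=int), None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                                 # degenerate (collinear) sample
            continue
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)     # point-to-plane distances
        inliers = np.flatnonzero(dist < tol)
        if inliers.size > best_inliers.size:
            best_inliers, best_model = inliers, (normal, sample[0])
    return best_model, best_inliers

rng = np.random.default_rng(1)
plane_pts = np.column_stack([rng.uniform(-1, 1, (500, 2)), rng.normal(0, 0.002, 500)])
outliers = rng.uniform(-1, 1, (100, 3))
(normal, point), inliers = ransac_plane(np.vstack([plane_pts, outliers]))
print(np.round(normal, 3), inliers.size)   # normal near (0, 0, ±1), ~500 inliers
```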
Pronounced climatic differences occur over subtropical South China (SC) and the tropical South China Sea (SCS), and understanding the key cloud-radiation characteristics is essential to simulating East Asian climate. This study investigated cloud fractions and cloud radiative effects (CREs) over SC and the SCS simulated by CMIP6 atmospheric models. Remarkable differences in cloud-radiation characteristics appeared over these two regions. In observations, considerable amounts of low-to-middle level clouds and a cloud radiative cooling effect appeared over SC. In contrast, high clouds prevailed over the SCS, where longwave and shortwave CREs offset each other, resulting in a weaker net cloud radiative effect (NCRE). The models underestimated the NCRE over SC mainly due to weaker shortwave CRE and smaller cloud fractions. Conversely, most models overestimated the NCRE over the SCS because of stronger shortwave CRE and weaker longwave CRE. Regional CREs were closely linked to their dominant cloud fractions. Both observations and simulations showed a negative spatial correlation between total (low) cloud fraction and shortwave CRE over SC, especially in winter, and exhibited a positive correlation between high cloud fraction and longwave CRE over both regions. Compared with the SCS, most models overestimated the spatial correlation between low (high) cloud fraction and SWCRE (LWCRE) over SC, with larger bias ranges among models, indicating an exaggerated cloud radiative cooling (warming) effect caused by low (high) clouds. Moreover, most models struggled to describe regional ascent and its connection with CREs over SC, while they reproduced these connections better over the SCS. This study further suggests that reasonable circulation conditions are crucial to simulating cloud-radiation characteristics well over East Asian regions.
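The cloud radiative effects referred to here follow the conventional definition as all-sky minus clear-sky net radiative fluxes at the top of the atmosphere,

$$\mathrm{SWCRE}=F^{\mathrm{SW}}_{\mathrm{all}}-F^{\mathrm{SW}}_{\mathrm{clear}},\qquad \mathrm{LWCRE}=F^{\mathrm{LW}}_{\mathrm{all}}-F^{\mathrm{LW}}_{\mathrm{clear}},\qquad \mathrm{NCRE}=\mathrm{SWCRE}+\mathrm{LWCRE},$$

so a negative SWCRE indicates shortwave cooling by clouds and a positive LWCRE indicates longwave warming, which is why the two can largely cancel over the high-cloud-dominated SCS.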
DNAN-based insensitive melt-cast explosives have been widely utilized in insensitive munitions in recent years. When constrained DNAN-based melt-cast explosives are ignited under thermal stimulation, the base explosive exists in a molten liquid state, where high-temperature gases expand and react in the form of bubble clouds within the liquid explosive; this process is distinctly different from the dynamic crack propagation observed in solid explosives. In this study, a control model for the reaction evolution of burning-bubble clouds was established to describe the reaction process and quantify the reaction violence of DNAN-based melt-cast explosives, considering the size distribution and activation mechanism of the burning-bubble clouds. The feasibility of the model was verified through experimental results. The results revealed that under geometrically similar conditions, with identical confinement strength and aspect ratio, larger charge structures led to extended initial gas flow and surface burning processes, resulting in greater reaction equivalence and violence at the casing fracture. Under constant charge volume and size, stronger casing confinement accelerated self-enhanced burning, increasing the internal pressure, reaction degree, and reaction violence. Under constant casing thickness and radius, higher aspect ratios led to greater reaction violence at the casing fracture. Moreover, under constant charge volume and casing thickness, higher aspect ratios resulted in higher internal pressure, an increased reaction degree, and greater reaction violence at the casing fracture. Further, larger ullage volumes extended the reaction evolution time and increased the reaction violence under constant casing dimensions. Through a matched design of the opening threshold of the pressure relief holes and the relief structure area, a stable burning reaction could be maintained until completion, thereby controlling the reaction violence. The proposed model effectively reflects the effects of the intrinsic burning rate, casing confinement strength, charge size, ullage volume, and pressure relief structure on the reaction evolution process and reaction violence, providing a theoretical method for the thermal safety design and reaction violence evaluation of melt-cast explosives.
The spatial distribution of discontinuities and the size of rock blocks are key indicators for rock mass quality evaluation and rockfall risk assessment. Traditional manual measurement is often dangerous or impracticable on high, steep rock slopes. In contrast, unmanned aerial vehicle (UAV) photogrammetry is not limited by terrain conditions and can efficiently collect high-precision three-dimensional (3D) point clouds of rock masses through all-round, multi-angle photography for rock mass characterization. In this paper, a new method based on a 3D point cloud is proposed for discontinuity identification and refined rock block modeling. The method consists of four steps: (1) establish a point cloud spatial topology, and calculate the point cloud normal vectors and average point spacing based on several machine learning algorithms; (2) extract discontinuities using the density-based spatial clustering of applications with noise (DBSCAN) algorithm and fit the discontinuity planes by combining principal component analysis (PCA) with the natural breaks (NB) method; (3) insert points along line segments to generate an embedded discontinuity point cloud; and (4) adopt Poisson reconstruction for refined rock block modeling. The proposed method was applied to an outcrop of an ultra-high, steep rock slope and compared with the results of previous studies and manual surveys. The results show that the method can eliminate the influence of discontinuity undulations on orientation measurement and describe local concave-convex characteristics in the modeling of rock blocks. The calculation results are accurate and reliable and can meet the practical requirements of engineering.
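Step (2) of the pipeline, orientation clustering with DBSCAN followed by PCA plane fitting, can be sketched as below; the synthetic joint sets, the per-point normals and the DBSCAN parameters are all illustrative assumptions.

```python
# Hedged sketch of discontinuity extraction: DBSCAN groups points whose
# normals share an orientation; PCA fits a plane to each cluster, with the
# smallest principal direction taken as the plane normal.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Two synthetic joint sets: points near the z = 0 and x = 0 planes
set_a = np.column_stack([rng.uniform(0, 1, (300, 2)), rng.normal(0, 0.003, 300)])
set_b = np.column_stack([rng.normal(0, 0.003, 300), rng.uniform(0, 1, (300, 2))])
cloud = np.vstack([set_a, set_b])
normals = np.vstack([np.tile([0.0, 0.0, 1.0], (300, 1)),   # per-point normals; in
                     np.tile([1.0, 0.0, 0.0], (300, 1))])  # practice estimated from
                                                           # local k-neighborhoods
labels = DBSCAN(eps=0.1, min_samples=20).fit_predict(normals)  # cluster by orientation

for lab in set(labels) - {-1}:                 # -1 would mark noise points
    pts = cloud[labels == lab]
    pca = PCA(n_components=3).fit(pts)
    normal = pca.components_[-1]               # least-variance direction = plane normal
    centroid = pts.mean(axis=0)
    print(f"set {lab}: normal {np.round(normal, 2)}, centroid {np.round(centroid, 2)}")
```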
This paper studies the re-adjusted cross-validation method and a semiparametric regression model called the varying index coefficient model (VICM). We use the profile spline modal estimator method to estimate the coefficients of the parametric part of the VICM, while the unknown function part is expanded with B-splines. Moreover, we combine the above two estimation methods under a high-dimensional data assumption. The results of simulation and empirical analysis show that, for the varying index coefficient model, the re-adjusted cross-validation method is better in terms of accuracy and stability than traditional methods based on ordinary least squares.
Model averaging has attracted increasing attention in recent years for the analysis of high-dimensional data. By suitably weighting several competing statistical models, model averaging attempts to achieve stable and improved prediction. To provide a better understanding of the available model averaging methods, their properties and the relationships between them, this paper reviews some recent progress in high-dimensional model averaging from the frequentist perspective. Some future research topics are also discussed.
The method of the cloud model with entropy weight was adopted for the prediction of rock burst classification. Several main factors of rock burst, including the uniaxial compressive strength (σ_c), the tensile strength (σ_t), the tangential stress (σ_θ), the rock brittleness coefficient (σ_c/σ_t), the stress coefficient (σ_θ/σ_c) and the elastic energy index (W_et), were chosen to establish the evaluation index system. The entropy-cloud model and its criterion were obtained from 209 sets of rock burst samples from underground rock projects. The sensitivity of the indicators was analyzed, and the 209 sets of rock burst samples were discriminated by this model. The discriminant results of the entropy-cloud model were compared with those of the Bayes, K-nearest neighbor (KNN) and random forest (RF) methods. The results show that the sensitivity order of the factors, from high to low, is σ_θ/σ_c, σ_θ, W_et, σ_c/σ_t, σ_t, σ_c, and that the entropy-cloud model has higher accuracy than the Bayes, KNN and RF methods.
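The entropy-weight step admits a compact sketch: indicator columns are normalized to proportions, each indicator's Shannon entropy is computed, and weights are proportional to one minus entropy, so indicators that discriminate more strongly between samples receive more weight. The small matrix below is illustrative, not the 209-sample data set.

```python
# Hedged sketch of the entropy weight method (EWM) for indicator weighting.
import numpy as np

def entropy_weights(X):
    """X: (n_samples, n_indicators) positive indicator matrix."""
    P = X / X.sum(axis=0)                                  # column-wise proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -np.nansum(P * np.log(P), axis=0) / np.log(n)  # entropy per indicator
    d = 1.0 - e                                            # degree of divergence
    return d / d.sum()                                     # normalized weights

X = np.array([[120.0, 8.0, 55.0],
              [ 90.0, 6.5, 40.0],
              [150.0, 9.0, 70.0],
              [ 60.0, 5.0, 30.0]])    # e.g. sigma_c, sigma_t, sigma_theta for 4 samples
print(entropy_weights(X))             # weights sum to 1
```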
Aiming at the real-time fluctuation and nonlinear characteristics of expressway short-term traffic flow forecasting, the parameter projection pursuit regression (PPPR) model is applied to forecast expressway traffic flow, where orthogonal Hermite polynomials are used to fit the ridge functions and the least squares method is employed to determine the polynomial weight coefficients c. To efficiently optimize the projection directions a and the number M of ridge functions of the PPPR model, the chaos cloud particle swarm optimization (CCPSO) algorithm is applied to optimize these parameters. The CCPSO-PPPR hybrid optimization model for expressway short-term traffic flow forecasting is established, in which the CCPSO algorithm optimizes the projection directions a in the inner layer while the number M of ridge functions is optimized in the outer layer. Traffic volume, weather factors and the travel date of the previous several time intervals of the road section are taken as the input influencing factors. Example forecasting and model comparison results indicate that the proposed model can obtain a better forecasting effect, and its absolute error is controlled within [-6, 6], which can meet the application requirements of expressway traffic flow forecasting.
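The inner ridge-function fit admits a short sketch: for a fixed projection direction a, a Hermite design matrix in z = a·x is built and the weight coefficients c solve a linear least-squares problem. The direction is simply fixed here; the paper's CCPSO search over a and M is outside the scope of the sketch, and all data are synthetic.

```python
# Hedged sketch of one PPPR building block: a Hermite-polynomial ridge
# function g(a.x) fitted by linear least squares for a fixed direction a.
import numpy as np
from numpy.polynomial import hermite_e as H

rng = np.random.default_rng(0)
n, p, degree = 400, 5, 4
X = rng.normal(size=(n, p))                  # traffic-flow style input factors
a = np.array([0.6, 0.3, 0.5, 0.4, 0.37])
a /= np.linalg.norm(a)                       # unit projection direction
z = X @ a                                    # projected scalar variable
y = np.sin(z) + 0.1 * rng.normal(size=n)     # target series (synthetic)

V = H.hermevander(z, degree)                 # (n, degree+1) Hermite design matrix
c, *_ = np.linalg.lstsq(V, y, rcond=None)    # least-squares weight coefficients c
y_hat = V @ c
print(np.sqrt(np.mean((y - y_hat) ** 2)))    # in-sample RMSE of the ridge fit
```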
Model reconstruction from points scanned on existing physical objects is important in a variety of situations, such as reverse engineering of mechanical products, computer vision, and recovery of biological shapes from two-dimensional contours. With the development of measuring equipment, point clouds containing more details of the object can be obtained conveniently. On the other hand, the large quantity of sampled points brings difficulties to model reconstruction. This paper first presents an algorithm to automatically reduce the number of cloud points under a given tolerance. A triangle mesh surface is then reconstructed from the simplified data set by the marching cubes algorithm. For various reasons, the reconstructed mesh usually contains unwanted holes. An approach is proposed that creates new triangles with optimized shape to cover such unexpected holes in triangle meshes. After hole filling, a watertight triangle mesh can be directly output in STL format, which is widely used in rapid prototyping manufacture. Practical examples are included to demonstrate the method.
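The reduction step can be illustrated with a simple voxel-grid scheme in which points sharing a cell of side tol are merged to their centroid, bounding how far any surviving point moves; this is a common stand-in for the paper's tolerance criterion, not its exact algorithm.

```python
# Hedged sketch of tolerance-driven point-cloud reduction via a voxel grid.
import numpy as np

def voxel_downsample(points, tol):
    keys = np.floor((points - points.min(axis=0)) / tol).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)  # voxel id per point
    inv = inv.ravel()
    counts = np.bincount(inv).astype(float)
    out = np.zeros((inv.max() + 1, 3))
    for d in range(3):                                     # centroid per occupied voxel
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out

cloud = np.random.default_rng(0).uniform(0, 1, (100_000, 3))
reduced = voxel_downsample(cloud, tol=0.05)
print(len(cloud), "->", len(reduced))                      # e.g. 100000 -> ~8000
```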
In order to reduce the amount of data storage and improve the processing capacity of the system, this paper proposes a new method for classifying data sources by combining a phase synchronization model for network clustering with the cloud model. Firstly, taking the data sources as a complex network, after the topology of the network is obtained, the cloud model of each node's data is determined by the fuzzy analytic hierarchy process (AHP). Secondly, by calculating the expectation, entropy and hyper-entropy of the cloud model, a comprehensive coupling strength is obtained and then used as the edge weight in the topology. Finally, a distribution curve is obtained by iterating the phase of each node by means of the phase synchronization model, and the classification of the data sources is thus completed. This method not only provides convenience for the storage, cleaning and compression of data, but also improves the efficiency of data analysis.
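The phase-synchronization iteration can be sketched as a Kuramoto-type update in which each node's phase is pulled toward its neighbours in proportion to the edge weights (standing in here for the cloud-model coupling strengths); the weight matrix, frequencies and step size are illustrative assumptions rather than the paper's exact dynamics.

```python
# Hedged sketch of weighted phase synchronization (Kuramoto-type):
# d(theta_i)/dt = omega_i + K * sum_j W_ij * sin(theta_j - theta_i).
import numpy as np

rng = np.random.default_rng(0)
W = np.array([[0.0, 0.9, 0.8, 0.1],
              [0.9, 0.0, 0.7, 0.1],
              [0.8, 0.7, 0.0, 0.2],
              [0.1, 0.1, 0.2, 0.0]])            # coupling strengths (edge weights)
theta = rng.uniform(0, 2 * np.pi, 4)            # initial phases of the 4 nodes
omega = np.array([0.0, 0.0, 0.0, 1.5])          # node 3 has a distinct frequency
dt, K = 0.05, 1.0

for _ in range(2000):                           # explicit Euler phase iteration
    coupling = (W * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta = (theta + dt * (omega + K * coupling)) % (2 * np.pi)

print(np.round(theta, 3))   # nodes 0-2 phase-lock into one class; node 3 drifts apart
```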
Recommender systems are an important research topic in E-commerce technology. Collaborative filtering recommendation algorithms have already been used successfully in recommender systems. However, with the development of E-commerce, the difficulty posed by the extreme sparsity of user rating data has become more and more severe. Based on traditional similarity measuring methods, we introduce the cloud model and combine it with item-based collaborative filtering recommendation algorithms. The new collaborative filtering recommendation algorithm based on items and the cloud model (IC-based CF) computes the similarity degree between items by comparing their statistical characteristics. The experimental results show that this method can improve the performance of existing item-based collaborative filtering algorithms under extreme data sparsity.
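A minimal sketch of the cloud-model similarity idea: each item's rating vector is condensed to the characteristics (Ex, En, He) by the standard backward cloud generator, and items are compared by the cosine of these characteristic vectors. The ratings below are made up, and the exact similarity formula of IC-based CF may differ from this illustration.

```python
# Hedged sketch: backward cloud generator (Ex, En, He) plus a cosine
# comparison of the resulting characteristic vectors.
import numpy as np

def backward_cloud(x):
    ex = x.mean()                                        # expectation Ex
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()    # entropy En
    he = np.sqrt(max(x.var(ddof=1) - en ** 2, 0.0))      # hyper-entropy He
    return np.array([ex, en, he])

def cloud_similarity(a, b):
    va, vb = backward_cloud(a), backward_cloud(b)
    return va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb))

item_1 = np.array([5.0, 4.0, 4.0, 5.0, 3.0])   # ratings each item received
item_2 = np.array([4.0, 4.0, 5.0, 4.0, 3.0])
item_3 = np.array([1.0, 2.0, 1.0, 2.0, 5.0])
print(cloud_similarity(item_1, item_2))         # high: similar rating statistics
print(cloud_similarity(item_1, item_3))         # lower: different statistics
```

Because the comparison works on per-item statistics rather than co-rated entries, it remains computable even when two items share few or no common raters, which is the sparsity problem the abstract targets.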
The motion of particle clouds formed by dumping dredged material into quiescent waters is studied experimentally and numerically. In the numerical model, the particle phase is modeled by a dispersion model, and turbulence is calculated by large eddy simulation. The governing equations, including the filtered Navier-Stokes equations and the mass transport equation, are solved based on the operator-splitting algorithm and an implicit cubic spline interpolation scheme. The eddy viscosity is evaluated by the modified Smagorinsky model including the buoyancy term. Comparisons of the main flow characteristics, including shape, size, average density excess, moving speed and the amount of particles deposited on the bed, between experimental and computational results show that the numerical model predicts the motion of the cloud well, from the falling stage to the spreading stage. The effects of a silt fence on the motion of the particle cloud are also investigated.