Phase transitions, as one of the most intriguing phenomena in nature, are divided into first-order phase transitions (FOPTs) and continuous ones in the current classification. While the latter shows striking phenomena of scaling and universality, the former has recently also been demonstrated to exhibit scaling and universal behavior within a mesoscopic, coarse-grained Landau-Ginzburg theory. Here we apply this theory to a microscopic model, the paradigmatic Ising model, which undergoes FOPTs between two ordered phases below its critical temperature, and unambiguously demonstrate universal scaling behavior in such FOPTs. These results open the door for extending the theory to other microscopic FOPT systems and experimentally testing them to systematically uncover their scaling and universal behavior.
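As an illustration of the field-driven FOPT discussed above, the sketch below runs a minimal Metropolis simulation of a small 2D Ising lattice held below Tc while the external field h is swept through zero; the magnetization stays on one ordered branch until it jumps to the other. The lattice size, temperature, and sweep schedule are arbitrary choices, and this is not the coarse-grained theory used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, beta, h):
    """One Metropolis sweep over a 2D Ising lattice with periodic boundaries (J = 1)."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nb = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] + spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
        dE = 2.0 * spins[i, j] * (nb + h)      # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

L, T = 16, 1.8              # temperature below the critical value Tc ~ 2.269
beta = 1.0 / T
spins = -np.ones((L, L))    # start in the "down" ordered phase
for h in np.linspace(-0.5, 0.5, 13):           # sweep the external field through zero
    for _ in range(100):                        # equilibrate at each field value
        metropolis_sweep(spins, beta, h)
    print(f"h = {h:+.2f}  magnetization m = {spins.mean():+.3f}")
```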
Long-term responses of floating structures are a major concern in their design phase, and existing approaches for addressing long-term extreme responses are cumbersome to adopt. This work develops an approach for the long-term extreme-response analysis of floating structures. A modified gradient-based retrieval algorithm, used in conjunction with the inverse first-order reliability method (IFORM), is proposed to enable convolution models in the long-term extreme analysis of structures with an analytical formula for the response amplitude operator (RAO). The proposed algorithm ensures convergence stability and iteration accuracy and exhibits higher computational efficiency than the traditional backtracking method. However, when the RAO of a general offshore structure cannot be expressed analytically, the convolution-integration method fails to function properly. A numerical discretization approach is therefore proposed for offshore structures whose RAO has no analytical expression. Through iterative discretization of environmental contours (ECs) and RAOs, a detailed procedure is given for calculating the long-term response extremes of offshore structures. The validity and accuracy of the proposed approach are tested using a floating offshore wind turbine as a numerical example. The long-term extreme heave responses for various return periods are calculated via the IFORM in conjunction with the numerical discretization approach. The environmental data corresponding to N-year structural responses lie inside the ECs, which indicates that selecting design points directly along the ECs yields conservative designs.
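For readers unfamiliar with the IFORM step, the sketch below constructs an environmental contour for a hypothetical significant-wave-height/peak-period site: points on a circle of radius β in standard normal space are mapped to physical space through an assumed Weibull marginal for Hs and a conditional lognormal for Tp. The return period, sea-state duration, and all distribution parameters are illustrative placeholders, not the paper's site data.

```python
import numpy as np
from scipy import stats

# Reliability index for a 50-year return period with 3-hour sea states (illustrative).
return_period_years = 50
sea_states_per_year = 365.25 * 8
beta = stats.norm.ppf(1.0 - 1.0 / (return_period_years * sea_states_per_year))

# Points on the circle of radius beta in standard normal (U) space.
theta = np.linspace(0.0, 2.0 * np.pi, 181)
u1, u2 = beta * np.cos(theta), beta * np.sin(theta)

# Rosenblatt transform to physical space with a placeholder joint model:
# Hs ~ Weibull(shape, scale); Tp | Hs ~ lognormal with Hs-dependent parameters.
hs = stats.weibull_min.ppf(stats.norm.cdf(u1), c=1.5, scale=3.0)
mu_tp = 1.0 + 0.3 * np.log(hs + 1.0)       # placeholder conditional median of ln(Tp)
sigma_tp = 0.2                             # placeholder conditional spread
tp = stats.lognorm.ppf(stats.norm.cdf(u2), s=sigma_tp, scale=np.exp(mu_tp))

print(f"beta = {beta:.3f}, max Hs on the contour ~ {hs.max():.2f} m")
```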
In this paper, we present a novel first-order digital ΣΔ converter tailored for digital-to-analog applications, focusing on achieving both high yield and a small silicon area. Our approach injects a substantial level of dithering noise into the input signal, strategically aimed at mitigating the spurious frequencies commonly encountered in such converters. The design is validated through simulations using a high-level simulator specialized in mixed-signal circuit analysis. The results underscore the enhanced performance of the circuit, especially in reducing spurious frequencies, highlighting its efficiency and effectiveness. The final circuit achieves an effective number of bits of 13.
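As a rough companion to the converter description, here is a minimal sketch of a first-order error-feedback digital ΣΔ truncator with additive dither, in Python rather than hardware; the word lengths, dither amplitude, and the error-feedback topology are assumptions for illustration and do not reproduce the paper's circuit.

```python
import numpy as np

def first_order_digital_sd(x, in_bits, out_bits, dither_lsb=1, seed=0):
    """First-order error-feedback digital sigma-delta truncator with additive dither.

    x          : integer input samples spanning an in_bits-wide range
    out_bits   : resolution kept at the output (the MSBs)
    dither_lsb : peak dither amplitude in output LSBs (0 disables dithering)
    """
    rng = np.random.default_rng(seed)
    shift = in_bits - out_bits            # number of LSBs dropped by the truncator
    step = 1 << shift                     # one output LSB expressed in input LSBs
    amp = (dither_lsb * step) // 2
    err = 0
    y = np.empty_like(x)
    for n, xn in enumerate(x):
        d = int(rng.integers(-amp, amp + 1)) if amp > 0 else 0
        v = int(xn) + err + d             # feed back the previous truncation error, add dither
        y[n] = (v >> shift) << shift      # keep the MSBs as the coarse output word
        err = v - int(y[n])               # first-order noise shaping of the truncation error
    return y

# A DC input is the worst case for spurious tones; dithering breaks up the periodic pattern.
x = np.full(4096, 12345, dtype=np.int64)
y = first_order_digital_sd(x, in_bits=16, out_bits=4, dither_lsb=1)
print("mean of coarse output:", y.mean(), "vs input:", x.mean())
```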
Methane generation in landfills and its inadequate management represent the largest avoidable source of anthropogenic methane today. This paper models methane production and the resources potentially recoverable from its proper management (electrical energy production and carbon credits from avoided CH4 emissions) in a municipal solid waste landfill located in Ouagadougou, Burkina Faso. The modeling was carried out using two first-order decay (FOD) models, LandGEM V3.02 and SWANA, with parameters evaluated from the characteristics of the waste admitted to the landfill and weather data for the site. In parallel, production data have been collected since 2016 for comparison with the model results. For the simulation of methane production, the SWANA model showed better consistency with the experimental data, with a coefficient of determination (R²) of 0.59, compared with 0.006 for the LandGEM model. Thus, despite the low correlation values linked to the poor consistency of the experimental data, the SWANA model reproduces methane production much better than the LandGEM V3.02 model. The poor consistency of the experimental data explains these low coefficients, which should improve with the ongoing in situ measurements. According to the SWANA model prediction, over 27 years of operation a biogas plant with 33% electrical efficiency using biogas from the Polesgo landfill would avoid 1,340 GgCO2e. The evaluation of electricity and carbon-credit revenues gave a total revenue derived from methane production of US$27.38 million at a price of US$10.5/tonne CO2e.
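To make the first-order decay (FOD) idea concrete, the sketch below implements a simplified FOD sum of the kind that LandGEM-type models are built on, with placeholder values for the decay constant k, the methane generation potential L0, and the annual waste tonnage (none of them taken from the Ouagadougou site).

```python
import numpy as np

def fod_methane(annual_waste_t, k=0.05, L0=100.0, horizon_years=40):
    """Simplified first-order decay (FOD) estimate of annual methane generation.

    annual_waste_t : waste mass landfilled each year [tonnes]
    k              : decay rate constant [1/yr]
    L0             : methane generation potential [m3 CH4 / tonne waste]
    Returns an array of annual CH4 generation [m3/yr] over the horizon.
    """
    q = np.zeros(horizon_years)
    for i, m in enumerate(annual_waste_t):       # each year's waste decays independently
        t = np.arange(horizon_years) - i         # years since placement
        q += np.where(t >= 0, k * L0 * m * np.exp(-k * t), 0.0)
    return q

waste = [120_000] * 10                           # illustrative: 120 kt/yr for 10 years
q = fod_methane(waste)
print(f"peak generation: {q.max():,.0f} m3 CH4/yr in year {q.argmax()}")
```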
This paper introduces a novel approach for parameter sensitivity evaluation and efficient slope reliability analysis based on the quantile-based first-order second-moment method (QFOSM). The core principles of the QFOSM are elucidated geometrically from the perspective of expanding ellipsoids. Based on this geometric interpretation, the QFOSM is further extended to estimate sensitivity indices and assess the significance of the various uncertain parameters involved in the slope system. The proposed method has the advantage of computational simplicity, akin to the conventional first-order second-moment method (FOSM), while providing estimation accuracy close to that of the first-order reliability method (FORM). Its performance is demonstrated with a numerical example and three slope examples. The results show that the proposed method can efficiently estimate slope reliability and simultaneously evaluate the sensitivity of the uncertain parameters. The proposed method does not involve the complex optimization or iteration required by the FORM. It provides a valuable complement to existing approximate reliability analysis methods, offering rapid sensitivity evaluation and slope reliability analysis.
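For context, the conventional FOSM that the QFOSM is compared against reduces to evaluating the limit state at the parameter means and propagating variances through a first-order Taylor expansion. The sketch below applies it to a textbook infinite-slope limit state with made-up statistics; it is not the paper's QFOSM or its slope examples.

```python
import numpy as np
from scipy.stats import norm

# Conventional FOSM: beta = mu_g / sigma_g, with mu_g evaluated at the parameter means
# and sigma_g from a first-order Taylor expansion (independent variables assumed).

def g(x):
    c, phi = x                                     # cohesion [kPa], friction angle [rad]
    gamma, z, alpha = 19.0, 5.0, np.radians(30.0)  # unit weight, depth, slope angle (fixed)
    fs = (c + gamma * z * np.cos(alpha) ** 2 * np.tan(phi)) / (gamma * z * np.sin(alpha) * np.cos(alpha))
    return fs - 1.0                                # failure when g < 0

mu = np.array([10.0, np.radians(32.0)])            # means of the random variables
sd = np.array([3.0, np.radians(3.0)])              # standard deviations

grad = np.zeros(2)
for i in range(2):                                 # central-difference gradient at the means
    dx = np.zeros(2)
    dx[i] = 1e-5
    grad[i] = (g(mu + dx) - g(mu - dx)) / 2e-5

beta = g(mu) / np.sqrt(np.sum((grad * sd) ** 2))
print(f"FOSM reliability index beta = {beta:.3f}, Pf ~ {norm.cdf(-beta):.4f}")
```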
Investigating nature-inspired applications is a perennially appealing subject for scientists, and the growing interest in structures of natural origin may be linked to their superior mechanical properties and environmental resilience. Biological composite structures with helicoidal schemes and designs have a remarkable capacity to absorb impact energy and withstand damage. However, there is a dearth of extensive study on how fiber redirection and reorientation inside the matrix of a helicoid structure influence its mechanical performance and response. The present study explores the static and transient responses of a bio-inspired helicoid laminated composite (B-iHLC) shell under explosive load using an isomorphic method. The shell rests on a viscoelastic Pasternak foundation, described by two stiffness coefficients and one damping coefficient. The equilibrium equations governing the shell dynamics are obtained from Hamilton's principle together with the modified first-order shear deformation theory, thereby obviating the need for a shear correction factor. The model and approach are validated through numerical comparisons with respected publications. The findings may be used in the construction of military and civilian infrastructure subjected to severe loads that might result in catastrophic collapse, and they serve as the foundation for further work on geometric optimization and the dynamic response of similar mechanical structures.
The use of peat for the removal of nickel from aqueous solutions has been investigated at various pH values under static conditions. The present research shows that the ability of Ni to bind to peat increases as the pH increases, and the solutions reach adsorption equilibrium rapidly. A reasonable kinetic model, first-order in nickel concentration, has been developed and fitted to the adsorption of nickel(II) onto peat; the first-order model provides a good correlation to the experimental data. The characteristic parameters of the Langmuir isotherm were determined at various temperatures. The relationship between kinetics and equilibrium isotherms was established through the forward and backward rate constants, k1 and k2, and the equilibrium constant, K.
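As a worked illustration of the kind of first-order kinetics and Langmuir analysis described above, the sketch below fits a pseudo-first-order (Lagergren-type) rate law and a Langmuir isotherm to made-up uptake data; the data and starting guesses are placeholders, not the peat measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative data only (not the paper's measurements).
t = np.array([1, 2, 5, 10, 20, 40, 60.0])            # contact time [min]
q_t = np.array([2.1, 3.4, 5.0, 6.1, 6.7, 6.9, 7.0])  # uptake at time t [mg/g]
Ce = np.array([5, 10, 20, 40, 80.0])                  # equilibrium concentration [mg/L]
q_e = np.array([4.1, 6.3, 8.6, 10.4, 11.5])           # equilibrium uptake [mg/g]

def lagergren(t, qe, k1):          # pseudo-first-order kinetics: q(t) = qe * (1 - exp(-k1 t))
    return qe * (1.0 - np.exp(-k1 * t))

def langmuir(c, qm, KL):           # Langmuir isotherm: q = qm * KL * C / (1 + KL * C)
    return qm * KL * c / (1.0 + KL * c)

(qe_fit, k1), _ = curve_fit(lagergren, t, q_t, p0=[7.0, 0.1])
(qm, KL), _ = curve_fit(langmuir, Ce, q_e, p0=[12.0, 0.05])
print(f"first-order rate constant k1 = {k1:.3f} 1/min, qe = {qe_fit:.2f} mg/g")
print(f"Langmuir qm = {qm:.2f} mg/g, KL = {KL:.3f} L/mg")
```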
In this article, we use the spin coherent state transformation and the ground-state variational method to calculate the ground state theoretically. To examine the influence of the atom-atom interaction on the ground-state properties of the extended Dicke model, the mean photon number, the scaled atomic population, and the average ground-state energy are displayed. Treating the atom-atom interaction within self-consistent field theory, we find that the system undergoes a first-order quantum phase transition from the normal phase to the superradiant phase, whereas without the atom-atom interaction it undergoes the famous Dicke-type second-order quantum phase transition. Meanwhile, the atom-atom interaction shifts the phase transition point to a lower atom-photon collective coupling strength.
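For orientation, the sketch below evaluates the standard mean-field expressions for the ordinary Dicke model (no atom-atom interaction), which reproduce the second-order normal-to-superradiant transition that the paper contrasts with the first-order transition of the extended model; the frequencies and coupling range are arbitrary, and the extended model itself is not implemented here.

```python
import numpy as np

# Mean-field ground state of the *standard* Dicke model (no atom-atom interaction).
omega, omega0 = 1.0, 1.0                    # field and atomic transition frequencies
lam = np.linspace(0.1, 1.5, 8)              # collective atom-photon coupling strengths
lam_c = np.sqrt(omega * omega0) / 2.0       # critical coupling of the second-order transition

mu = (lam_c / np.maximum(lam, lam_c)) ** 2  # mu = 1 in the normal phase, (lam_c/lam)^2 above it
photons_per_atom = (lam / omega) ** 2 * (1.0 - mu ** 2)   # mean photon number <a+a>/N
inversion = -mu / 2.0                       # scaled atomic population <Jz>/N

for l, n_ph, jz in zip(lam, photons_per_atom, inversion):
    print(f"lambda = {l:.2f}   <a+a>/N = {n_ph:.3f}   <Jz>/N = {jz:+.3f}")
```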
Using Euler's first-order explicit (EE) method and the peridynamic differential operator (PDDO) to discretize the time and internal crystal-size derivatives, respectively, an Euler first-order explicit method–peridynamic differential operator (EE–PDDO) scheme was obtained for solving the one-dimensional population balance equation in crystallization. Four different conditions during crystallization were studied: size-independent growth and size-dependent growth in a batch process, and nucleation with size-independent growth and nucleation with size-dependent growth in a continuous process. The high accuracy of the EE–PDDO method was confirmed by comparison with numerical results obtained using the second-order upwind and HR-van methods. The method is characterized by non-oscillation and high accuracy, especially for discontinuous and sharp crystal size distributions. The stability of the EE–PDDO method, the choice of weight function in the PDDO, and the optimal time step are also discussed.
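For reference, the sketch below implements the simplest baseline against which such schemes are judged: Euler explicit time stepping combined with a first-order upwind difference in crystal size for size-independent growth. The PDDO spatial operator is not reproduced, and the grid, growth rate, and initial distribution are illustrative.

```python
import numpy as np

# Euler explicit in time + first-order upwind in crystal size for dn/dt + G * dn/dL = 0
# (size-independent growth, no nucleation). Parameters are illustrative only.
G = 1.0                                      # growth rate [um/s]
L = np.linspace(0.0, 200.0, 401)             # crystal-size grid [um]
dL = L[1] - L[0]
dt = 0.4 * dL / G                            # CFL-limited time step for stability
n = np.exp(-0.5 * ((L - 40.0) / 8.0) ** 2)   # initial crystal size distribution

for _ in range(int(60.0 / dt)):              # advance 60 s of growth
    n[1:] = n[1:] - G * dt / dL * (n[1:] - n[:-1])   # upwind difference (G > 0)
    n[0] = 0.0                               # no nucleation at the smallest size

print("peak of the CSD has moved to L ~", L[np.argmax(n)], "um (expected ~100 um)")
```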
This study was performed in two phases of work. In the first stage, four conventional first-order flotation kinetics models were fitted to the measured recovery data and the best model was selected. In the second stage, the influence of pH, solids concentration, water chemistry, and collector dosage on the kinetics parameters, including the flotation rate constant and ultimate recovery, was investigated. The results indicated that the perfectly mixed reactor model and the Kelsall model gave the best and the weakest fits to the experimental data, respectively. The flotation rate constant and ultimate recovery were strongly affected by the chemical factors investigated, especially water quality. The flotation rate constant decreased with increasing solids content, while the ultimate recovery increased to a certain value and thereafter decreased. The highest values of the flotation rate constant and ultimate recovery were obtained at collector dosages of 30 and 40 g/t, respectively.
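As a minimal example of the first stage, the sketch below fits the classical first-order flotation model R(t) = R_inf(1 − e^(−kt)) to made-up time-recovery data with scipy; the other three models from the study (including the perfectly mixed reactor and Kelsall forms) are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative time-recovery data (not the study's measurements).
t = np.array([0.5, 1, 2, 4, 8, 12.0])        # flotation time [min]
R = np.array([22, 38, 58, 74, 84, 86.0])     # cumulative recovery [%]

def first_order(t, R_inf, k):                # classical first-order kinetics model
    return R_inf * (1.0 - np.exp(-k * t))

(R_inf, k), _ = curve_fit(first_order, t, R, p0=[90.0, 0.5])
print(f"ultimate recovery R_inf = {R_inf:.1f} %, rate constant k = {k:.2f} 1/min")
```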
BACKGROUND Rebleeding after recovery from esophagogastric variceal bleeding (EGVB) is a severe complication that is associated with high rates of both incidence and mortality. Despite its clinical importance, recognized prognostic models that can effectively predict esophagogastric variceal rebleeding in patients with liver cirrhosis are lacking. AIM To construct and externally validate a reliable prognostic model for predicting the occurrence of esophagogastric variceal rebleeding. METHODS This study included 477 EGVB patients across 2 cohorts: the derivation cohort (n=322) and the validation cohort (n=155). The primary outcome was rebleeding events within 1 year. The least absolute shrinkage and selection operator was applied for predictor selection, and multivariate Cox regression analysis was used to construct the prognostic model. Internal validation was performed with bootstrap resampling. We assessed the discrimination, calibration and accuracy of the model, and performed patient risk stratification. RESULTS Six predictors, including albumin and aspartate aminotransferase concentrations, white blood cell count, and the presence of ascites, portal vein thrombosis, and bleeding signs, were selected for the rebleeding event prediction following endoscopic treatment (REPET) model. In predicting rebleeding within 1 year, the REPET model exhibited a concordance index of 0.775 and a Brier score of 0.143 in the derivation cohort, alongside 0.862 and 0.127 in the validation cohort. Furthermore, the REPET model revealed a significant difference in rebleeding rates (P<0.01) between low-risk patients and intermediate- to high-risk patients in both cohorts. CONCLUSION We constructed and validated a new prognostic model for variceal rebleeding with excellent predictive performance, which will improve the clinical management of rebleeding in EGVB patients.
In this paper, we not only construct the confidence region for the parameters of a mixed integer-valued autoregressive process using the empirical likelihood method, but also establish the empirical log-likelihood ratio statistic and obtain its limiting distribution. Then, via simulation studies, we report coverage probabilities for the parameters of interest. The results show that the empirical likelihood method performs very well.
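As background, the sketch below simulates a plain INAR(1) process with binomial thinning and Poisson innovations and recovers the thinning parameter from the lag-1 autocorrelation; the mixed process and the empirical-likelihood construction studied in the paper are not implemented, and all parameter values are arbitrary.

```python
import numpy as np

# INAR(1) with binomial thinning: X_t = alpha o X_{t-1} + eps_t, eps_t ~ Poisson(lam).
rng = np.random.default_rng(1)
alpha, lam, n = 0.6, 2.0, 5000

x = np.empty(n, dtype=int)
x[0] = rng.poisson(lam / (1 - alpha))              # start near the stationary mean
for t in range(1, n):
    survivors = rng.binomial(x[t - 1], alpha)      # binomial thinning alpha o X_{t-1}
    x[t] = survivors + rng.poisson(lam)            # add integer-valued innovations

alpha_hat = np.corrcoef(x[:-1], x[1:])[0, 1]       # lag-1 autocorrelation estimates alpha
print(f"true alpha = {alpha}, moment estimate = {alpha_hat:.3f}")
```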
This study aimed to prepare landslide susceptibility maps for the Pithoragarh district in Uttarakhand, India, using advanced ensemble models that combine Radial Basis Function Networks (RBFN) with three ensemble learning techniques: DAGGING (DG), MULTIBOOST (MB), and ADABOOST (AB). This combination resulted in three distinct ensemble models: DG-RBFN, MB-RBFN, and AB-RBFN. Additionally, a traditional weighted method, Information Value (IV), and a benchmark machine learning (ML) model, the Multilayer Perceptron Neural Network (MLP), were employed for comparison and validation. The models were developed using ten landslide conditioning factors: slope, aspect, elevation, curvature, land cover, geomorphology, overburden depth, lithology, distance to rivers, and distance to roads. These factors were used to predict the output variable, the probability of landslide occurrence. Statistical analysis of the models' performance indicated that the DG-RBFN model, with an Area Under the ROC Curve (AUC) of 0.931, outperformed the other models; the AB-RBFN model achieved an AUC of 0.929, the MB-RBFN model 0.913, and the MLP model 0.926. These results suggest that the advanced ensemble ML model DG-RBFN was more accurate than the traditional statistical model, the single MLP model, and the other ensemble models in preparing trustworthy landslide susceptibility maps, thereby enhancing land use planning and decision-making.
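To illustrate the ensemble-plus-AUC workflow in code, the sketch below trains a bagged neural-network classifier on synthetic stand-ins for the ten conditioning factors and scores it with the area under the ROC curve; scikit-learn has no RBFN, so an MLP substitutes for the base learner, and bagging substitutes for the DAGGING/MULTIBOOST/ADABOOST wrappers.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the ten conditioning factors and landslide/non-landslide labels.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# A bagged ensemble of small neural networks as a stand-in for the RBFN-based ensembles.
base = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ensemble = BaggingClassifier(base, n_estimators=10, random_state=0)
ensemble.fit(X_tr, y_tr)

auc = roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1])
print(f"ensemble AUC on held-out data: {auc:.3f}")
```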
Conducting predictability studies is essential for tracing the sources of forecast errors, which not only leads to the improvement of observation and forecasting systems but also enhances the understanding of weather and climate phenomena. In the past few decades, dynamical numerical models have been the primary tools for predictability studies, achieving significant progress. Nowadays, with advances in artificial intelligence (AI) techniques and the accumulation of vast meteorological data, modeling weather and climate events with modern data-driven approaches is becoming mainstream, with FourCastNet, Pangu-Weather, and GraphCast as successful pioneers. In this perspective article, we suggest that AI models should not be limited to forecasting but should be extended to predictability studies, leveraging AI's advantages of high efficiency and self-contained optimization modules. To this end, we first argue that AI models should possess high simulation capability with fine spatiotemporal resolution for two kinds of predictability studies; AI models whose simulation capability is comparable to that of numerical models can be regarded as providing data-driven solutions to partial differential equations. We then highlight several specific predictability issues with well-determined nonlinear optimization formulations that can be studied effectively with AI models and hold significant scientific value. In addition, we advocate incorporating AI models into the synergistic cycle of the cognition–observation–model paradigm. Comprehensive predictability studies have the potential to transform “big data” into “big and better data” and to shift the focus from “AI for forecasts” to “AI for science”, ultimately advancing the development of the atmospheric and oceanic sciences.
With the development of smart cities and smart technologies, parks, as functional units of the city, are facing a smart transformation. The development of smart parks can help address the challenges of integrating technology within urban spaces and serve as a testbed for exploring smart city planning and governance models. Information models facilitate the effective integration of technology into space, and Building Information Modeling (BIM) and City Information Modeling (CIM) have been widely used in urban construction. However, existing information models have limitations when applied to parks, so an information model suited to the park is needed. This paper first traces the evolution of the smart transformation of parks, reviews the global landscape of smart park development, and identifies key trends and persistent challenges. Addressing the particularities of parks, the concept of Park Information Modeling (PIM) is proposed. PIM leverages smart technologies such as artificial intelligence, digital twins, and collaborative sensing to help form a ‘space-technology-system’ smart structure, enabling systematic management of diverse park spaces, addressing the lack of park-level information models, and aiming to achieve scale articulation between BIM and CIM. Finally, through a detailed top-level design case study of the Nanjing Smart Education Park in China, this paper illustrates how the PIM concept translates into practice, showcasing its potential to provide smart management tools for park managers and enhanced services for park stakeholders, although further empirical validation is required.
To examine the similarities and differences in the cavity evolution, wetting, and dynamics of a high-speed, oblique water-entry projectile at different positive angles of attack, a comparative analysis has been conducted based on the numerical results of two mathematical models: the rigid-body model and the fluid-structure interaction model. In addition, the applicable scope of the two methods and the structural response characteristics of the projectile have been investigated. Our results demonstrate that: (1) The impact loads and angular motion of the projectile in the rigid-body method are more likely to exhibit periodic variations due to the periodic tail slap, and its applicable range of positive angles of attack is about α<2°. (2) When the projectile undergoes significant wetting, a strong coupling effect is observed among wetting, structural deformation, and projectile motion. For the projectile shape considered, when the projectile bends, the final wetting position is Part B (the cylinder of the body); once this phenomenon occurs, the projectile ballistics become completely unstable. (3) The force exerted on the lower surface of the projectile by wetting is the primary cause of the destabilization of the projectile trajectory and of structural deformation failure. Bending deformation is most likely to appear at the junction of Part C (the cone of the body) and Part D (the tail). The safe angles of attack for projectile stability are found to be about α≤2°.
We propose an integrated method of data-driven and mechanism models for well logging formation evaluation, focusing on the prediction of reservoir parameters such as porosity and water saturation. Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas. However, with the increasing complexity of geological conditions in this industry, there is a growing demand for improved accuracy in reservoir parameter prediction, leading to higher costs for manual interpretation. Conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters, and they suffer from low interpretation efficiency, strong subjectivity, and suitability only for idealized conditions. Applying artificial intelligence to the interpretation of logging data provides a new solution to these problems and is expected to improve both the accuracy and the efficiency of interpretation. If large, high-quality datasets exist, data-driven models can reveal relationships of arbitrary complexity. Nevertheless, constructing sufficiently large logging datasets with reliable labels remains challenging, which makes it difficult to apply data-driven models effectively to logging data interpretation. Furthermore, data-driven models often act as “black boxes” that neither explain their predictions nor guarantee compliance with basic physical constraints. This paper proposes a machine learning method with strong physical constraints that integrates mechanism and data-driven models: prior knowledge of logging data interpretation is embedded into the machine learning model through the network structure, loss function, and optimization algorithm. We employ the Physically Informed Auto-Encoder (PIAE) to predict porosity and water saturation; it can be trained without labeled reservoir parameters using self-supervised learning techniques. This approach achieves automated interpretation and generalizes across diverse datasets.
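The sketch below conveys the mechanism-plus-data idea in a heavily simplified form, not the paper's PIAE: a small encoder maps synthetic logging curves to porosity and water saturation, and a fixed physics decoder rebuilds the density and resistivity logs through a linear density mixing law and Archie's equation, so training needs no labeled reservoir parameters. The network architecture, Archie constants, and synthetic logs are all illustrative assumptions.

```python
import torch
import torch.nn as nn

RHO_MA, RHO_F = 2.65, 1.0                          # matrix and fluid densities [g/cc]
A_ARCH, M_EXP, N_EXP, RW = 1.0, 2.0, 2.0, 0.05     # Archie constants, formation-water resistivity

class Encoder(nn.Module):
    def __init__(self, n_logs=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_logs, 32), nn.ReLU(),
                                 nn.Linear(32, 2), nn.Sigmoid())
    def forward(self, x):
        out = self.net(x)
        phi = 0.4 * out[:, :1]                     # porosity constrained to (0, 0.4)
        sw = out[:, 1:]                            # water saturation constrained to (0, 1)
        return phi, sw

def physics_decoder(phi, sw):
    rho_b = RHO_MA * (1.0 - phi) + RHO_F * phi                                        # density mixing law
    rt = A_ARCH * RW / (phi.clamp(min=1e-3) ** M_EXP * sw.clamp(min=1e-3) ** N_EXP)   # Archie's equation
    return rho_b, torch.log10(rt)

# Synthetic logs: columns [GR, neutron, density, log10(resistivity)]; only the last two
# enter the physics-based reconstruction loss in this toy example.
x = torch.rand(256, 4)
x[:, 2] = 2.0 + 0.6 * torch.rand(256)              # density log [g/cc]
x[:, 3] = 2.0 * torch.rand(256)                    # log10 resistivity [ohm.m]

enc = Encoder()
opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
for step in range(200):                            # self-supervised training loop
    phi, sw = enc(x)
    rho_hat, logrt_hat = physics_decoder(phi, sw)
    loss = nn.functional.mse_loss(rho_hat, x[:, 2:3]) + nn.functional.mse_loss(logrt_hat, x[:, 3:4])
    opt.zero_grad(); loss.backward(); opt.step()

print("final physics-reconstruction loss:", float(loss))
```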
Large language models (LLMs) have undergone significant expansion and have been increasingly integrated across various domains. Notably, in the realm of robot task planning, LLMs harness their advanced reasoning and language comprehension capabilities to formulate precise and efficient action plans from natural language instructions. However, for embodied tasks, where robots interact with complex environments, text-only LLMs often face challenges due to a lack of compatibility with robotic visual perception. This study provides a comprehensive overview of the emerging integration of LLMs and multimodal LLMs into various robotic tasks. Additionally, we propose a framework that utilizes multimodal GPT-4V to enhance embodied task planning through the combination of natural language instructions and robot visual perceptions. Our results, based on diverse datasets, indicate that GPT-4V effectively enhances robot performance in embodied tasks. This extensive survey and evaluation of LLMs and multimodal LLMs across a variety of robotic tasks enriches the understanding of LLM-centric embodied intelligence and provides forward-looking insights toward bridging the gap in Human-Robot-Environment interaction.