This study presents a data-driven approach to predict tailplane aerodynamics in icing conditions, supporting the ice-tolerant design of aircraft horizontal stabilizers. The core of this work is a low-cost predictive model for analyzing icing effects on swept tailplanes. The method relies on a multi-fidelity data gathering campaign, enabling seamless integration into multidisciplinary aircraft design workflows. A dataset of iced airfoil shapes was generated using 2D inviscid methods across various flight conditions. High-fidelity CFD simulations were conducted on both clean and iced geometries, forming a multidimensional aerodynamic database. This 2D database feeds a nonlinear vortex lattice method to estimate 3D aerodynamic characteristics, following a 'quasi-3D' approach. The resulting reduced-order model delivers fast aerodynamic performance estimates of iced tailplanes. To demonstrate its effectiveness, optimal ice-tolerant tailplane designs were selected from a range of feasible shapes based on a reference transport aircraft. The analysis validates the model's reliability, accuracy, and limitations concerning 3D ice shapes and aerodynamic characteristics. Most notably, the model offers near-zero computational cost compared to high-fidelity simulations, making it a valuable tool for efficient aircraft design.
A corrosion defect is recognized as one of the most severe phenomena for high-pressure pipelines, especially those that have been in service for a long time. The finite-element method and empirical formulas are therefore used for the strength prediction of such corroded pipes. However, the finite-element method is time-consuming, and empirical formulas have a limited range of application. To improve strength prediction, this paper investigates the burst pressure of line pipes with a single corrosion defect subjected to internal pressure based on data-driven methods. Three supervised ML (machine learning) algorithms, including the ANN (artificial neural network), the SVM (support vector machine), and LR (linear regression), are deployed to train models on experimental data. Data analysis is first conducted to determine proper pipe features for training. Hyperparameter tuning to control the learning process is then performed to fit the best strength models for corroded pipelines. Among all the proposed data-driven models, the ANN model with three neural layers has the highest training accuracy but also presents the largest variance. The SVM model provides both high training accuracy and high validation accuracy. The LR model has the best performance in terms of generalization ability. These models can serve as surrogate models, updated by transfer learning with newly acquired data in future research, facilitating sustainable and intelligent decision-making for corroded pipelines.
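A minimal sketch of how such a three-model comparison could be set up with scikit-learn is given below. The data file, feature names, and hyperparameter grids are illustrative assumptions, not the study's actual data or settings.

```python
# Hedged sketch: train ANN, SVM, and LR surrogates for burst pressure.
# The CSV columns and hyperparameter grids are assumptions for illustration.
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

df = pd.read_csv("corroded_pipe_tests.csv")          # hypothetical dataset
X = df[["diameter", "wall_thickness", "defect_depth",
        "defect_length", "yield_strength"]]
y = df["burst_pressure"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "ANN": GridSearchCV(
        make_pipeline(StandardScaler(),
                      MLPRegressor(max_iter=5000, random_state=0)),
        {"mlpregressor__hidden_layer_sizes": [(16, 16, 16), (32, 32, 32)]},
        cv=5),
    "SVM": GridSearchCV(
        make_pipeline(StandardScaler(), SVR()),
        {"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1]},
        cv=5),
    "LR": make_pipeline(StandardScaler(), LinearRegression()),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)                            # hyperparameter tuning via CV grid search
    print(name, "R^2 on held-out data:", model.score(X_te, y_te))
```

Comparing held-out scores in this way mirrors the paper's observation that training accuracy alone can be misleading (the ANN fits best but varies most), which is why validation accuracy and generalization are reported separately.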
Cable-stayed bridges have been widely used in high-speed railway infrastructure. The accurate determination of a cable's representative temperature is vital during the intricate processes of design, construction, and maintenance of cable-stayed bridges. However, the representative temperatures of stay cables are not specified in existing design codes. To address this issue, this study investigates the distribution of the cable temperature and determines its representative value. First, an experimental investigation spanning a period of one year was carried out near the bridge site to obtain temperature data. Statistical analysis of the measured data reveals that the temperature distribution is generally uniform along the cable cross-section, without a significant temperature gradient. Then, based on the limited data, the Monte Carlo, gradient boosted regression trees (GBRT), and univariate linear regression (ULR) methods are employed to predict the cable's representative temperature throughout the service life. These methods effectively overcome the limitations of insufficient monitoring data and accurately predict the representative temperature of the cables. However, each method has its own advantages and limitations in terms of applicability and accuracy. A comprehensive evaluation of the performance of these methods is conducted, and practical recommendations are provided for their application. The proposed methods and representative temperatures provide a sound basis for the operation and maintenance of in-service long-span cable-stayed bridges.
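For the data-driven branch of this comparison, a short sketch of fitting a GBRT model against a univariate linear baseline on monitored temperature records is shown below. The file name, predictor columns, and hyperparameters are assumptions made for illustration only.

```python
# Hedged sketch: compare GBRT and univariate linear regression for predicting
# a cable's representative temperature from weather/monitoring inputs.
# Column names and the data file are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("cable_temperature_monitoring.csv")   # hypothetical monitoring data
X = df[["air_temperature", "solar_radiation", "hour_of_day"]]
x_uni = df[["air_temperature"]]                          # single predictor for ULR
y = df["cable_representative_temperature"]

gbrt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                 max_depth=3, random_state=0)
ulr = LinearRegression()

print("GBRT cross-validated R^2:", cross_val_score(gbrt, X, y, cv=5).mean())
print("ULR  cross-validated R^2:", cross_val_score(ulr, x_uni, y, cv=5).mean())
```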
When assessing seismic liquefaction potential with data-driven models, addressing the uncertainties in model construction, in the interpretation of cone penetration test (CPT) data, and in the decision threshold is crucial for avoiding biased data selection, ameliorating overconfident models, and remaining flexible to varying practical objectives, especially when the training and testing data are not identically distributed. A workflow leveraging Bayesian methodology was proposed to address these issues. Employing a multi-layer perceptron (MLP) as the foundational model, this approach was benchmarked against empirical methods and advanced algorithms for its simplicity, accuracy, and resistance to overfitting. The analysis revealed that, while MLP models optimized via the maximum a posteriori algorithm suffice for straightforward scenarios, Bayesian neural networks show great potential for preventing overfitting. Additionally, integrating decision thresholds through various evaluative principles offers insight for challenging decisions. Two case studies demonstrate the framework's capacity for nuanced interpretation of in situ data, employing a model committee for a detailed evaluation of liquefaction potential via Monte Carlo simulations and basic statistics. Overall, the proposed step-by-step workflow for analyzing seismic liquefaction incorporates multifold testing and real-world data validation, showing improved robustness against overfitting and greater versatility in addressing practical challenges. This research contributes to the seismic liquefaction assessment field by providing a structured, adaptable methodology for accurate and reliable analysis.
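As a rough illustration of the model-committee and decision-threshold ideas, the sketch below uses a bootstrap ensemble of small MLP classifiers as a simple stand-in for the Bayesian treatment described above; it is not the paper's Bayesian neural network, and the CPT feature names, data file, and threshold value are assumptions.

```python
# Hedged sketch: bootstrap committee of MLP classifiers as a stand-in for a
# Bayesian model committee; feature names, file, and threshold are assumptions.
import numpy as np
import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.utils import resample

df = pd.read_csv("cpt_liquefaction_cases.csv")        # hypothetical case history data
X = df[["qc", "fs", "sigma_v", "pga", "magnitude"]].to_numpy()
y = df["liquefied"].to_numpy()

committee = []
for seed in range(30):                                 # 30 committee members
    Xb, yb = resample(X, y, random_state=seed)         # bootstrap resample
    clf = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                        random_state=seed).fit(Xb, yb)
    committee.append(clf)

x_new = X[:1]                                          # a site to evaluate
probs = np.array([m.predict_proba(x_new)[0, 1] for m in committee])
print("mean P(liquefaction):", probs.mean(), "+/-", probs.std())

threshold = 0.3                                        # objective-dependent decision threshold
print("flagged as liquefiable:", probs.mean() > threshold)
```

The spread of committee probabilities plays the role of the basic statistics mentioned in the abstract, and the threshold can be moved depending on whether false negatives or false positives are costlier for the practical objective at hand.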
Hydraulic fracturing technology has achieved remarkable results in improving the production of tight gas reservoirs, but its effectiveness is governed by the joint action of multiple, complex factors. Traditional analysis methods have limitations in dealing with these complex and interrelated factors, and it is difficult for them to fully reveal the actual contribution of each factor to production. Machine learning-based methods explore the complex mapping relationships within large amounts of data to provide data-driven insights into the key factors driving production. In this study, a data-driven PCA-RF-VIM (Principal Component Analysis-Random Forest-Variable Importance Measures) approach to feature-importance analysis is proposed to identify the key factors driving post-fracturing production. Four types of parameters, including log parameters, geological and reservoir physical parameters, hydraulic fracturing design parameters, and reservoir stimulation parameters, were input into the PCA-RF-VIM model. The model was trained using 6-fold cross-validation and grid search, and the relative importance ranking of each factor was finally obtained. To verify the validity of the PCA-RF-VIM model, a consolidated model that uses three other independent data-driven methods (the Pearson correlation coefficient, the RF feature-importance method, and the XGBoost feature-importance method) is applied for comparison. A comparison of the two models shows that they contain almost the same parameters in the top ten, with only minor differences in one parameter. In combination with the reservoir characteristics, the reasonableness of the PCA-RF-VIM model is verified, and its importance ranking of the parameters is more consistent with the reservoir characteristics of the study area. Ultimately, ten parameters are selected as the controlling factors that potentially influence post-fracturing gas production, as the combined importance of these top ten parameters in driving natural gas production is 91.95%. Identifying these ten controlling factors provides engineers with new insight into reservoir selection for fracturing stimulation and fracturing parameter optimization to improve fracturing efficiency and productivity.
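The core of the importance-ranking step can be sketched with a random forest tuned by 6-fold grid search; the PCA stage of the PCA-RF-VIM model is omitted here for brevity, and the dataset, column names, and parameter grid are assumptions rather than the study's actual configuration.

```python
# Hedged sketch: random-forest variable-importance ranking with 6-fold
# cross-validated grid search; the PCA step is omitted and all names are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

df = pd.read_csv("fractured_wells.csv")                # hypothetical well dataset
X = df.drop(columns=["post_frac_gas_production"])      # log, geological, design, stimulation parameters
y = df["post_frac_gas_production"]

search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    {"n_estimators": [200, 500], "max_depth": [None, 8, 16]},
    cv=6)                                              # 6-fold cross-validation, as in the study
search.fit(X, y)

importance = pd.Series(search.best_estimator_.feature_importances_,
                       index=X.columns).sort_values(ascending=False)
print(importance.head(10))                             # candidate top-ten controlling factors
print("combined importance of top ten:", importance.head(10).sum())
```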
To address the insufficient integration of performance evaluation and contextual analysis in traditional architectural design, this paper proposes a design workflow that combines data-driven and performance-driven approaches, establishing a comprehensive operational pathway from typology selection and design generation to performance assessment. Using Yanshen Ancient Town, located in a cold region, as the study area, the research evaluates 18 traditional courtyard types and 8 brick kiln courtyard types. Benchmark models are selected based on the combined performance of the PET (Physiological Equivalent Temperature) and MRT (Mean Radiant Temperature) indices. Subsequently, multiple performance indicators, including indoor and outdoor thermal comfort, indoor illuminance, and building energy consumption, are integrated into the analysis. Using a genetic algorithm, Pareto optimal solutions that meet the performance requirements are iteratively optimized and filtered. Based on the learning rates and various evaluation indicators, XGBoost is ultimately selected to classify and predict overall building performance. Results indicate that the model achieves an average prediction accuracy of 83.6%. Additionally, SHAP analysis of the independent variables in the algorithm reveals distinct influencing trends under different performance labels. The workflow demonstrates the feasibility of incorporating performance prediction in the early design stage of village courtyards, significantly enhancing the efficiency of feedback and follow-up between design decision-making and performance evaluation.
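The classification-plus-explanation step described above can be sketched as an XGBoost classifier followed by a SHAP summary; the design variables, labels, and hyperparameters below are invented placeholders, not the study's actual feature set.

```python
# Hedged sketch: XGBoost classification of overall performance labels with a
# SHAP summary of design-variable influence; names and values are assumptions.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

df = pd.read_csv("courtyard_design_samples.csv")       # hypothetical generated designs
X = df[["orientation_deg", "courtyard_aspect_ratio",
        "window_wall_ratio", "eave_height"]]            # numeric design variables (assumed)
y = df["performance_label"]                             # integer-coded classes, e.g. 0/1/2

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

explainer = shap.TreeExplainer(model)                   # per-variable influence trends
shap_values = explainer.shap_values(X_te)
shap.summary_plot(shap_values, X_te)
```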
We propose an integrated method of data-driven and mechanism models for well logging formation evaluation, explicitly focusing on predicting reservoir parameters such as porosity and water saturation. Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas. However, with the increasing complexity of geological conditions in this industry, there is a growing demand for improved accuracy in reservoir parameter prediction, leading to higher costs associated with manual interpretation. Conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters, which suffer from low interpretation efficiency and strong subjectivity and are suitable only for ideal conditions. The application of artificial intelligence to the interpretation of logging data provides a new solution to the problems of traditional methods and is expected to improve the accuracy and efficiency of the interpretation. If large and high-quality datasets exist, data-driven models can reveal relationships of arbitrary complexity. Nevertheless, constructing sufficiently large logging datasets with reliable labels remains challenging, making it difficult to apply data-driven models effectively in logging data interpretation. Furthermore, data-driven models often act as “black boxes” without explaining their predictions or ensuring compliance with primary physical constraints. This paper proposes a machine learning method with strong physical constraints by integrating mechanism and data-driven models. Prior knowledge of logging data interpretation is embedded into machine learning through the network structure, loss function, and optimization algorithm. We employ a Physically Informed Auto-Encoder (PIAE) to predict porosity and water saturation, which can be trained without labeled reservoir parameters using self-supervised learning techniques. This approach effectively achieves automated interpretation and facilitates generalization across diverse datasets.
For control systems with unknown model parameters, this paper proposes a data-driven iterative learning method for fault estimation. First, input and output data from the system under fault-free conditions are collected. By applying orthogonal triangular decomposition and singular value decomposition, a data-driven realization of the system's kernel representation is derived; based on this representation, a residual generator is constructed. Then, the actuator fault signal is estimated online by analyzing the system's dynamic residual, and an iterative learning algorithm is introduced to continuously optimize the residual-based performance function, thereby enhancing estimation accuracy. The proposed method achieves actuator fault estimation without requiring knowledge of model parameters, eliminating the time-consuming system modeling process and allowing operators to focus on system optimization and decision-making. Compared with existing fault estimation methods, the proposed method demonstrates superior transient performance, steady-state performance, and real-time capability, and it reduces the need for manual intervention and lowers operational complexity. Finally, experimental results on a mobile robot verify the effectiveness and advantages of the method.
The permanent magnet synchronous motor (PMSM) is widely used in alternating current servo systems because it provides high efficiency, high power density, and a wide speed regulation range, and servo systems are placing ever higher demands on control performance. The model predictive control (MPC) algorithm is emerging as a potential high-performance motor control algorithm due to its capability of handling multiple-input and multiple-output variables and imposed constraints. For MPC used in the PMSM control process, a nonlinear disturbance caused by changes in electromagnetic parameters or by load disturbance may lead to a mismatch between the nominal model and the controlled object, which causes prediction error and thus affects the dynamic stability of the control system. This paper proposes a data-driven MPC strategy in which historical data within an appropriate range are utilized to eliminate the impact of parameter mismatch and further improve the control performance. The stability of the proposed algorithm is proved, and simulations demonstrate its feasibility. Its superiority over the classical MPC strategy has also been verified.
To tackle the difficulty that point prediction cannot quantify the reliability of landslide displacement prediction, a data-driven combination-interval prediction method (CIPM) is proposed, based on copula and variational mode decomposition combined with a kernel-based extreme learning machine optimized by the whale optimization algorithm (VMD-WOA-KELM). First, the displacement is decomposed by VMD into three IMF components and a residual component with different fluctuation characteristics. The key impact factors of each IMF component are selected according to the copula model, and the corresponding WOA-KELM is established to conduct point prediction. Subsequently, the parametric method (PM) and non-parametric method (NPM) are used to estimate the probability density function (PDF) of the prediction error of each component, from which the prediction interval (PI) at the 95% confidence level is obtained. By means of the differential evolution (DE) algorithm, a weighted combination model based on the PIs is built to construct the combination interval (CI). Finally, the CIs of each component are summed to generate the total PI. A comparative case study shows that the CIPM constructs landslide displacement PIs with superior performance.
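A simple way to see how a non-parametric 95% prediction interval can be built from point-prediction errors is sketched below; the residuals and forecasts are random placeholders, and this illustrates only the non-parametric route, not the full PM/NPM combination weighted by differential evolution.

```python
# Hedged sketch: non-parametric 95% prediction interval from the empirical
# quantiles of point-prediction errors; all arrays are placeholder values.
import numpy as np

errors = np.random.default_rng(0).normal(0.0, 2.5, size=200)   # past residuals (mm), placeholder
point_forecast = np.array([125.3, 127.8, 130.1])                # placeholder forecasts (mm)

lo, hi = np.quantile(errors, [0.025, 0.975])                    # empirical 2.5% / 97.5% bounds
pi_lower = point_forecast + lo
pi_upper = point_forecast + hi
print(np.column_stack([pi_lower, point_forecast, pi_upper]))    # lower bound, point, upper bound
```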
We propose a novel workflow for fast forward modeling of well logs in axially symmetric 2D models of the near-wellbore environment. The approach integrates the finite element method with deep residual neural networks to achieve exceptional computational efficiency and accuracy. The workflow is demonstrated through the modeling of wireline electromagnetic propagation resistivity logs, where the measured responses exhibit a highly nonlinear relationship with formation properties. The motivation for this research is the need for advanced modeling algorithms that are fast enough for use in modern quantitative interpretation tools, where thousands of simulations may be required in iterative inversion processes. The proposed algorithm achieves a remarkable enhancement in performance, being up to 3000 times faster than the finite element method alone when utilizing a GPU. While still ensuring high accuracy, this makes it well suited for practical applications where reliable payzone assessment is needed in complex environmental scenarios. Furthermore, the algorithm's efficiency positions it as a promising tool for stochastic Bayesian inversion, facilitating reliable uncertainty quantification in subsurface property estimation.
To address the issue of instability or even imbalance in the orientation and attitude control of quadrotor unmanned aerial vehicles (QUAVs) under random disturbances, this paper proposes a distributed anti-disturbance data-driven event-triggered fusion control method, which achieves efficient fault diagnosis while suppressing random disturbances and mitigating communication conflicts within the QUAV swarm. First, the impact of random disturbances on the swarm is analyzed, and a model for the orientation and attitude control of QUAVs under stochastic perturbations is established, with the disturbance gain threshold determined. Second, a fault diagnosis system based on a high-gain observer is designed, constructing a fault gain criterion by integrating orientation and attitude information from the QUAVs. Subsequently, a model-free dynamic linearization-based data modeling (MFDLDM) framework is developed using model-free adaptive control, which efficiently fits the nonlinear control model of the QUAV swarm while reducing temporal constraints on the control data. On this basis, a distributed data-driven event-triggered controller is constructed based on a staggered communication mechanism; it consists of an equivalent QUAV controller and an event-triggered controller and reduces communication conflicts while suppressing the influence of random disturbances. Finally, by incorporating random disturbances into the controller, comparative experiments and physical validations are conducted on QUAV platforms, fully demonstrating the strong adaptability and robustness of the proposed distributed event-triggered fault-tolerant control system.
In this paper, we consider the maximal positive definite solution of a nonlinear matrix equation. Using the idea of Algorithm 2.1 in ZHANG (2013), a new inversion-free method with a stepsize parameter is proposed to obtain the maximal positive definite solution of the nonlinear matrix equation X + A^(*)X^(-α)A = Q for the case 0 < α ≤ 1. Based on this method, a new iterative algorithm is developed, and its convergence proof is given. Finally, two numerical examples are provided to show the effectiveness of the proposed method.
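To make the equation concrete, the sketch below runs the simplest fixed-point scheme X_(k+1) = Q - A^(*)X_k^(-α)A starting from X_0 = Q. This only illustrates the equation itself; it is not the paper's inversion-free, stepsize-parameterized algorithm, and the test matrices and tolerances are arbitrary.

```python
# Hedged sketch: basic fixed-point iteration for X + A* X^{-alpha} A = Q,
# not the inversion-free stepsize method of the paper; inputs are arbitrary.
import numpy as np

def spd_power(X, p):
    """Matrix power of a symmetric positive definite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return (V * w**p) @ V.T

def fixed_point_solution(A, Q, alpha=0.8, tol=1e-12, max_iter=500):
    X = Q.copy()                                           # start from X_0 = Q
    for _ in range(max_iter):
        X_new = Q - A.conj().T @ spd_power(X, -alpha) @ A  # X_{k+1} = Q - A* X_k^{-alpha} A
        if np.linalg.norm(X_new - X, "fro") < tol:
            return X_new
        X = X_new
    return X

A = np.array([[0.2, 0.1], [0.0, 0.3]])
Q = np.eye(2)
X = fixed_point_solution(A, Q, alpha=0.8)
# Residual of the equation should be near zero at the computed solution.
print(np.linalg.norm(X + A.conj().T @ spd_power(X, -0.8) @ A - Q))
```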
In this paper, a novel method for investigating the particle-crushing behavior of breeding particles in a fusion blanket is proposed. Fractal theory and the Weibull distribution are combined to establish a theoretical model, and its validity is verified using a simple impact test. A crushable discrete element method (DEM) framework is built based on this theoretical model. A tensile strength that accounts for fractal theory, the size effect, and Weibull variation is assigned to each generated particle. The assigned strength is then used for crush detection by comparing it with the particle's maximum tensile stress. Mass conservation is ensured by inserting a series of sub-particles whose total mass equals the mass lost. Based on the crushable DEM framework, a numerical simulation of the crushing behavior of a pebble bed with hollow cylindrical geometry under a uniaxial compression test was performed. The results show that particles withstand the external load by contact and sliding at the beginning of the compression process, and they confirm that crushing can be considered an important mechanism for resisting the increasing external load. A relatively regular particle arrangement aids in resisting the load and reduces the occurrence of particle crushing. However, there is a limit to this enhancement of resistance; when the strain increases beyond this limit, the distribution of crushing positions tends to be isotropic over the entire pebble bed. The theoretical model and crushable DEM framework provide a new method for exploring the pebble bed in a fusion reactor while accounting for particle crushing.
Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model to maximize the modularity, and the corresponding key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is set by adopting a relative deviation normalization scheme to reduce mutual interference between the reward and penalty in the reward function. A soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q network method is applied to solve the partitioning MDP model and generate partitioning schemes. Two experience replay buffers are employed to speed up the training process of the method. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method can generate a high-modularity partitioning result that meets all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Moreover, simulation results demonstrate that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
The application of nitrogen fertilizers in agricultural fields can lead to the release of nitrogen-containing gases (NCGs), such as NO_(x), NH_(3), and N_(2)O, which can significantly impact the regional atmospheric environment and contribute to global climate change. However, considerable research gaps remain in the accurate measurement of NCG emissions from agricultural fields, hindering the development of effective emission reduction strategies. We improved an open-top dynamic chambers (OTDCs) system and evaluated its performance by comparing the measured and given fluxes of the NCGs. The results showed that the measured fluxes of NO, N_(2)O, and NH_(3) were 1%, 2%, and 7% lower than the given fluxes, respectively. For the determination of NH_(3) concentration, we employed a stripping coil-ion chromatograph (SC-IC) analytical technique, which demonstrated an absorption efficiency for atmospheric NH_(3) exceeding 96.1% across sampling durations of 6 to 60 min. In the summer maize season, we used the OTDCs system to measure the exchange fluxes of NO, NH_(3), and N_(2)O from the soil in the North China Plain. Substantial emissions of NO, NH_(3), and N_(2)O were recorded following fertilization, with peaks of 107, 309, and 1239 ng N/(m^(2)·s), respectively. Notably, significant NCG emissions were observed following sustained heavy rainfall one month after fertilization, with the NH_(3) peak being 4.5 times higher than that observed immediately after fertilization. Our results demonstrate that the OTDCs system accurately reflects the emission characteristics of soil NCGs and meets the requirements for long-term, continuous flux observation.
Marine thin plates are susceptible to welding deformation owing to their low structural stiffness. Therefore, efficient and accurate prediction of welding deformation is essential for improving welding quality. The traditional thermal elastic-plastic finite element method (TEP-FEM) can accurately predict welding deformation, but its efficiency is low because of the complex nonlinear transient computation, making it difficult to meet the needs of rapid engineering evaluation. To address this challenge, this study proposes an efficient prediction method for welding deformation in marine thin-plate butt welds. The method is based on a coupled temperature gradient-thermal strain method (TG-TSM) that integrates inherent strain theory with a shell-element finite element model. It first extracts the distribution pattern and characteristic value of the welding-induced inherent strain through TEP-FEM analysis. This strain is then converted into an equivalent thermal load applied to the shell-element model for rapid computation. The proposed method, in particular the gradual temperature gradient-thermal strain method (GTG-TSM), achieved improved computational efficiency with consistent precision, while requiring much less computation time than the traditional TEP-FEM. This study thus lays the foundation for the future prediction of welding deformation in more complex marine thin plates.
At present, there is a lack of unified standard methods for determining antimony content in groundwater in China. The precision and trueness of the related detection techniques have not yet been systematically and quantitatively evaluated, which limits the effective implementation of environmental monitoring. In response to this key technical gap, this study aimed to establish a standardized method for determining antimony in groundwater using Hydride Generation-Atomic Fluorescence Spectrometry (HG-AFS). Ten laboratories participated in inter-laboratory collaborative tests, and statistical analysis of the test data was carried out in strict accordance with the technical specifications of GB/T 6379.2—2004 and GB/T 6379.4—2006. The consistency of the data and the presence of outliers were examined using Mandel's h and k statistics, the Grubbs test, and the Cochran test, and outliers were removed to optimize the data, thereby significantly improving reliability and accuracy. Based on the optimized data, parameters such as the repeatability limit (r), reproducibility limit (R), and method bias (δ) were determined, and the trueness of the method was statistically evaluated. Precision-function relationships were also established, and all results met the requirements. The results show that the lower the antimony content, the lower the repeatability limit (r) and reproducibility limit (R), indicating that the measurement error mainly originates from the detection limit of the method and the instrument sensitivity. Therefore, improving instrument sensitivity and reducing the detection limit are the keys to controlling analytical error and improving precision. This study provides reliable data support and a solid technical foundation for the establishment and evaluation of standardized methods for determining antimony content in groundwater.
Platelets, one of the most significant materials in treating leukemia, have a limited shelf life of approximately five days. Because platelets cannot be manufactured and can only be centrifuged directly from whole or donated blood, an accurate ordering policy is necessary for the efficient use of this limited blood resource. Motivated by this, the present study examines an ordering policy for platelets that minimizes the expected shortage and overage. Rather than using the two-step model-driven method, which first fits a demand distribution and then optimizes the order quantity, we solve the problem with an integrated data-driven method. Specifically, the data-driven method works directly with demand data and does not rely on an assumed demand distribution. We then derive theoretical insights into the optimal solutions. Through a comparative analysis, we find that the data-driven method has a mean anchoring effect, and the amounts of shortage and overage reduced by this method are greater than those reduced by the model-driven method. Finally, we present an extended model with a service level requirement and conclude that the order decided by the data-driven method can precisely satisfy the service level requirement, whereas the order decided by the model-driven method may be either higher or lower than the service level requirement and can lead to a higher cost.
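The contrast between the two approaches can be sketched in a few lines under a standard newsvendor-style reading of the problem: the data-driven order is an empirical demand quantile, while the model-driven order comes from a fitted distribution. The demand history and cost values below are made up, and the actual cost structure in the study may differ.

```python
# Hedged sketch: data-driven (distribution-free) versus model-driven ordering
# for a perishable product; demands and costs are placeholder values.
import numpy as np
from scipy import stats

demand = np.array([42, 55, 48, 61, 38, 52, 57, 44, 50, 63, 46, 59])  # past daily demand (units)
shortage_cost, overage_cost = 5.0, 1.0
critical_ratio = shortage_cost / (shortage_cost + overage_cost)

# Data-driven: work directly with the demand data (empirical quantile).
q_data_driven = np.quantile(demand, critical_ratio)

# Model-driven (two-step): fit a normal distribution, then optimize.
mu, sigma = demand.mean(), demand.std(ddof=1)
q_model_driven = stats.norm.ppf(critical_ratio, loc=mu, scale=sigma)

print("data-driven order quantity:", q_data_driven)
print("model-driven order quantity (normal fit):", q_model_driven)
```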
Degradation prediction for a proton exchange membrane fuel cell (PEMFC) stack is of great significance for improving the remaining useful life. In this study, a PEMFC system including a stack of 300 cells and its subsystems was tested under semi-steady operation for about 931 h. Two different models were then established, based respectively on a semi-empirical method and a data-driven method (DDM), to investigate the degradation of stack performance. It is found that the root mean square error (RMSE) of the semi-empirical model in predicting the stack voltage is around 1.0 V, but the predicted voltage has no local dynamic characteristics and can only reflect the overall degradation trend of stack performance. The RMSE of the short-term voltage degradation predicted by the DDM can be less than 1.0 V, and the predicted voltage captures local variations accurately. However, for long-term prediction, the error accumulates with the iterations, the deviation of the predicted voltage begins to fluctuate, and the RMSE of the long-term predictions can increase to 1.63 V. Based on these characteristics of the two models, a hybrid prediction model is further developed, in which the predictions of the semi-empirical model are used to modify the input of the data-driven model; this effectively suppresses the oscillation of the data-driven model's predictions during long-term degradation. The hybrid model shows a good error distribution (RMSE = 0.8144 V, R^(2) = 0.8258) and captures local dynamic characteristics, so it can be used to predict long-term stack performance degradation.
Funding (iced-tailplane aerodynamics study): Department of Industrial Engineering, University of Naples Federico II, Italy.
Funding (cable-stayed bridge temperature study): Project (2017G006-N) supported by the Science and Technology Research and Development Program of China Railway Corporation.
Funding (hydraulic fracturing production study): Key Research and Development Program of Shaanxi, China (No. 2024GX-YBXM-503) and the National Natural Science Foundation of China (No. 51974254).
Funding (courtyard design workflow study): Postgraduate Research & Practice Innovation Program of Jiangsu Province (KYCX19_0090).
Funding (well logging formation evaluation study): National Key Research and Development Program (2019YFA0708301), National Natural Science Foundation of China (51974337), Strategic Cooperation Projects of CNPC and CUPB (ZLZX2020-03), Science and Technology Innovation Fund of CNPC (2021DQ02-0403), and Open Fund of the Petroleum Exploration and Development Research Institute of CNPC (2022-KFKT-09).
Funding (data-driven fault estimation study): Shandong Provincial Taishan Scholar Program (Grant No. tsqn202312133), Shandong Provincial Natural Science Foundation (Grant Nos. ZR2022YQ61 and ZR2023ZD32), and National Natural Science Foundation of China (Grant Nos. 61772551 and 62111530052).
Funding (landslide displacement interval prediction study): National Natural Science Foundation of China (Nos. 42277149, 41502299, 41372306), Research Planning of the Sichuan Education Department, China (No. 16ZB0105), State Key Laboratory of Geohazard Prevention and Geoenvironment Protection Independent Research Project (Nos. SKLGP2016Z007, SKLGP2018Z017, SKLGP2020Z009), Chengdu University of Technology Young and Middle-Aged Backbone Program (No. KYGG201720), Sichuan Provincial Science and Technology Department Program (No. 19YYJC2087), and the China Scholarship Council.
Funding (well-log fast forward modeling study): Russian federal research project No. FWZZ-2022-0026, "Innovative aspects of electrodynamics in problems of exploration and oilfield geophysics".
Funding (QUAV swarm control study): National Natural Science Foundation of China (Grant/Award Number: 62003267), Key Research and Development Program of Shaanxi Province (Grant/Award Number: 2023-GHZD-33), and Open Project of the State Key Laboratory of Intelligent Game (Grant/Award Number: ZBKF-23-05).
Funding (nonlinear matrix equation study): Natural Science Foundation of Guangxi (2023GXNSFAA026246), the Central Government's Guide to Local Science and Technology Development Fund (GuikeZY23055044), and the National Natural Science Foundation of China (62363003).
Funding (particle-crushing DEM study): Anhui Provincial Natural Science Foundation (2408085QA030), Natural Science Research Project of the Anhui Educational Committee, China (2022AH050825), Medical Special Cultivation Project of Anhui University of Science and Technology (YZ2023H2C008), Excellent Research and Innovation Team of Anhui Province, China (2022AH010052), Scientific Research Foundation for High-level Talents of Anhui University of Science and Technology, China (2021yjrc51), and Collaborative Innovation Program of Hefei Science Center, CAS, China (2019HSC-CIP006).
Funding: Funded by the Beijing Engineering Research Center of Electric Rail Transportation.
Abstract: Effective partitioning is crucial for enabling parallel restoration of power systems after blackouts. This paper proposes a novel partitioning method based on deep reinforcement learning. First, the partitioning decision process is formulated as a Markov decision process (MDP) model that maximizes modularity, and the key partitioning constraints on parallel restoration are considered. Second, based on the partitioning objective and constraints, the reward function of the partitioning MDP model is designed with a relative-deviation normalization scheme to reduce mutual interference between its reward and penalty terms, and a soft bonus scaling mechanism is introduced to mitigate overestimation caused by abrupt jumps in the reward. Then, the deep Q-network method is applied to solve the partitioning MDP model and generate partitioning schemes, with two experience replay buffers employed to speed up training. Finally, case studies on the IEEE 39-bus test system demonstrate that the proposed method generates a high-modularity partitioning result that satisfies all key partitioning constraints, thereby improving the parallelism and reliability of the restoration process. Simulation results further show that an appropriate discount factor is crucial for ensuring both the convergence speed and the stability of the partitioning training.
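To illustrate the partitioning objective, the sketch below evaluates the modularity of a candidate bus-to-subsystem assignment with networkx, which could serve as the core reward signal of such an MDP. The constraint penalties, relative-deviation normalization, and soft bonus scaling described above are not reproduced, and the function and variable names are assumptions.

```python
import networkx as nx
from networkx.algorithms.community import modularity

def partition_reward(graph: nx.Graph, assignment: dict) -> float:
    """Modularity of a candidate partition, usable as an RL reward (sketch).

    graph      : power network with buses as nodes and lines as weighted edges
    assignment : mapping node -> subsystem index produced by the agent
    """
    groups = {}
    for node, part in assignment.items():
        groups.setdefault(part, set()).add(node)
    # Higher modularity means denser intra-subsystem and sparser inter-subsystem ties.
    return modularity(graph, groups.values(), weight="weight")
```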
Funding: Supported by the National Key Research and Development Program (No. 2022YFC3701103) and the National Natural Science Foundation of China (Nos. 42130714 and 41931287).
Abstract: The application of nitrogen fertilizers in agricultural fields can lead to the release of nitrogen-containing gases (NCGs), such as NO_(x), NH_(3) and N_(2)O, which can significantly affect the regional atmospheric environment and contribute to global climate change. However, considerable research gaps remain in the accurate measurement of NCG emissions from agricultural fields, hindering the development of effective emission reduction strategies. We improved an open-top dynamic chambers (OTDCs) system and evaluated its performance by comparing measured NCG fluxes with given fluxes. The results showed that the measured fluxes of NO, N_(2)O and NH_(3) were 1%, 2% and 7% lower than the given fluxes, respectively. For the determination of NH_(3) concentration, we employed a stripping coil-ion chromatograph (SC-IC) analytical technique, which demonstrated an absorption efficiency for atmospheric NH_(3) exceeding 96.1% across sampling durations of 6 to 60 min. During the summer maize season, we used the OTDCs system to measure the exchange fluxes of NO, NH_(3), and N_(2)O from the soil in the North China Plain. Substantial emissions of NO, NH_(3) and N_(2)O were recorded following fertilization, with peaks of 107, 309, and 1239 ng N/(m^(2)·s), respectively. Notably, significant NCG emissions were also observed following sustained heavy rainfall one month after fertilization, with the NH_(3) peak being 4.5 times higher than that observed immediately after fertilization. Our results demonstrate that the OTDCs system accurately reflects the emission characteristics of soil NCGs and meets the requirements for long-term, continuous flux observation.
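As a hedged illustration of how fluxes from flow-through chambers of this kind are typically obtained, the function below applies the standard steady-state mass balance for a dynamic chamber. The variable names are assumptions, and the corrections presumably applied by the improved OTDCs system are omitted.

```python
def chamber_flux(c_out, c_in, flow_rate, area):
    """Steady-state dynamic-chamber flux from a simple mass balance (sketch).

    c_out, c_in : gas concentrations at the chamber outlet and inlet (ng N/m^3)
    flow_rate   : purge air flow through the chamber (m^3/s)
    area        : soil surface area enclosed by the chamber (m^2)
    Returns the flux in ng N/(m^2*s); positive values indicate emission.
    """
    return (c_out - c_in) * flow_rate / area
```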
Funding: Supported by the National Natural Science Foundation of China under Grant No. 51975138, the High-Tech Ship Scientific Research Project from the Ministry of Industry and Information Technology under Grant No. CJ05N20, and the National Defense Basic Research Project under Grant No. JCKY2023604C006.
Abstract: Marine thin plates are susceptible to welding deformation owing to their low structural stiffness, so efficient and accurate prediction of welding deformation is essential for improving welding quality. The traditional thermal elastic-plastic finite element method (TEP-FEM) can accurately predict welding deformation, but its efficiency is low because of the complex nonlinear transient computation, making it difficult to meet the needs of rapid engineering evaluation. To address this challenge, this study proposes an efficient prediction method for welding deformation in marine thin-plate butt welds. The method is based on a coupled temperature gradient-thermal strain method (TG-TSM) that integrates inherent strain theory with a shell-element finite element model. It first extracts the distribution pattern and characteristic values of the welding-induced inherent strain through TEP-FEM analysis; this strain is then converted into an equivalent thermal load applied to the shell-element model for rapid computation. The proposed method, particularly the gradual temperature gradient-thermal strain method (GTG-TSM), achieves improved computational efficiency with consistent precision and requires far less computation time than the traditional TEP-FEM. This study thus lays the foundation for predicting welding deformation in more complex marine thin-plate structures.
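A minimal sketch of the inherent-strain-to-thermal-load conversion is given below: in thermal-strain-based elastic analyses, a prescribed inherent strain can be imposed on a shell element as a fictitious temperature rise dT satisfying alpha*dT = eps*. The function and parameter names are placeholders and do not reflect the TG-TSM or GTG-TSM implementation details.

```python
def equivalent_temperature_load(inherent_strain, thermal_expansion_coeff):
    """Equivalent fictitious temperature rise for a prescribed inherent strain.

    inherent_strain         : characteristic inherent strain value (dimensionless)
    thermal_expansion_coeff : thermal expansion coefficient alpha (1/K)
    Returns dT (K) such that alpha * dT reproduces the inherent strain when
    applied as a thermal load to the shell element (illustrative only).
    """
    return inherent_strain / thermal_expansion_coeff

# Example with assumed values: an inherent strain of -0.002 and alpha = 1.2e-5 1/K
# would correspond to a fictitious temperature change of about -167 K.
```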
Funding: Supported by the National Natural Science Foundation of China (Project No. 42307555).
Abstract: At present, China lacks unified standard methods for the determination of antimony content in groundwater, and the precision and trueness of the related detection techniques have not been systematically and quantitatively evaluated, which limits the effective implementation of environmental monitoring. To address this technical gap, this study aimed to establish a standardized method for determining antimony in groundwater using Hydride Generation-Atomic Fluorescence Spectrometry (HG-AFS). Ten laboratories participated in inter-laboratory collaborative tests, and the statistical analysis of the test data was carried out in strict accordance with the technical specifications of GB/T 6379.2-2004 and GB/T 6379.4-2006. Data consistency and potential outliers were examined using Mandel's h and k statistics, the Grubbs test, and the Cochran test, and outliers were removed, significantly improving reliability and accuracy. Based on the screened data, parameters such as the repeatability limit (r), the reproducibility limit (R), and the method bias (δ) were determined, and the trueness of the method was statistically evaluated. Precision-function relationships were also established, and all results met the requirements. The results show that the lower the antimony content, the lower the repeatability limit (r) and reproducibility limit (R), indicating that the measurement error mainly originates from the method's detection limit and instrument sensitivity. Therefore, improving instrument sensitivity and reducing the detection limit are key to controlling analytical error and improving precision. This study provides reliable data support and a solid technical foundation for the establishment and evaluation of standardized methods for determining antimony content in groundwater.
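For orientation, the sketch below computes repeatability and reproducibility limits from collaborative-test replicates using the basic ISO 5725 / GB/T 6379 relations r = 2.8*s_r and R = 2.8*s_R for a (nearly) balanced design at a single concentration level. It assumes the outlier screening (Mandel's h and k, Grubbs, Cochran) has already been performed and is not the study's full statistical workflow.

```python
import numpy as np

def precision_limits(lab_results):
    """Repeatability limit r and reproducibility limit R from replicate data (sketch).

    lab_results : list of per-laboratory arrays of replicate measurements
                  at one concentration level, after outlier removal
    """
    lab_results = [np.asarray(x, dtype=float) for x in lab_results]
    n = np.mean([len(x) for x in lab_results])              # average replicates per lab
    s_r2 = np.mean([x.var(ddof=1) for x in lab_results])    # repeatability variance
    lab_means = np.array([x.mean() for x in lab_results])
    s_L2 = max(lab_means.var(ddof=1) - s_r2 / n, 0.0)       # between-laboratory variance
    s_R2 = s_r2 + s_L2                                       # reproducibility variance
    return 2.8 * np.sqrt(s_r2), 2.8 * np.sqrt(s_R2)
```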
Abstract: Platelets, one of the most important materials in treating leukemia, have a limited shelf life of approximately five days. Because platelets cannot be manufactured and can only be centrifuged from whole or donated blood, an accurate ordering policy is necessary for the efficient use of this limited blood resource. Motivated by this, the present study examines an ordering policy for platelets that minimizes the expected shortage and overage. Rather than using the two-step model-driven method, which first fits a demand distribution and then optimizes the order quantity, we solve the problem with an integrated data-driven method that works directly with demand data and does not rely on an assumed demand distribution. We derive theoretical insights into the optimal solutions. Through a comparative analysis, we find that the data-driven method has a mean anchoring effect, and the amounts of shortage and overage it reduces are greater than those reduced by the model-driven method. Finally, we present an extended model with a service level requirement and conclude that the order decided by the data-driven method can precisely satisfy the service level requirement, whereas the order decided by the model-driven method may be either higher or lower than the requirement and can lead to a higher cost.
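The integrated data-driven idea can be illustrated with the textbook sample-average-approximation newsvendor rule, where the order quantity is the empirical demand quantile at the critical ratio. The cost parameter names below are assumptions, and this sketch does not include the paper's mean-anchoring analysis or service-level extension.

```python
import numpy as np

def data_driven_order(demand_samples, shortage_cost, overage_cost):
    """Data-driven (sample-average-approximation) newsvendor order quantity (sketch).

    demand_samples : historical daily platelet demand observations
    shortage_cost  : unit cost of unmet demand (cu)
    overage_cost   : unit cost of outdated, unused platelets (co)
    The order is the empirical quantile of demand at cu / (cu + co).
    """
    critical_ratio = shortage_cost / (shortage_cost + overage_cost)
    return np.quantile(np.asarray(demand_samples, dtype=float), critical_ratio)

# Example: data_driven_order([8, 12, 10, 15, 9, 11], shortage_cost=5, overage_cost=2)
# orders roughly the 0.71 empirical quantile of past demand.
```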
Funding: Supported by the National Key Research and Development Program of China (Grant No. SQ2021YFE011145) and the Science and Technology Development Program of Jilin Province (Grant No. 20200501010GX).
Abstract: Degradation prediction of a proton exchange membrane fuel cell (PEMFC) stack is of great significance for improving its remaining useful life. In this study, a PEMFC system comprising a 300-cell stack and its subsystems was tested under semi-steady operation for about 931 h. Two models are then established, based respectively on a semi-empirical method and a data-driven method (DDM), to investigate the degradation of stack performance. It is found that the root mean square error (RMSE) of the semi-empirical model in predicting the stack voltage is around 1.0 V, but the predicted voltage has no local dynamic characteristics and can only reflect the overall degradation trend of stack performance. The RMSE of short-term voltage degradation predicted by the DDM can be below 1.0 V, and the predicted voltage captures the local variation accurately. For long-term prediction, however, the error accumulates with the iterations, the deviation of the predicted voltage begins to fluctuate, and the RMSE can increase to 1.63 V. Based on these characteristics of the two models, a hybrid prediction model is further developed: the predictions of the semi-empirical model are used to modify the input of the data-driven model, which effectively suppresses the oscillation of the data-driven predictions over long-term degradation. The hybrid model shows a good error distribution (RMSE = 0.8144 V, R^(2) = 0.8258) and captures local dynamic characteristics, so it can be used to predict long-term stack performance degradation.
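For completeness, the sketch below shows the standard RMSE and R^(2) definitions used to score voltage predictions such as those quoted above; it is a generic metric helper, not the semi-empirical, data-driven, or hybrid degradation model itself.

```python
import numpy as np

def rmse_r2(v_measured, v_predicted):
    """Root mean square error and coefficient of determination for voltage traces.

    v_measured, v_predicted : arrays of measured and predicted stack voltages (V)
    The figures cited in the abstract come from the paper's own data; this
    helper only reproduces the standard metric definitions.
    """
    v_measured = np.asarray(v_measured, dtype=float)
    v_predicted = np.asarray(v_predicted, dtype=float)
    residual = v_measured - v_predicted
    rmse = np.sqrt(np.mean(residual**2))
    ss_res = np.sum(residual**2)
    ss_tot = np.sum((v_measured - v_measured.mean())**2)
    return rmse, 1.0 - ss_res / ss_tot
```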