Parameter extraction of photovoltaic (PV) models is crucial for the planning, optimization, and control of PV systems. Although some methods using meta-heuristic algorithms have been proposed to determine these parameters, the robustness of the solutions they obtain faces great challenges as the complexity of the PV model increases. Unstable results will affect the reliable operation and maintenance strategies of PV systems. In response to this challenge, an improved rime optimization algorithm with enhanced exploration and exploitation, termed TERIME, is proposed for robust and accurate parameter identification of various PV models. Specifically, the differential evolution mutation operator is integrated into the exploration phase to enhance population diversity. Meanwhile, a new exploitation strategy incorporating randomization and neighborhood strategies simultaneously is developed to balance exploitation width and depth. The TERIME algorithm is applied to estimate the optimal parameters of the single diode model, double diode model, and triple diode model combined with the Lambert-W function for three PV cell and module types, including RTC France, Photo Watt-PWP 201, and S75. According to a statistical analysis over 100 runs, the proposed algorithm achieves more accurate and robust parameter estimates than other techniques for various PV models under varying environmental conditions. All of our source codes are publicly available at https://github.com/dirge1/TERIME.
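The abstract states that a differential-evolution mutation operator is integrated into the exploration phase to enhance population diversity. Below is a minimal sketch of the classic DE/rand/1 mutation as it is commonly embedded in such hybrids; the scaling factor F, the bounds, and the population shape are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def de_rand_1_mutation(population: np.ndarray, F: float = 0.5,
                       lower: np.ndarray = None, upper: np.ndarray = None) -> np.ndarray:
    """DE/rand/1 mutation: v_i = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct from i."""
    n, dim = population.shape
    mutants = np.empty_like(population)
    for i in range(n):
        # pick three distinct individuals, none equal to i
        candidates = np.delete(np.arange(n), i)
        r1, r2, r3 = np.random.choice(candidates, size=3, replace=False)
        mutants[i] = population[r1] + F * (population[r2] - population[r3])
    if lower is not None and upper is not None:
        mutants = np.clip(mutants, lower, upper)  # keep parameters inside physical bounds
    return mutants

# Example: a population of 30 candidate single-diode parameter vectors (5 parameters each)
pop = np.random.uniform(0.0, 1.0, size=(30, 5))
print(de_rand_1_mutation(pop, F=0.5, lower=np.zeros(5), upper=np.ones(5)).shape)
```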
To address the challenge of identifying the primary causes of energy consumption fluctuations and accurately assessing the influence of various factors in the converter unit of an iron and steel plant, the focus is placed on the critical components of material and heat balance. Through a thorough analysis of the interactions between the various components and energy consumption, six pivotal factors have been identified: raw material composition, steel type, steel temperature, slag temperature, recycling practices, and operational parameters. Utilizing a framework based on an equivalent energy consumption model, an integrated intelligent diagnostic model has been developed that encapsulates these factors, providing a comprehensive assessment tool for converter energy consumption. Employing the K-means clustering algorithm, historical operational data from the converter have been analyzed to determine baseline values for essential variables such as energy consumption and recovery rates. Building upon this data-driven foundation, an online system for the intelligent diagnosis of converter energy consumption has been developed and implemented, enhancing the precision and efficiency of energy management. Upon implementation with energy consumption data from a steel plant in 2023, the diagnostic analysis performed by the system exposed significant variations in energy usage across different converter units. The analysis revealed that the most significant factor influencing the variation in energy consumption for both furnaces was the steel grade, with contributions of -0.550 and 0.379.
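A minimal sketch of how per-cluster baseline values can be derived from historical converter records with K-means, as described above; the feature names, the number of clusters, and the synthetic data are illustrative assumptions rather than the plant's actual variables.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical historical records: [energy consumption, gas recovery rate, steam recovery rate]
rng = np.random.default_rng(0)
history = rng.normal(loc=[55.0, 90.0, 60.0], scale=[5.0, 4.0, 6.0], size=(500, 3))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(history)

# Treat each cluster centre as the baseline operating point for the heats assigned to it
for k, centre in enumerate(kmeans.cluster_centers_):
    print(f"cluster {k}: baseline energy={centre[0]:.1f}, "
          f"gas recovery={centre[1]:.1f}, steam recovery={centre[2]:.1f}")
```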
With the rapid adoption of artificial intelligence (AI) in domains such as power, transportation, and finance, the number of machine learning and deep learning models has grown exponentially. However, challenges such as delayed retraining, inconsistent version management, insufficient drift monitoring, and limited data security still hinder efficient and reliable model operations. To address these issues, this paper proposes the Intelligent Model Lifecycle Management Algorithm (IMLMA). The algorithm employs a dual-trigger mechanism based on both data volume thresholds and time intervals to automate retraining, and applies Bayesian optimization for adaptive hyperparameter tuning to improve performance. A multi-metric replacement strategy incorporating MSE, MAE, and R² ensures that new models replace existing ones only when performance improvements are guaranteed. A versioning and traceability database supports comparison and visualization, while real-time monitoring with stability analysis enables early warnings of latency and drift. Finally, hash-based integrity checks secure both model files and datasets. Experimental validation in a power metering operation scenario demonstrates that IMLMA reduces model update delays, enhances predictive accuracy and stability, and maintains low latency under high concurrency. This work provides a practical, reusable, and scalable solution for intelligent model lifecycle management, with broad applicability to complex systems such as smart grids.
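A minimal sketch of two mechanisms named in the abstract: the dual-trigger retraining condition (data-volume threshold or elapsed time) and a hash-based integrity check on model files. The thresholds, file names, and the SHA-256 choice are illustrative assumptions, not the paper's specification.

```python
import hashlib
import time
from pathlib import Path

def should_retrain(new_samples: int, last_trained: float,
                   sample_threshold: int = 10_000, max_age_s: float = 7 * 24 * 3600) -> bool:
    """Dual trigger: retrain when enough new data has arrived OR the model is too old."""
    return new_samples >= sample_threshold or (time.time() - last_trained) >= max_age_s

def file_digest(path: Path) -> str:
    """SHA-256 digest used to detect tampering with model files or datasets."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Example: check the trigger and verify a (hypothetical) model artifact against its stored digest
print(should_retrain(new_samples=12_345, last_trained=time.time() - 3600))
artifact = Path("model_v3.bin")
if artifact.exists():
    print("integrity ok:", file_digest(artifact) == "expected-digest-from-registry")
```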
The objective of this study is to develop an advanced approach to variogram modelling by integrating genetic algorithms (GA) with machine learning-based linear regression, aiming to improve the accuracy and efficiency of geostatistical analysis, particularly in mineral exploration. The study combines GA and machine learning to optimise variogram parameters, including range, sill, and nugget, by minimising the root mean square error (RMSE) and maximising the coefficient of determination (R²). The experimental variograms were computed and modelled using theoretical models, followed by optimisation via evolutionary algorithms. The method was applied to gravity data from the Ngoura-Batouri-Kette mining district in Eastern Cameroon, covering 141 data points. Sequential Gaussian Simulations (SGS) were employed for predictive mapping to validate simulated results against true values. Key findings show variograms with ranges between 24.71 km and 49.77 km, optimised RMSE and R² values of 11.21 mGal² and 0.969, respectively, after 42 generations of GA optimisation. Predictive mapping using SGS demonstrated that simulated values closely matched true values, with the simulated mean at 21.75 mGal compared to the true mean of 25.16 mGal, and variances of 465.70 mGal² and 555.28 mGal², respectively. The results confirmed spatial variability and anisotropies in the N170-N210 directions, consistent with prior studies. This work presents a novel integration of GA and machine learning for variogram modelling, offering an automated, efficient approach to parameter estimation. The methodology significantly enhances predictive geostatistical models, contributing to the advancement of mineral exploration and improving the precision and speed of decision-making in the petroleum and mining industries.
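A minimal sketch of the fitness evaluation implied by the abstract: fit a theoretical variogram (a spherical model with nugget, sill, and range is assumed here) to an experimental variogram and score the candidate by RMSE and R². The lag values and experimental semivariances are synthetic placeholders, not the study's gravity data.

```python
import numpy as np

def spherical_variogram(h: np.ndarray, nugget: float, sill: float, rng: float) -> np.ndarray:
    """Spherical model: gamma(h) = nugget + (sill - nugget)*(1.5*h/a - 0.5*(h/a)**3) for h <= a."""
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h <= rng, gamma, sill)

def fitness(params, lags, gamma_exp):
    """Score a candidate (nugget, sill, range): lower RMSE and higher R^2 are better."""
    nugget, sill, rng = params
    resid = gamma_exp - spherical_variogram(lags, nugget, sill, rng)
    rmse = float(np.sqrt(np.mean(resid ** 2)))
    r2 = 1.0 - float(np.sum(resid ** 2)) / float(np.sum((gamma_exp - gamma_exp.mean()) ** 2))
    return rmse, r2

# Synthetic experimental variogram just to exercise the functions
lags = np.linspace(1.0, 60.0, 20)
gamma_exp = spherical_variogram(lags, 5.0, 500.0, 35.0) + np.random.normal(0, 10.0, lags.size)
print(fitness((5.0, 500.0, 35.0), lags, gamma_exp))
```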
Vulnerability assessment is a systematic process to identify security gaps in the design and evaluation of physical protection systems. Adversarial path planning is a widely used method for identifying potential vulnerabilities and threats to the security and resilience of critical infrastructures. However, achieving efficient path optimization in complex large-scale three-dimensional (3D) scenes remains a significant challenge for vulnerability assessment. This paper introduces a novel A*-algorithmic framework for 3D security modeling and vulnerability assessment. Within this framework, the 3D facility models were first developed in 3ds Max and then incorporated into Unity for A* heuristic pathfinding. The A* heuristic pathfinding algorithm was implemented with a geometric probability model to refine the detection and distance fields and achieve a rational approximation of the cost to reach the goal. An admissible heuristic is ensured by incorporating the minimum probability of detection (P_D^min) and the diagonal distance to estimate the heuristic function. The 3D A* heuristic search was demonstrated using a hypothetical laboratory facility, where a comparison was also carried out between the A* and Dijkstra algorithms for optimal path identification. Comparative results indicate that the proposed A*-heuristic algorithm effectively identifies the most vulnerable adversarial path with high efficiency. Finally, the paper discusses hidden phenomena and open issues in efficient 3D pathfinding for security applications.
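A minimal sketch of an admissible A* heuristic of the kind the abstract describes, combining 3D diagonal distance with a lower bound on detection probability along the remaining path. The way the two terms are combined, the detection weight, and the grid representation are illustrative assumptions, not the paper's exact cost model.

```python
import math

def diagonal_distance_3d(a, b):
    """3D diagonal distance: axis, planar-diagonal, and cubic-diagonal moves cost 1, sqrt(2), sqrt(3)."""
    dx, dy, dz = (abs(a[i] - b[i]) for i in range(3))
    d1, d2, d3 = sorted((dx, dy, dz))            # d1 <= d2 <= d3
    return (math.sqrt(3) - math.sqrt(2)) * d1 + (math.sqrt(2) - 1.0) * d2 + d3

def heuristic(node, goal, p_d_min: float, w_detect: float = 10.0) -> float:
    """Admissible estimate: every remaining step costs at least 1 + w_detect * p_d_min,
    where p_d_min is the minimum single-cell detection probability anywhere in the facility."""
    return diagonal_distance_3d(node, goal) * (1.0 + w_detect * p_d_min)

# Example: remaining-cost estimate from an interior node to the target asset on a hypothetical grid
print(heuristic((2, 3, 0), (10, 7, 2), p_d_min=0.05))
```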
The distillation process is an important chemical process, and the application of data-driven modelling approaches has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization or monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, in order to separate the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables and remove redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) introduced to improve the weight-update strategy, yielding the new ensemble learning algorithm XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
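A minimal sketch of the first step described above: augment each data segment with its variance and cluster the segments with K-means so that the low-variance cluster can be kept as steady-state data. The window length, number of clusters, and synthetic signal are illustrative assumptions, not the industrial data.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Synthetic process variable: steady operation with an injected perturbation interval
signal = np.concatenate([rng.normal(100.0, 0.2, 600),
                         rng.normal(100.0, 3.0, 200),   # perturbed interval
                         rng.normal(101.0, 0.2, 600)])

window = 50
segments = signal[: len(signal) // window * window].reshape(-1, window)

# Feature per segment: (mean, variance) -- the variance term is what separates perturbed segments
features = np.column_stack([segments.mean(axis=1), segments.var(axis=1)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

steady_cluster = int(np.argmin([features[labels == k, 1].mean() for k in (0, 1)]))
steady_data = segments[labels == steady_cluster].ravel()
print(f"kept {steady_data.size} of {signal.size} samples as steady-state data")
```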
BACKGROUND: Difficulty of colonoscopy insertion (DCI) significantly affects colonoscopy effectiveness and serves as a key quality indicator. Predicting and evaluating DCI risk preoperatively is crucial for optimizing intraoperative strategies. AIM: To evaluate the predictive performance of machine learning (ML) algorithms for DCI by comparing three modeling approaches, identify factors influencing DCI, and develop a preoperative prediction model using ML algorithms to enhance colonoscopy quality and efficiency. METHODS: This cross-sectional study enrolled 712 patients who underwent colonoscopy at a tertiary hospital between June 2020 and May 2021. Demographic data, past medical history, medication use, and psychological status were collected. The endoscopist assessed DCI using the visual analogue scale. After univariate screening, predictive models were developed using multivariable logistic regression, least absolute shrinkage and selection operator (LASSO) regression, and random forest (RF) algorithms. Model performance was evaluated based on discrimination, calibration, and decision curve analysis (DCA), and results were visualized using nomograms. RESULTS: A total of 712 patients (53.8% male; mean age 54.5 ± 12.9 years) were included. Logistic regression analysis identified constipation [odds ratio (OR) = 2.254, 95% confidence interval (CI): 1.289-3.931], abdominal circumference (AC) (77.5-91.9 cm, OR = 1.895, 95% CI: 1.065-3.350; AC ≥ 92 cm, OR = 1.271, 95% CI: 0.730-2.188), and anxiety (OR = 1.071, 95% CI: 1.044-1.100) as predictive factors for DCI, validated by LASSO and RF methods. Model performance revealed training/validation sensitivities of 0.826/0.925, 0.924/0.868, and 1.000/0.981; specificities of 0.602/0.511, 0.510/0.562, and 0.977/0.526; and corresponding areas under the receiver operating characteristic curve (AUCs) of 0.780 (0.737-0.823)/0.726 (0.654-0.799), 0.754 (0.710-0.798)/0.723 (0.656-0.791), and 1.000 (1.000-1.000)/0.754 (0.688-0.820), respectively. DCA indicated optimal net benefit within probability thresholds of 0-0.9 and 0.05-0.37. The RF model demonstrated superior diagnostic accuracy, reflected by perfect training sensitivity (1.000) and the highest validation AUC (0.754), outperforming the other methods in clinical applicability. CONCLUSION: The RF-based model exhibited superior predictive accuracy for DCI compared to multivariable logistic and LASSO regression models. This approach supports individualized preoperative optimization, enhancing colonoscopy quality through targeted risk stratification.
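A minimal sketch of the three-model comparison described in METHODS, using scikit-learn stand-ins (an L1-penalized logistic regression plays the role of LASSO) on synthetic data; the features and sample are placeholders, not the study's cohort or its exact tuning.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical predictors (e.g., constipation, abdominal circumference, anxiety score)
X, y = make_classification(n_samples=712, n_features=10, n_informative=3, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lasso-logistic": LogisticRegression(penalty="l1", solver="liblinear", C=0.5),
    "random-forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, model.predict_proba(X_va)[:, 1])
    print(f"{name}: validation AUC = {auc:.3f}")
```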
Cluster-based models have numerous application scenarios in vehicular ad-hoc networks (VANETs) and can greatly help improve the communication performance of VANETs. However, the frequent movement of vehicles can often lead to changes in the network topology, thereby reducing cluster stability in urban scenarios. To address this issue, we propose a clustering model based on the density peak clustering (DPC) method and the sparrow search algorithm (SSA), named SDPC. First, the model constructs a fitness function based on the parameters obtained from the DPC method and deploys the SSA for iterative optimization to select cluster heads (CHs). Then, the vehicles that have not been selected as CHs are assigned to appropriate clusters by comprehensively considering the distance parameter and the link-reliability parameter. Finally, cluster maintenance strategies are considered to handle changes in the clusters' organizational structure. To verify the performance of the model, we conducted a simulation of a real-world scenario using multiple metrics related to cluster stability. The results show that, compared with APROVE and GAPC, SDPC offers clear performance advantages, indicating that SDPC can effectively ensure VANET cluster stability in urban scenarios.
We propose a robust earthquake clustering method: the Bayesian Gaussian mixture model with nearest-neighbor distance (BGMM-NND) algorithm. Unlike the conventional nearest-neighbor distance method, the BGMM-NND algorithm eliminates the need for hyperparameter tuning or reliance on fixed thresholds, offering enhanced flexibility for clustering across varied seismic scales. By integrating cumulative probability and the BGMM with principal component analysis (PCA), the BGMM-NND algorithm effectively distinguishes between background and triggered earthquakes while retaining the magnitude component and resolving the issue of excessively large spatial cluster domains. We apply the BGMM-NND algorithm to the Sichuan-Yunnan seismic catalog from 1971 to 2024, revealing notable variations in earthquake frequency, triggering characteristics, and recurrence patterns across different fault zones. Distinct clustering and triggering behaviors are identified along different segments of the Longmenshan Fault. Multiple seismic modes, namely the short-distance mode, the medium-distance mode, the repeating-like mode, the uniform background mode, and the Wenchuan mode, are uncovered. The algorithm's flexibility and robust performance in earthquake clustering make it a valuable tool for exploring seismicity characteristics, offering new insights into earthquake clustering and the spatiotemporal patterns of seismic activity.
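A minimal sketch of the clustering core implied by the abstract: reduce nearest-neighbor rescaled time/space/magnitude components with PCA and fit a Bayesian Gaussian mixture, which prunes unneeded components automatically. The synthetic features and the component cap are illustrative assumptions, not the catalog or the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(2)

# Synthetic stand-in for per-event nearest-neighbor features:
# columns ~ log rescaled time, log rescaled distance, magnitude term
background = rng.normal([-2.0, 1.5, 0.0], [0.6, 0.5, 0.4], size=(800, 3))
triggered = rng.normal([-5.0, -1.0, 0.5], [0.7, 0.6, 0.4], size=(200, 3))
features = np.vstack([background, triggered])

# PCA keeps the dominant structure; the Bayesian GMM infers how many modes are actually needed
reduced = PCA(n_components=2).fit_transform(features)
bgmm = BayesianGaussianMixture(n_components=5, weight_concentration_prior=0.1,
                               covariance_type="full", random_state=0).fit(reduced)
labels = bgmm.predict(reduced)
print("effective mixture weights:", np.round(bgmm.weights_, 3))
print("events per inferred mode:", np.bincount(labels))
```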
Energy storage power plants are critical in balancing power supply and demand. However, the scheduling of these plants faces significant challenges, including high network transmission costs and inefficient inter-device energy utilization. To tackle these challenges, this study proposes an optimal scheduling model for energy storage power plants based on edge computing and an improved whale optimization algorithm (IWOA). The proposed model designs an edge computing framework, transferring a large share of data processing and storage tasks to the network edge. This architecture effectively reduces transmission costs by minimizing data travel time. In addition, the model considers demand response strategies and builds an objective function based on the minimization of the sum of electricity purchase cost and operation cost. The IWOA enhances the optimization process by utilizing adaptive weight adjustments and an optimal neighborhood perturbation strategy, preventing the algorithm from converging to suboptimal solutions. Experimental results demonstrate that the proposed scheduling model maximizes the flexibility of the energy storage plant, facilitating efficient charging and discharging. It successfully achieves peak shaving and valley filling for both electrical and heat loads, promoting the effective utilization of renewable energy sources. The edge-computing framework significantly reduces transmission delays between energy devices. Furthermore, the IWOA outperforms traditional algorithms in optimizing the objective function.
Hydrocracking is one of the most important petroleum refining processes; it converts heavy oils into gases, naphtha, diesel, and other products through cracking reactions. Multi-objective optimization algorithms can help refining enterprises determine the optimal operating parameters to maximize product quality while ensuring product yield, or to increase product yield while reducing energy consumption. This paper presents a multi-objective optimization scheme for hydrocracking based on an improved SPEA2-PE algorithm, which combines a path evolution operator and an adaptive step strategy to accelerate the convergence speed and improve the computational accuracy of the algorithm. The reactor model used in this article is simulated based on a twenty-five-lump kinetic model. Through model and test function verification, the proposed optimization scheme exhibits significant advantages in the multi-objective optimization of hydrocracking.
To solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor networks (WSNs) caused by uneven energy consumption among nodes, a hybrid energy-efficient clustering routing algorithm based on the firefly and pigeon-inspired algorithms (FF-PIA) is proposed to optimise the data transmission path. After the optimal number of cluster head (CH) nodes is obtained, the result is taken as the basis for producing the initial population of the FF-PIA algorithm. The Lévy flight mechanism and adaptive inertia weighting are employed in the algorithm iteration to balance the contradiction between global search and local search. Moreover, a Gaussian perturbation strategy is applied to update the optimal solution, ensuring the algorithm can jump out of local optima. For WSN data gathering, a one-dimensional signal reconstruction algorithm model is developed using dilated convolution and residual neural networks (DCRNN). We conducted experiments on the National Oceanic and Atmospheric Administration (NOAA) dataset. The results show that the DCRNN model-driven data reconstruction algorithm improves both the reconstruction accuracy and the reconstruction time performance. Co-simulation of FF-PIA and DCRNN clustering routing reveals that the proposed algorithm can effectively extend the network lifetime and reduce the data transmission delay.
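A minimal sketch of the Lévy flight mechanism mentioned above, using Mantegna's algorithm to draw heavy-tailed step lengths for a candidate-solution update; the stability index beta, the step scale, and the solution dimension are illustrative assumptions, not the paper's parameters.

```python
import math
import numpy as np

def levy_step(dim: int, beta: float = 1.5, rng: np.random.Generator | None = None) -> np.ndarray:
    """Draw a Lévy-distributed step via Mantegna's algorithm: step = u / |v|^(1/beta)."""
    rng = rng or np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
               (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, size=dim)
    v = rng.normal(0.0, 1.0, size=dim)
    return u / np.abs(v) ** (1 / beta)

# Example: perturb a candidate solution vector relative to the current best solution
rng = np.random.default_rng(3)
candidate = rng.uniform(0, 1, size=10)
best = rng.uniform(0, 1, size=10)
candidate = candidate + 0.01 * levy_step(10, rng=rng) * (candidate - best)
print(np.round(candidate, 3))
```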
The word "spatial" fundamentally relates to human existence, evolution, and activity in terrestrial and even celestial spaces. After reviewing the spatial features of many areas, the paper describes the basics of a high-level model and technology called Spatial Grasp for dealing with large distributed systems, which can provide spatial vision, awareness, management, control, and even consciousness. The technology description includes its key Spatial Grasp Language (SGL), the self-evolution of recursive SGL scenarios, and the implementation of an SGL interpreter that converts distributed networked systems into powerful spatial engines. Examples of typical spatial scenarios in SGL include finding the shortest path tree and the shortest path between network nodes, collecting proper information throughout the whole world, eliminating multiple targets with intelligent teams of chasers, and withstanding cyber attacks in distributed networked systems. The paper also compares the Spatial Grasp model with traditional algorithms, confirming the universality of the former for any spatial systems, while the latter remain tools for concrete applications.
To accomplish reliability analyses of the correlation of multiple analytical objectives, an innovative framework of Dimensional Synchronous Modeling (DSM) and correlation analysis is developed based on a stepwise modeling strategy, the cell array operation principle, and Copula theory. Under this framework, we propose a DSM-based Enhanced Kriging (DSMEK) algorithm to synchronously derive the models of multiple objectives, and explore an adaptive Copula function approach to analyze the correlation among multiple objectives and to assess the synthetical reliability level. In the proposed DSMEK and adaptive Copula methods, the Kriging model is treated as the basis function of the DSMEK model, the Multi-Objective Snake Optimizer (MOSO) algorithm is used to search for the optimal values of the hyperparameters of the basis functions, the cell array operation principle is adopted to establish a whole model of multiple objectives, the goodness of fit is utilized to determine the forms of the Copula functions, and the determined Copula functions are employed to perform the reliability analyses of the correlation of multiple analytical objectives. Furthermore, three examples, including multi-objective complex function approximation, aeroengine turbine bladed-disc multi-failure-mode reliability analyses, and aircraft landing gear system brake temperature reliability analyses, are performed to verify the effectiveness of the proposed methods from the viewpoints of mathematics and engineering. The results show that the DSMEK and adaptive Copula approaches hold obvious advantages in terms of modeling features and simulation performance. This work provides a useful way to model multiple analytical objectives and to perform synthetical reliability analyses of complex structures/systems with multi-output responses.
Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues, this study tackles the challenge of precise and effective SOH detection by proposing a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. Firstly, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman neural network model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested using the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
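A minimal sketch of one way to derive a DTV-style health feature as described above: differentiate the cell surface temperature with respect to terminal voltage, smooth the curve, and take the peak value and peak position as features. The synthetic voltage/temperature trace, the smoothing window, and the specific features are illustrative assumptions, not the paper's exact HFs.

```python
import numpy as np

# Synthetic charge segment: terminal voltage rises while surface temperature drifts and bumps
voltage = np.linspace(3.0, 4.2, 400)
temperature = 25.0 + 0.8 * np.exp(-((voltage - 3.7) ** 2) / 0.01) + 0.5 * (voltage - 3.0)

# Differential thermal voltammetry curve: dT/dV as a function of voltage
dtv = np.gradient(temperature, voltage)
dtv_smooth = np.convolve(dtv, np.ones(15) / 15, mode="same")  # simple moving-average smoothing

# Candidate health features: DTV peak height and the voltage at which the peak occurs
peak_idx = int(np.argmax(dtv_smooth))
hf_peak_value = float(dtv_smooth[peak_idx])
hf_peak_voltage = float(voltage[peak_idx])
print(f"DTV peak {hf_peak_value:.3f} degC/V at {hf_peak_voltage:.3f} V")
```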
This study aimed to prepare landslide susceptibility maps for the Pithoragarh district in Uttarakhand, India, using advanced ensemble models that combined Radial Basis Function Networks (RBFN) with three ensemble learning techniques: DAGGING (DG), MULTIBOOST (MB), and ADABOOST (AB). This combination resulted in three distinct ensemble models: DG-RBFN, MB-RBFN, and AB-RBFN. Additionally, a traditional weighted method, Information Value (IV), and a benchmark machine learning (ML) model, the Multilayer Perceptron Neural Network (MLP), were employed for comparison and validation. The models were developed using ten landslide conditioning factors: slope, aspect, elevation, curvature, land cover, geomorphology, overburden depth, lithology, distance to rivers, and distance to roads. These factors were instrumental in predicting the output variable, the probability of landslide occurrence. Statistical analysis of the models' performance indicated that the DG-RBFN model, with an area under the ROC curve (AUC) of 0.931, outperformed the other models. The AB-RBFN model achieved an AUC of 0.929, the MB-RBFN model had an AUC of 0.913, and the MLP model recorded an AUC of 0.926. These results suggest that the advanced ensemble ML model DG-RBFN was more accurate than the traditional statistical model, the single MLP model, and the other ensemble models in preparing trustworthy landslide susceptibility maps, thereby enhancing land use planning and decision-making.
In disaster relief operations, multiple UAVs can be used to search for trapped people. In recent years, many researchers have proposed machine learning-based algorithms, sampling-based algorithms, and heuristic algorithms to solve the problem of multi-UAV path planning. Among these, the Dung Beetle Optimization (DBO) algorithm has been widely applied due to its diverse search patterns. However, the update strategies for the rolling and thieving dung beetles of the DBO algorithm are overly simplistic, potentially leading to an inability to fully explore the search space and a tendency to converge to local optima, so the discovery of the optimal path is not guaranteed. To address these issues, we propose an improved DBO algorithm guided by the Landmark Operator (LODBO). Specifically, we first use tent mapping in the population initialization strategy, which enables the algorithm to generate initial solutions with enhanced diversity within the search space. Second, we expand the search range of the rolling dung beetle by using the landmark factor. Finally, by using an adaptive factor that changes with the number of iterations, we improve the global search ability of the thieving dung beetle, making it more likely to escape from local optima. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted. The results show that the LODBO algorithm obtains the optimal path in the shortest time compared with the Genetic Algorithm (GA), the Gray Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the original DBO algorithm on the disaster search and rescue task set.
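A minimal sketch of tent-map population initialization as mentioned above: iterate a chaotic tent map to fill each individual and scale the values into the search bounds. The tent parameter, population size, and bounds are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def tent_map_population(pop_size: int, dim: int, lower: float, upper: float,
                        a: float = 0.7, seed: float = 0.37) -> np.ndarray:
    """Initialize a population with the chaotic tent map:
    x_{k+1} = x_k / a if x_k < a, else (1 - x_k) / (1 - a), then scale into [lower, upper]."""
    x = seed
    chaos = np.empty(pop_size * dim)
    for k in range(chaos.size):
        x = x / a if x < a else (1.0 - x) / (1.0 - a)
        x = min(max(x, 1e-12), 1.0 - 1e-12)  # keep the orbit strictly inside (0, 1)
        chaos[k] = x
    return lower + (upper - lower) * chaos.reshape(pop_size, dim)

# Example: 20 flattened waypoint vectors (10 waypoints * 3 coordinates) in a [0, 100] search cube
pop = tent_map_population(pop_size=20, dim=30, lower=0.0, upper=100.0)
print(pop.shape, round(pop.min(), 2), round(pop.max(), 2))
```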
We propose an integrated method of data-driven and mechanism models for well logging formation evaluation, focusing explicitly on predicting reservoir parameters such as porosity and water saturation. Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas. However, with the increasing complexity of geological conditions in this industry, there is a growing demand for improved accuracy in reservoir parameter prediction, leading to higher costs associated with manual interpretation. Conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters, which suffer from low interpretation efficiency, strong subjectivity, and suitability only for ideal conditions. The application of artificial intelligence to the interpretation of logging data provides a new solution to the problems of traditional methods and is expected to improve the accuracy and efficiency of interpretation. If large and high-quality datasets exist, data-driven models can reveal relationships of arbitrary complexity. Nevertheless, constructing sufficiently large logging datasets with reliable labels remains challenging, making it difficult to apply data-driven models effectively to logging data interpretation. Furthermore, data-driven models often act as "black boxes" without explaining their predictions or ensuring compliance with primary physical constraints. This paper proposes a machine learning method with strong physical constraints by integrating mechanism and data-driven models. Prior knowledge of logging data interpretation is embedded into the machine learning model through the network structure, loss function, and optimization algorithm. We employ the Physically Informed Auto-Encoder (PIAE) to predict porosity and water saturation, which can be trained without labeled reservoir parameters using self-supervised learning techniques. This approach effectively achieves automated interpretation and facilitates generalization across diverse datasets.
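A minimal sketch of the self-supervised, physics-constrained idea described above: predicted porosity and water saturation are pushed through textbook forward models (a density-log mixing rule and Archie's equation are assumed here, not necessarily the paper's mechanism models), and the loss compares the reconstructed logs with the measured ones, so no labeled reservoir parameters are needed.

```python
import numpy as np

def forward_density(phi: np.ndarray, rho_matrix: float = 2.65, rho_fluid: float = 1.0) -> np.ndarray:
    """Density-log mixing rule: rho_b = phi * rho_fluid + (1 - phi) * rho_matrix."""
    return phi * rho_fluid + (1.0 - phi) * rho_matrix

def forward_resistivity(phi: np.ndarray, sw: np.ndarray,
                        a: float = 1.0, m: float = 2.0, n: float = 2.0, rw: float = 0.05) -> np.ndarray:
    """Archie's equation: R_t = a * R_w / (phi^m * S_w^n)."""
    return a * rw / (np.clip(phi, 1e-3, None) ** m * np.clip(sw, 1e-3, None) ** n)

def physics_loss(phi_pred, sw_pred, rho_log, rt_log, w_rho: float = 1.0, w_rt: float = 1.0) -> float:
    """Self-supervised loss: mismatch between measured logs and logs reconstructed from predictions."""
    rho_err = np.mean((forward_density(phi_pred) - rho_log) ** 2)
    rt_err = np.mean((np.log10(forward_resistivity(phi_pred, sw_pred)) - np.log10(rt_log)) ** 2)
    return float(w_rho * rho_err + w_rt * rt_err)

# Synthetic check: the loss is (near) zero when predictions are consistent with the measured logs
phi_true, sw_true = np.full(100, 0.20), np.full(100, 0.60)
rho_log, rt_log = forward_density(phi_true), forward_resistivity(phi_true, sw_true)
print(physics_loss(phi_true, sw_true, rho_log, rt_log))
print(physics_loss(phi_true * 0.5, sw_true, rho_log, rt_log))
```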
In this paper, we prove that Euclid's algorithm, Bezout's equation, and the Division algorithm are equivalent to each other. Our result shows that Euclid had already preliminarily established the theory of divisibility and the greatest common divisor. We further provide several suggestions for teaching.
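A small illustration of the connection the abstract refers to: running the extended Euclidean algorithm, which applies the division algorithm at every step, yields both gcd(a, b) and Bezout coefficients x, y with a*x + b*y = gcd(a, b).

```python
def extended_gcd(a: int, b: int) -> tuple[int, int, int]:
    """Return (g, x, y) such that g = gcd(a, b) and a*x + b*y = g."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r                      # one application of the division algorithm
        old_r, r = r, old_r - q * r         # the Euclidean step: gcd(a, b) = gcd(b, a mod b)
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(240, 46)
print(g, x, y, 240 * x + 46 * y)            # 2 -9 47 2 -> Bezout's equation holds
```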
Conducting predictability studies is essential for tracing the sources of forecast errors, which not only leads to the improvement of observation and forecasting systems, but also enhances the understanding of weather and climate phenomena. In the past few decades, dynamical numerical models have been the primary tools for predictability studies, achieving significant progress. Nowadays, with advances in artificial intelligence (AI) techniques and the accumulation of vast meteorological data, modeling weather and climate events using modern data-driven approaches is becoming trendy, where FourCastNet, Pangu-Weather, and GraphCast are successful pioneers. In this perspective article, we suggest that AI models should not be limited to forecasting but be expanded to predictability studies, leveraging AI's advantages of high efficiency and self-contained optimization modules. To this end, we first remark that AI models should possess high simulation capability with fine spatiotemporal resolution for two kinds of predictability studies. AI models with simulation capabilities comparable to numerical models can be considered to provide solutions to partial differential equations in a data-driven way. Then, we highlight several specific predictability issues with well-determined nonlinear optimization formulations, which can be well studied using AI models and hold significant scientific value. In addition, we advocate for the incorporation of AI models into the synergistic cycle of the cognition-observation-model paradigm. Comprehensive predictability studies have the potential to transform "big data" into "big and better data" and shift the focus from "AI for forecasts" to "AI for science", ultimately advancing the development of the atmospheric and oceanic sciences.
基金supported by the National Natural Science Foundation of China[grant number 51775020]the Science Challenge Project[grant number.TZ2018007]+2 种基金the National Natural Science Foundation of China[grant number 62073009]the Postdoctoral Fellowship Program of CPSF[grant number GZC20233365]the Fundamental Research Funds for Central Universities[grant number JKF-20240559].
文摘Parameter extraction of photovoltaic(PV)models is crucial for the planning,optimization,and control of PV systems.Although some methods using meta-heuristic algorithms have been proposed to determine these parameters,the robustness of solutions obtained by these methods faces great challenges when the complexity of the PV model increases.The unstable results will affect the reliable operation and maintenance strategies of PV systems.In response to this challenge,an improved rime optimization algorithm with enhanced exploration and exploitation,termed TERIME,is proposed for robust and accurate parameter identification for various PV models.Specifically,the differential evolution mutation operator is integrated in the exploration phase to enhance the population diversity.Meanwhile,a new exploitation strategy incorporating randomization and neighborhood strategies simultaneously is developed to maintain the balance of exploitation width and depth.The TERIME algorithm is applied to estimate the optimal parameters of the single diode model,double diode model,and triple diode model combined with the Lambert-W function for three PV cell and module types including RTC France,Photo Watt-PWP 201 and S75.According to the statistical analysis in 100 runs,the proposed algorithm achieves more accurate and robust parameter estimations than other techniques to various PV models in varying environmental conditions.All of our source codes are publicly available at https://github.com/dirge1/TERIME.
基金financial support from the National Key R&D Program of China(Grant No.2020YFB1711100).
文摘To address the challenge of identifying the primary causes of energy consumption fluctuations and accurately assessing the influence of various factors in the converter unit of an iron and steel plant,the focus is placed on the critical components of material and heat balance.Through a thorough analysis of the interactions between various components and energy consumptions,six pivotal factors have been identified—raw material composition,steel type,steel temperature,slag temperature,recycling practices,and operational parameters.Utilizing a framework based on an equivalent energy consumption model,an integrated intelligent diagnostic model has been developed that encapsulates these factors,providing a comprehensive assessment tool for converter energy consumption.Employing the K-means clustering algorithm,historical operational data from the converter have been meticulously analyzed to determine baseline values for essential variables such as energy consumption and recovery rates.Building upon this data-driven foundation,an innovative online system for the intelligent diagnosis of converter energy consumption has been crafted and implemented,enhancing the precision and efficiency of energy management.Upon implementation with energy consumption data at a steel plant in 2023,the diagnostic analysis performed by the system exposed significant variations in energy usage across different converter units.The analysis revealed that the most significant factor influencing the variation in energy consumption for both furnaces was the steel grade,with contributions of−0.550 and 0.379.
基金funded by Anhui NARI ZT Electric Co.,Ltd.,entitled“Research on the Shared Operation and Maintenance Service Model for Metering Equipment and Platform Development for the Modern Industrial Chain”(Grant No.524636250005).
文摘With the rapid adoption of artificial intelligence(AI)in domains such as power,transportation,and finance,the number of machine learning and deep learning models has grown exponentially.However,challenges such as delayed retraining,inconsistent version management,insufficient drift monitoring,and limited data security still hinder efficient and reliable model operations.To address these issues,this paper proposes the Intelligent Model Lifecycle Management Algorithm(IMLMA).The algorithm employs a dual-trigger mechanism based on both data volume thresholds and time intervals to automate retraining,and applies Bayesian optimization for adaptive hyperparameter tuning to improve performance.A multi-metric replacement strategy,incorporating MSE,MAE,and R2,ensures that new models replace existing ones only when performance improvements are guaranteed.A versioning and traceability database supports comparison and visualization,while real-time monitoring with stability analysis enables early warnings of latency and drift.Finally,hash-based integrity checks secure both model files and datasets.Experimental validation in a power metering operation scenario demonstrates that IMLMA reduces model update delays,enhances predictive accuracy and stability,and maintains low latency under high concurrency.This work provides a practical,reusable,and scalable solution for intelligent model lifecycle management,with broad applicability to complex systems such as smart grids.
文摘The objective of this study is to develop an advanced approach to variogram modelling by integrating genetic algorithms(GA)with machine learning-based linear regression,aiming to improve the accuracy and efficiency of geostatistical analysis,particularly in mineral exploration.The study combines GA and machine learning to optimise variogram parameters,including range,sill,and nugget,by minimising the root mean square error(RMSE)and maximising the coefficient of determination(R^(2)).The experimental variograms were computed and modelled using theoretical models,followed by optimisation via evolutionary algorithms.The method was applied to gravity data from the Ngoura-Batouri-Kette mining district in Eastern Cameroon,covering 141 data points.Sequential Gaussian Simulations(SGS)were employed for predictive mapping to validate simulated results against true values.Key findings show variograms with ranges between 24.71 km and 49.77 km,opti-mised RMSE and R^(2) values of 11.21 mGal^(2) and 0.969,respectively,after 42 generations of GA optimisation.Predictive mapping using SGS demonstrated that simulated values closely matched true values,with the simu-lated mean at 21.75 mGal compared to the true mean of 25.16 mGal,and variances of 465.70 mGal^(2) and 555.28 mGal^(2),respectively.The results confirmed spatial variability and anisotropies in the N170-N210 directions,consistent with prior studies.This work presents a novel integration of GA and machine learning for variogram modelling,offering an automated,efficient approach to parameter estimation.The methodology significantly enhances predictive geostatistical models,contributing to the advancement of mineral exploration and improving the precision and speed of decision-making in the petroleum and mining industries.
基金supported by the fundings from 2024 Young Talents Program for Science and Technology Thinking Tanks(No.XMSB20240711041)2024 Student Research Program on Dynamic Simulation and Force-on-Force Exercise of Nuclear Security in 3D Interactive Environment Using Reinforcement Learning,Natural Science Foundation of Top Talent of SZTU(No.GDRC202407)+2 种基金Shenzhen Science and Technology Program(No.KCXFZ20240903092603005)Shenzhen Science and Technology Program(No.JCYJ20241202124703004)Shenzhen Science and Technology Program(No.KJZD20230923114117032)。
文摘Vulnerability assessment is a systematic process to identify security gaps in the design and evaluation of physical protection systems.Adversarial path planning is a widely used method for identifying potential vulnerabilities and threats to the security and resilience of critical infrastructures.However,achieving efficient path optimization in complex large-scale three-dimensional(3D)scenes remains a significant challenge for vulnerability assessment.This paper introduces a novel A^(*)-algorithmic framework for 3D security modeling and vulnerability assessment.Within this framework,the 3D facility models were first developed in 3ds Max and then incorporated into Unity for A^(*)heuristic pathfinding.The A^(*)-heuristic pathfinding algorithm was implemented with a geometric probability model to refine the detection and distance fields and achieve a rational approximation of the cost to reach the goal.An admissible heuristic is ensured by incorporating the minimum probability of detection(P_(D)^(min))and diagonal distance to estimate the heuristic function.The 3D A^(*)heuristic search was demonstrated using a hypothetical laboratory facility,where a comparison was also carried out between the A^(*)and Dijkstra algorithms for optimal path identification.Comparative results indicate that the proposed A^(*)-heuristic algorithm effectively identifies the most vulnerable adversarial pathfinding with high efficiency.Finally,the paper discusses hidden phenomena and open issues in efficient 3D pathfinding for security applications.
基金supported by the National Key Research and Development Program of China(2023YFB3307801)the National Natural Science Foundation of China(62394343,62373155,62073142)+3 种基金Major Science and Technology Project of Xinjiang(No.2022A01006-4)the Programme of Introducing Talents of Discipline to Universities(the 111 Project)under Grant B17017the Fundamental Research Funds for the Central Universities,Science Foundation of China University of Petroleum,Beijing(No.2462024YJRC011)the Open Research Project of the State Key Laboratory of Industrial Control Technology,China(Grant No.ICT2024B70).
文摘The distillation process is an important chemical process,and the application of data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling,thus improving the efficiency of process optimization or monitoring studies.However,the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals,which brings challenges to accurate data-driven modelling of distillation processes.This paper proposes a systematic data-driven modelling framework to solve these problems.Firstly,data segment variance was introduced into the K-means algorithm to form K-means data interval(KMDI)clustering in order to cluster the data into perturbed and steady state intervals for steady-state data extraction.Secondly,maximal information coefficient(MIC)was employed to calculate the nonlinear correlation between variables for removing redundant features.Finally,extreme gradient boosting(XGBoost)was integrated as the basic learner into adaptive boosting(AdaBoost)with the error threshold(ET)set to improve weights update strategy to construct the new integrated learning algorithm,XGBoost-AdaBoost-ET.The superiority of the proposed framework is verified by applying this data-driven modelling framework to a real industrial process of propylene distillation.
基金the Chinese Clinical Trial Registry(No.ChiCTR2000040109)approved by the Hospital Ethics Committee(No.20210130017).
文摘BACKGROUND Difficulty of colonoscopy insertion(DCI)significantly affects colonoscopy effectiveness and serves as a key quality indicator.Predicting and evaluating DCI risk preoperatively is crucial for optimizing intraoperative strategies.AIM To evaluate the predictive performance of machine learning(ML)algorithms for DCI by comparing three modeling approaches,identify factors influencing DCI,and develop a preoperative prediction model using ML algorithms to enhance colonoscopy quality and efficiency.METHODS This cross-sectional study enrolled 712 patients who underwent colonoscopy at a tertiary hospital between June 2020 and May 2021.Demographic data,past medical history,medication use,and psychological status were collected.The endoscopist assessed DCI using the visual analogue scale.After univariate screening,predictive models were developed using multivariable logistic regression,least absolute shrinkage and selection operator(LASSO)regression,and random forest(RF)algorithms.Model performance was evaluated based on discrimination,calibration,and decision curve analysis(DCA),and results were visualized using nomograms.RESULTS A total of 712 patients(53.8%male;mean age 54.5 years±12.9 years)were included.Logistic regression analysis identified constipation[odds ratio(OR)=2.254,95%confidence interval(CI):1.289-3.931],abdominal circumference(AC)(77.5–91.9 cm,OR=1.895,95%CI:1.065-3.350;AC≥92 cm,OR=1.271,95%CI:0.730-2.188),and anxiety(OR=1.071,95%CI:1.044-1.100)as predictive factors for DCI,validated by LASSO and RF methods.Model performance revealed training/validation sensitivities of 0.826/0.925,0.924/0.868,and 1.000/0.981;specificities of 0.602/0.511,0.510/0.562,and 0.977/0.526;and corresponding area under the receiver operating characteristic curves(AUCs)of 0.780(0.737-0.823)/0.726(0.654-0.799),0.754(0.710-0.798)/0.723(0.656-0.791),and 1.000(1.000-1.000)/0.754(0.688-0.820),respectively.DCA indicated optimal net benefit within probability thresholds of 0-0.9 and 0.05-0.37.The RF model demonstrated superior diagnostic accuracy,reflected by perfect training sensitivity(1.000)and highest validation AUC(0.754),outperforming other methods in clinical applicability.CONCLUSION The RF-based model exhibited superior predictive accuracy for DCI compared to multivariable logistic and LASSO regression models.This approach supports individualized preoperative optimization,enhancing colonoscopy quality through targeted risk stratification.
文摘Cluster-basedmodels have numerous application scenarios in vehicular ad-hoc networks(VANETs)and can greatly help improve the communication performance of VANETs.However,the frequent movement of vehicles can often lead to changes in the network topology,thereby reducing cluster stability in urban scenarios.To address this issue,we propose a clustering model based on the density peak clustering(DPC)method and sparrow search algorithm(SSA),named SDPC.First,the model constructs a fitness function based on the parameters obtained from the DPC method and deploys the SSA for iterative optimization to select cluster heads(CHs).Then,the vehicles that have not been selected as CHs are assigned to appropriate clusters by comprehensively considering the distance parameter and link-reliability parameter.Finally,cluster maintenance strategies are considered to tackle the changes in the clusters’organizational structure.To verify the performance of the model,we conducted a simulation on a real-world scenario for multiple metrics related to clusters’stability.The results show that compared with the APROVE and the GAPC,SDPC showed clear performance advantages,indicating that SDPC can effectively ensure VANETs’cluster stability in urban scenarios.
基金supported by the National Key Research and Development Program of China(Grant Nos.2021YFC3000705 and 2021YFC3000705-05)the National Natural Science Foundation of China(Grant No.42074049)the Youth Innovation Promotion Association of the Chinese Academy of Sciences(Grant No.2023471).
文摘We propose a robust earthquake clustering method:the Bayesian Gaussian mixture model with nearest-neighbor distance(BGMM-NND)algorithm.Unlike the conventional nearest neighbor distance method,the BGMM-NND algorithm eliminates the need for hyperparameter tuning or reliance on fixed thresholds,offering enhanced flexibility for clustering across varied seismic scales.By integrating cumulative probability and BGMM with principal component analysis(PCA),the BGMM-NND algorithm effectively distinguishes between background and triggered earthquakes while maintaining the magnitude component and resolving the issue of excessively large spatial cluster domains.We apply the BGMM-NND algorithm to the Sichuan–Yunnan seismic catalog from 1971 to 2024,revealing notable variations in earthquake frequency,triggering characteristics,and recurrence patterns across different fault zones.Distinct clustering and triggering behaviors are identified along different segments of the Longmenshan Fault.Multiple seismic modes,namely,the short-distance mode,the medium-distance mode,the repeating-like mode,the uniform background mode,and the Wenchuan mode,are uncovered.The algorithm's flexibility and robust performance in earthquake clustering makes it a valuable tool for exploring seismicity characteristics,offering new insights into earthquake clustering and the spatiotemporal patterns of seismic activity.
基金supported by the Changzhou Science and Technology Support Project(CE20235045)Open Subject of Jiangsu Province Key Laboratory of Power Transmission and Distribution(2021JSSPD12)+1 种基金Talent Projects of Jiangsu University of Technology(KYY20018)Postgraduate Research&Practice Innovation Program of Jiangsu Province(SJCX23_1633).
文摘Energy storage power plants are critical in balancing power supply and demand.However,the scheduling of these plants faces significant challenges,including high network transmission costs and inefficient inter-device energy utilization.To tackle these challenges,this study proposes an optimal scheduling model for energy storage power plants based on edge computing and the improved whale optimization algorithm(IWOA).The proposed model designs an edge computing framework,transferring a large share of data processing and storage tasks to the network edge.This architecture effectively reduces transmission costs by minimizing data travel time.In addition,the model considers demand response strategies and builds an objective function based on the minimization of the sum of electricity purchase cost and operation cost.The IWOA enhances the optimization process by utilizing adaptive weight adjustments and an optimal neighborhood perturbation strategy,preventing the algorithm from converging to suboptimal solutions.Experimental results demonstrate that the proposed scheduling model maximizes the flexibility of the energy storage plant,facilitating efficient charging and discharging.It successfully achieves peak shaving and valley filling for both electrical and heat loads,promoting the effective utilization of renewable energy sources.The edge-computing framework significantly reduces transmission delays between energy devices.Furthermore,IWOA outperforms traditional algorithms in optimizing the objective function.
基金supported by National Key Research and Development Program of China (2023YFB3307800)National Natural Science Foundation of China (Key Program: 62136003, 62373155)+1 种基金Major Science and Technology Project of Xinjiang (No. 2022A01006-4)the Fundamental Research Funds for the Central Universities。
文摘Hydrocracking is one of the most important petroleum refining processes that converts heavy oils into gases,naphtha,diesel,and other products through cracking reactions.Multi-objective optimization algorithms can help refining enterprises determine the optimal operating parameters to maximize product quality while ensuring product yield,or to increase product yield while reducing energy consumption.This paper presents a multi-objective optimization scheme for hydrocracking based on an improved SPEA2-PE algorithm,which combines path evolution operator and adaptive step strategy to accelerate the convergence speed and improve the computational accuracy of the algorithm.The reactor model used in this article is simulated based on a twenty-five lumped kinetic model.Through model and test function verification,the proposed optimization scheme exhibits significant advantages in the multiobjective optimization process of hydrocracking.
基金partially supported by the National Natural Science Foundation of China(62161016)the Key Research and Development Project of Lanzhou Jiaotong University(ZDYF2304)+1 种基金the Beijing Engineering Research Center of Highvelocity Railway Broadband Mobile Communications(BHRC-2022-1)Beijing Jiaotong University。
文摘In order to solve the problems of short network lifetime and high data transmission delay in data gathering for wireless sensor network(WSN)caused by uneven energy consumption among nodes,a hybrid energy efficient clustering routing base on firefly and pigeon-inspired algorithm(FF-PIA)is proposed to optimise the data transmission path.After having obtained the optimal number of cluster head node(CH),its result might be taken as the basis of producing the initial population of FF-PIA algorithm.The L′evy flight mechanism and adaptive inertia weighting are employed in the algorithm iteration to balance the contradiction between the global search and the local search.Moreover,a Gaussian perturbation strategy is applied to update the optimal solution,ensuring the algorithm can jump out of the local optimal solution.And,in the WSN data gathering,a onedimensional signal reconstruction algorithm model is developed by dilated convolution and residual neural networks(DCRNN).We conducted experiments on the National Oceanic and Atmospheric Administration(NOAA)dataset.It shows that the DCRNN modeldriven data reconstruction algorithm improves the reconstruction accuracy as well as the reconstruction time performance.FF-PIA and DCRNN clustering routing co-simulation reveals that the proposed algorithm can effectively improve the performance in extending the network lifetime and reducing data transmission delay.
文摘The word“spatial”fundamentally relates to human existence,evolution,and activity in terrestrial and even celestial spaces.After reviewing the spatial features of many areas,the paper describes basics of high level model and technology called Spatial Grasp for dealing with large distributed systems,which can provide spatial vision,awareness,management,control,and even consciousness.The technology description includes its key Spatial Grasp Language(SGL),self-evolution of recursive SGL scenarios,and implementation of SGL interpreter converting distributed networked systems into powerful spatial engines.Examples of typical spatial scenarios in SGL include finding shortest path tree and shortest path between network nodes,collecting proper information throughout the whole world,elimination of multiple targets by intelligent teams of chasers,and withstanding cyber attacks in distributed networked systems.Also this paper compares Spatial Grasp model with traditional algorithms,confirming universality of the former for any spatial systems,while the latter just tools for concrete applications.
Funding: co-supported by the National Natural Science Foundation of China (Nos. 52405293 and 52375237), the China Postdoctoral Science Foundation (No. 2024M754219), and the Shaanxi Province Postdoctoral Research Project Funding, China.
Abstract: To accomplish reliability analyses of correlated multi-analytical objectives, an innovative framework of Dimensional Synchronous Modeling (DSM) and correlation analysis is developed based on a stepwise modeling strategy, the cell array operation principle, and Copula theory. Under this framework, we propose a DSM-based Enhanced Kriging (DSMEK) algorithm to model multiple objectives synchronously, and explore an adaptive Copula function approach to analyze the correlation among multiple objectives and assess the synthetical reliability level. In the proposed DSMEK and adaptive Copula methods, the Kriging model is treated as the basis function of the DSMEK model, the Multi-Objective Snake Optimizer (MOSO) algorithm is used to search for the optimal hyperparameters of the basis functions, the cell array operation principle is adopted to establish a whole model of multiple objectives, the goodness of fit is utilized to determine the forms of the Copula functions, and the determined Copula functions are employed to perform the reliability analyses of the correlated objectives. Furthermore, three examples, including multi-objective complex function approximation, aeroengine turbine bladed-disc multi-failure-mode reliability analyses, and aircraft landing gear system brake temperature reliability analyses, are used to verify the effectiveness of the proposed methods from the viewpoints of both mathematics and engineering. The results show that the DSMEK and adaptive Copula approaches hold obvious advantages in terms of modeling features and simulation performance. This work provides a useful way to model multi-analytical objectives and to perform synthetical reliability analyses of complex structures/systems with multi-output responses.
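As a rough illustration of the two building blocks named above (one Kriging surrogate per response plus a copula describing the dependence between responses), the following Python sketch fits scikit-learn Gaussian process surrogates to synthetic data and estimates a Gaussian-copula correlation from rank-transformed outputs; the MOSO hyperparameter search and the paper's adaptive copula-family selection are not reproduced, and all data and constants are illustrative.

import numpy as np
from scipy.stats import norm, rankdata
from sklearn.gaussian_process import GaussianProcessRegressor

# Hypothetical training data: inputs x and two correlated output objectives.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, (40, 2))
y1 = np.sin(3 * x[:, 0]) + 0.05 * rng.normal(size=40)
y2 = np.sin(3 * x[:, 0]) + x[:, 1] + 0.05 * rng.normal(size=40)

# One Kriging (Gaussian process) surrogate per objective.
gp1 = GaussianProcessRegressor().fit(x, y1)
gp2 = GaussianProcessRegressor().fit(x, y2)

# Gaussian-copula dependence between the two responses:
# map each margin to uniform via ranks, then to normal scores, then correlate.
u1 = rankdata(y1) / (len(y1) + 1)
u2 = rankdata(y2) / (len(y2) + 1)
rho = np.corrcoef(norm.ppf(u1), norm.ppf(u2))[0, 1]
print("estimated Gaussian-copula correlation:", round(rho, 3))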
Funding: supported by the National Natural Science Foundation of China (NSFC) under Grant No. 51677058.
Abstract: Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. First, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested on the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
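The core idea, searching over a recurrent network's initial weights with a population-based optimizer instead of relying on pseudo-random initialization, can be sketched as follows. This Python toy uses a tiny Elman-style cell, a synthetic capacity-fade sequence, and a simplified elitist population search standing in for the full Sparrow Search Algorithm, so every name and constant here is an illustrative assumption rather than the paper's configuration.

import numpy as np

rng = np.random.default_rng(0)

def elman_forecast(w, seq, hidden=4):
    """Tiny Elman-style recurrent net: one hidden layer with a recurrent state."""
    w_in, w_rec, w_out = np.split(w, [hidden, hidden + hidden * hidden])
    w_rec = w_rec.reshape(hidden, hidden)
    h = np.zeros(hidden)
    preds = []
    for x_t in seq[:-1]:
        h = np.tanh(w_in * x_t + w_rec @ h)
        preds.append(w_out @ h)
    return np.array(preds)

def fitness(w, seq):
    """One-step-ahead prediction error used as the optimization objective."""
    return np.mean((elman_forecast(w, seq) - seq[1:]) ** 2)

# Toy capacity-fade-like sequence standing in for SOH labels.
seq = 1.0 - 0.002 * np.arange(100) + 0.002 * rng.normal(size=100)

# Simplified elitist population search standing in for SSA: keep the best weight
# vector and regenerate the rest around it (SSA's producer/scrounger roles are omitted).
dim = 4 + 16 + 4
pop = rng.normal(0, 0.5, (30, dim))
for _ in range(50):
    scores = np.array([fitness(w, seq) for w in pop])
    best = pop[scores.argmin()]
    pop = best + rng.normal(0, 0.1, (30, dim))
    pop[0] = best  # elitism: always retain the current best weights
print("best one-step MSE:", fitness(best, seq))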
Funding: the University of Transport Technology, under the project entitled “Application of Machine Learning Algorithms in Landslide Susceptibility Mapping in Mountainous Areas” with grant number DTTD2022-16.
Abstract: This study aimed to prepare landslide susceptibility maps for the Pithoragarh district in Uttarakhand, India, using advanced ensemble models that combine Radial Basis Function Networks (RBFN) with three ensemble learning techniques: DAGGING (DG), MULTIBOOST (MB), and ADABOOST (AB). This combination resulted in three distinct ensemble models: DG-RBFN, MB-RBFN, and AB-RBFN. Additionally, a traditional weighted method, Information Value (IV), and a benchmark machine learning (ML) model, the Multilayer Perceptron Neural Network (MLP), were employed for comparison and validation. The models were developed using ten landslide conditioning factors: slope, aspect, elevation, curvature, land cover, geomorphology, overburden depth, lithology, distance to rivers, and distance to roads. These factors were used to predict the output variable, the probability of landslide occurrence. Statistical analysis of the models' performance indicated that the DG-RBFN model, with an Area Under the ROC Curve (AUC) of 0.931, outperformed the other models. The AB-RBFN model achieved an AUC of 0.929, the MB-RBFN model an AUC of 0.913, and the MLP model an AUC of 0.926. These results suggest that the advanced ensemble ML model DG-RBFN was more accurate than the traditional statistical model, the single MLP model, and the other ensemble models in preparing trustworthy landslide susceptibility maps, thereby supporting land use planning and decision-making.
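The comparison pattern described above (an ensemble of neural-network base learners evaluated against a single network by AUC) can be reproduced in miniature with scikit-learn. The sketch below uses synthetic stand-in data and a bootstrap bagging ensemble in place of the DAGGING/MULTIBOOST/ADABOOST ensembles of the study, so the numbers it prints are purely illustrative; the estimator keyword assumes scikit-learn 1.2 or newer.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the ten landslide conditioning factors and binary labels.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

single = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0).fit(X_tr, y_tr)
ensemble = BaggingClassifier(
    estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    n_estimators=10, random_state=0).fit(X_tr, y_tr)

for name, model in [("single MLP", single), ("bagged ensemble", ensemble)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")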
Funding: supported by the National Natural Science Foundation of China (No. 62373027).
Abstract: In disaster relief operations, multiple UAVs can be used to search for trapped people. In recent years, many researchers have proposed machine learning-based algorithms, sampling-based algorithms, and heuristic algorithms to solve the multi-UAV path planning problem. Among these, the Dung Beetle Optimization (DBO) algorithm has been widely applied because of its diverse search patterns. However, the update strategies for the ball-rolling and thieving dung beetles of the DBO algorithm are overly simplistic, potentially leading to an inability to fully explore the search space and a tendency to converge to local optima, so the discovery of the optimal path is not guaranteed. To address these issues, we propose an improved DBO algorithm guided by the Landmark Operator (LODBO). Specifically, we first use tent mapping in the population initialization strategy, which enables the algorithm to generate initial solutions with enhanced diversity within the search space. Second, we expand the search range of the ball-rolling dung beetle by using the landmark factor. Finally, by using an adaptive factor that changes with the number of iterations, we improve the global search ability of the thieving dung beetle, making it more likely to escape from local optima. To verify the effectiveness of the proposed method, extensive simulation experiments are conducted; the results show that the LODBO algorithm obtains the optimal path in the shortest time compared with the Genetic Algorithm (GA), the Gray Wolf Optimizer (GWO), the Whale Optimization Algorithm (WOA), and the original DBO algorithm on the disaster search and rescue task set.
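The first of the three modifications, chaotic tent-map initialization, is easy to show concretely. The Python sketch below generates a diverse initial population inside given bounds by iterating the tent map; the bounds, population size, and map parameter are illustrative assumptions, and the landmark and adaptive factors of LODBO are not reproduced.

import numpy as np

def tent_map_init(pop_size, dim, lower, upper, mu=0.7, rng=None):
    """Chaotic tent-map initialization: iterate the tent map per dimension
    and scale the chaotic sequence into the search bounds."""
    rng = rng or np.random.default_rng()
    x = rng.uniform(0.01, 0.99, dim)          # avoid the fixed points at 0 and 1
    pop = np.empty((pop_size, dim))
    for i in range(pop_size):
        x = np.where(x < mu, x / mu, (1 - x) / (1 - mu))   # tent map update
        pop[i] = lower + x * (upper - lower)
    return pop

population = tent_map_init(pop_size=30, dim=3,
                           lower=np.array([0.0, 0.0, 0.0]),
                           upper=np.array([100.0, 100.0, 50.0]))
print(population.shape)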
Funding: supported by the National Key Research and Development Program (2019YFA0708301), the National Natural Science Foundation of China (51974337), the Strategic Cooperation Projects of CNPC and CUPB (ZLZX2020-03), the Science and Technology Innovation Fund of CNPC (2021DQ02-0403), and the Open Fund of the Petroleum Exploration and Development Research Institute of CNPC (2022-KFKT-09).
Abstract: We propose an integrated method combining data-driven and mechanism models for well logging formation evaluation, explicitly focusing on predicting reservoir parameters such as porosity and water saturation. Accurately interpreting these parameters is crucial for effectively exploring and developing oil and gas. However, with the increasing complexity of geological conditions in this industry, there is a growing demand for improved accuracy in reservoir parameter prediction, leading to higher costs for manual interpretation. Conventional logging interpretation methods rely on empirical relationships between logging data and reservoir parameters; they suffer from low interpretation efficiency and strong subjectivity, and are only suitable for ideal conditions. Applying artificial intelligence to the interpretation of logging data provides a new solution to these problems and is expected to improve interpretation accuracy and efficiency. If large, high-quality datasets exist, data-driven models can reveal relationships of arbitrary complexity. Nevertheless, constructing sufficiently large logging datasets with reliable labels remains challenging, making it difficult to apply data-driven models effectively to logging data interpretation. Furthermore, data-driven models often act as “black boxes”, neither explaining their predictions nor ensuring compliance with basic physical constraints. This paper proposes a machine learning method with strong physical constraints by integrating mechanism and data-driven models. Prior knowledge of logging data interpretation is embedded into the machine learning pipeline through the network structure, loss function, and optimization algorithm. We employ a Physically Informed Auto-Encoder (PIAE) to predict porosity and water saturation, which can be trained without labeled reservoir parameters using self-supervised learning techniques. This approach achieves automated interpretation and facilitates generalization across diverse datasets.
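One common way to embed a mechanism model in the loss function, as described above, is to penalize deviations from a known petrophysical relation. The Python sketch below combines a reconstruction term with a residual of Archie's equation linking porosity, water saturation, and resistivity; Archie's equation and all parameter values here are generic textbook choices used only as an example of a physics constraint, not the PIAE's actual formulation.

import numpy as np

def physics_constrained_loss(logs, logs_reconstructed, phi_pred, sw_pred, rt_log,
                             a=1.0, m=2.0, n=2.0, rw=0.05, lam=1.0):
    """Composite loss: data reconstruction + a physics residual.
    The physics term uses Archie's equation, Sw**n = a*Rw / (phi**m * Rt),
    purely as an illustrative mechanism constraint."""
    recon = np.mean((logs - logs_reconstructed) ** 2)
    rt_from_physics = a * rw / (np.clip(phi_pred, 1e-3, 1.0) ** m *
                                np.clip(sw_pred, 1e-3, 1.0) ** n)
    physics = np.mean((np.log10(rt_from_physics) - rt_log) ** 2)
    return recon + lam * physics

# Tiny synthetic usage: two depth samples with consistent physics give a small penalty.
phi = np.array([0.18, 0.22]); sw = np.array([0.4, 0.7])
logs = np.array([1.0, 2.0]); recon = np.array([1.1, 1.9])
rt_log = np.log10(1.0 * 0.05 / (phi ** 2.0 * sw ** 2.0))
print(physics_constrained_loss(logs, recon, phi, sw, rt_log))

The weight lam trades off fidelity to the measured logs against consistency with the assumed physical relation.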
基金Supported by the Natural Science Foundation of Chongqing(General Program,NO.CSTB2022NSCQ-MSX0884)Discipline Teaching Special Project of Yangtze Normal University(csxkjx14)。
Abstract: In this paper, we prove that Euclid's algorithm, Bezout's equation, and the Division algorithm are equivalent to one another. Our result shows that Euclid had already preliminarily established the theory of divisibility and the greatest common divisor. We further provide several suggestions for teaching.
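The link between Euclid's algorithm and Bezout's equation can be made concrete with the extended Euclidean algorithm; the short Python sketch below computes gcd(a, b) together with Bezout coefficients x, y satisfying a*x + b*y = gcd(a, b) (the sample values 240 and 46 are arbitrary).

def extended_gcd(a, b):
    """Return (g, x, y) with g = gcd(a, b) and a*x + b*y = g (Bezout's identity)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(240, 46)
print(g, x, y, 240 * x + 46 * y)   # gcd = 2 and the Bezout coefficients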
Funding: supported in part by the National Natural Science Foundation of China (Grant Nos. 42288101, 42405147, and 42475054) and in part by the China National Postdoctoral Program for Innovative Talents (Grant No. BX20230071).
Abstract: Conducting predictability studies is essential for tracing the sources of forecast errors, which not only leads to the improvement of observation and forecasting systems but also enhances the understanding of weather and climate phenomena. In the past few decades, dynamical numerical models have been the primary tools for predictability studies and have achieved significant progress. Nowadays, with advances in artificial intelligence (AI) techniques and the accumulation of vast meteorological data, modeling weather and climate events with modern data-driven approaches is becoming a trend, with FourCastNet, Pangu-Weather, and GraphCast as successful pioneers. In this perspective article, we suggest that AI models should not be limited to forecasting but should be extended to predictability studies, leveraging AI's advantages of high efficiency and self-contained optimization modules. To this end, we first remark that AI models should possess high simulation capability with fine spatiotemporal resolution for two kinds of predictability studies. AI models with simulation capabilities comparable to those of numerical models can be regarded as providing solutions to partial differential equations in a data-driven way. Then, we highlight several specific predictability issues with well-determined nonlinear optimization formulations, which can be studied effectively using AI models and hold significant scientific value. In addition, we advocate for the incorporation of AI models into the synergistic cycle of the cognition–observation–model paradigm. Comprehensive predictability studies have the potential to transform “big data” into “big and better data” and to shift the focus from “AI for forecasts” to “AI for science”, ultimately advancing the development of the atmospheric and oceanic sciences.