Parameter extraction of photovoltaic (PV) models is crucial for the planning, optimization, and control of PV systems. Although some methods using meta-heuristic algorithms have been proposed to determine these parameters, the robustness of the solutions obtained by these methods faces great challenges as the complexity of the PV model increases. The unstable results will affect the reliable operation and maintenance strategies of PV systems. In response to this challenge, an improved rime optimization algorithm with enhanced exploration and exploitation, termed TERIME, is proposed for robust and accurate parameter identification of various PV models. Specifically, the differential evolution mutation operator is integrated into the exploration phase to enhance population diversity. Meanwhile, a new exploitation strategy incorporating randomization and neighborhood strategies simultaneously is developed to maintain the balance between exploitation width and depth. The TERIME algorithm is applied to estimate the optimal parameters of the single diode model, double diode model, and triple diode model combined with the Lambert-W function for three PV cell and module types: RTC France, Photo Watt-PWP 201, and S75. According to statistical analysis over 100 runs, the proposed algorithm achieves more accurate and robust parameter estimations than other techniques for various PV models under varying environmental conditions. All of our source codes are publicly available at https://github.com/dirge1/TERIME.
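As a point of reference for the Lambert-W formulation mentioned above, the sketch below evaluates the single diode model explicitly in Python; the parameter values are illustrative figures of the order commonly reported for the RTC France cell, not the estimates produced by TERIME.

```python
import numpy as np
from scipy.special import lambertw

def single_diode_current(V, Iph, Io, Rs, Rsh, n, Ns=1, T=306.15):
    """Explicit Lambert-W solution of I = Iph - Io*(exp((V+I*Rs)/a)-1) - (V+I*Rs)/Rsh."""
    k, q = 1.380649e-23, 1.602176634e-19
    a = n * Ns * k * T / q                       # modified ideality factor (thermal voltage times n*Ns)
    theta = (Rs * Rsh * Io / (a * (Rs + Rsh))) * np.exp(
        Rsh * (Rs * (Iph + Io) + V) / (a * (Rs + Rsh)))
    return (Rsh * (Iph + Io) - V) / (Rs + Rsh) - (a / Rs) * lambertw(theta).real

# Illustrative parameters of the order reported for the RTC France cell at about 33 degrees C
V = np.linspace(-0.2, 0.6, 50)
I = single_diode_current(V, Iph=0.7608, Io=3.23e-7, Rs=0.0364, Rsh=53.72, n=1.481)
```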
To address the challenge of identifying the primary causes of energy consumption fluctuations and accurately assessing the influence of various factors in the converter unit of an iron and steel plant, the focus is placed on the critical components of material and heat balance. Through a thorough analysis of the interactions between the various components and energy consumption, six pivotal factors have been identified: raw material composition, steel type, steel temperature, slag temperature, recycling practices, and operational parameters. Utilizing a framework based on an equivalent energy consumption model, an integrated intelligent diagnostic model has been developed that encapsulates these factors, providing a comprehensive assessment tool for converter energy consumption. Employing the K-means clustering algorithm, historical operational data from the converter have been analyzed to determine baseline values for essential variables such as energy consumption and recovery rates. Building upon this data-driven foundation, an online system for the intelligent diagnosis of converter energy consumption has been developed and implemented, enhancing the precision and efficiency of energy management. Upon implementation with energy consumption data from a steel plant in 2023, the diagnostic analysis performed by the system exposed significant variations in energy usage across different converter units. The analysis revealed that the most significant factor influencing the variation in energy consumption for both furnaces was the steel grade, with contributions of −0.550 and 0.379.
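A minimal sketch of the K-means baseline step described above, assuming a table of per-heat operating variables; the synthetic data, column layout, and cluster count are placeholders rather than the plant's actual configuration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Placeholder heats: column 0 = energy consumption, remaining columns = other operating variables
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))

# Cluster standardized heats, then compute cluster centroids back in the original units
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(StandardScaler().fit_transform(X))
centroids = np.array([X[labels == k].mean(axis=0) for k in range(4)])

# Take the centroid of the lowest-energy cluster as the baseline operating point
baseline = centroids[np.argmin(centroids[:, 0])]
```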
With the rapid adoption of artificial intelligence (AI) in domains such as power, transportation, and finance, the number of machine learning and deep learning models has grown exponentially. However, challenges such as delayed retraining, inconsistent version management, insufficient drift monitoring, and limited data security still hinder efficient and reliable model operations. To address these issues, this paper proposes the Intelligent Model Lifecycle Management Algorithm (IMLMA). The algorithm employs a dual-trigger mechanism based on both data volume thresholds and time intervals to automate retraining, and applies Bayesian optimization for adaptive hyperparameter tuning to improve performance. A multi-metric replacement strategy, incorporating MSE, MAE, and R2, ensures that new models replace existing ones only when performance improvements are guaranteed. A versioning and traceability database supports comparison and visualization, while real-time monitoring with stability analysis enables early warnings of latency and drift. Finally, hash-based integrity checks secure both model files and datasets. Experimental validation in a power metering operation scenario demonstrates that IMLMA reduces model update delays, enhances predictive accuracy and stability, and maintains low latency under high concurrency. This work provides a practical, reusable, and scalable solution for intelligent model lifecycle management, with broad applicability to complex systems such as smart grids.
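The retraining trigger, replacement gate, and integrity check lend themselves to compact code. The sketch below is a hedged illustration of those three IMLMA ideas with placeholder thresholds; it is not the paper's implementation.

```python
import hashlib
import time
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def should_retrain(new_samples, last_train_time, sample_threshold=10_000, max_interval_s=7 * 24 * 3600):
    """Dual trigger: retrain when enough new data has accumulated OR enough time has passed."""
    return new_samples >= sample_threshold or (time.time() - last_train_time) >= max_interval_s

def replace_if_better(y_true, pred_old, pred_new):
    """Multi-metric gate: promote the new model only if it improves MSE, MAE, and R2 simultaneously."""
    old = (mean_squared_error(y_true, pred_old), mean_absolute_error(y_true, pred_old), -r2_score(y_true, pred_old))
    new = (mean_squared_error(y_true, pred_new), mean_absolute_error(y_true, pred_new), -r2_score(y_true, pred_new))
    return all(n < o for n, o in zip(new, old))

def file_fingerprint(path):
    """Hash-based integrity check for model files and datasets."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()
```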
The objective of this study is to develop an advanced approach to variogram modelling by integrating genetic algorithms (GA) with machine learning-based linear regression, aiming to improve the accuracy and efficiency of geostatistical analysis, particularly in mineral exploration. The study combines GA and machine learning to optimise variogram parameters, including range, sill, and nugget, by minimising the root mean square error (RMSE) and maximising the coefficient of determination (R^2). The experimental variograms were computed and modelled using theoretical models, followed by optimisation via evolutionary algorithms. The method was applied to gravity data from the Ngoura-Batouri-Kette mining district in Eastern Cameroon, covering 141 data points. Sequential Gaussian Simulations (SGS) were employed for predictive mapping to validate simulated results against true values. Key findings show variograms with ranges between 24.71 km and 49.77 km, and optimised RMSE and R^2 values of 11.21 mGal^2 and 0.969, respectively, after 42 generations of GA optimisation. Predictive mapping using SGS demonstrated that simulated values closely matched true values, with the simulated mean at 21.75 mGal compared to the true mean of 25.16 mGal, and variances of 465.70 mGal^2 and 555.28 mGal^2, respectively. The results confirmed spatial variability and anisotropies in the N170-N210 directions, consistent with prior studies. This work presents a novel integration of GA and machine learning for variogram modelling, offering an automated, efficient approach to parameter estimation. The methodology significantly enhances predictive geostatistical models, contributing to the advancement of mineral exploration and improving the precision and speed of decision-making in the petroleum and mining industries.
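To make the GA-based fitting concrete, the sketch below fits a spherical variogram model (nugget, partial sill, range) to an experimental variogram by minimising RMSE with a minimal real-coded GA; the lag data, population size, operators, and bounds are illustrative choices, not the paper's configuration.

```python
import numpy as np

def spherical(h, nugget, psill, a_range):
    """Spherical variogram model evaluated at lag distances h."""
    h = np.asarray(h, float)
    g = np.where(h <= a_range,
                 nugget + psill * (1.5 * h / a_range - 0.5 * (h / a_range) ** 3),
                 nugget + psill)
    return np.where(h == 0, 0.0, g)

def rmse(params, h, gamma_exp):
    return np.sqrt(np.mean((spherical(h, *params) - gamma_exp) ** 2))

def fit_variogram_ga(h, gamma_exp, bounds, pop=60, gens=42, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rs = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    X = rs.uniform(lo, hi, size=(pop, len(bounds)))
    best = min(X, key=lambda x: rmse(x, h, gamma_exp))
    for _ in range(gens):
        fit = np.array([rmse(x, h, gamma_exp) for x in X])
        a, b = rs.integers(pop, size=pop), rs.integers(pop, size=pop)
        parents = X[np.where(fit[a] < fit[b], a, b)]            # tournament selection
        mates = parents[rs.permutation(pop)]
        alpha = rs.uniform(size=(pop, 1))
        X = alpha * parents + (1 - alpha) * mates               # blend (arithmetic) crossover
        X += rs.normal(0, 0.05 * (hi - lo), X.shape) * (rs.uniform(size=X.shape) < 0.2)
        X = np.clip(X, lo, hi)                                  # Gaussian mutation, clipped to bounds
        cand = min(X, key=lambda x: rmse(x, h, gamma_exp))
        if rmse(cand, h, gamma_exp) < rmse(best, h, gamma_exp):
            best = cand
    return best                                                 # (nugget, partial sill, range)

# Illustrative experimental variogram (lags in km, semivariance in mGal^2); not the paper's data
h_lags = np.array([5, 10, 15, 20, 25, 30, 35, 40.0])
gamma_exp = np.array([3.1, 5.8, 7.9, 9.4, 10.3, 10.9, 11.1, 11.2])
nugget, psill, a_range = fit_variogram_ga(h_lags, gamma_exp, bounds=[(0, 5), (0, 20), (5, 60)])
```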
Accurate and reliable photovoltaic (PV) modeling is crucial for the performance evaluation, control, and optimization of PV systems. However, existing methods for PV parameter identification often suffer from limitations in accuracy and efficiency. To address these challenges, we propose an adaptive multi-learning cooperation search algorithm (AMLCSA) for efficient identification of unknown parameters in PV models. AMLCSA is a novel algorithm inspired by teamwork behaviors in modern enterprises. It enhances the original cooperation search algorithm in two key aspects: (i) an adaptive multi-learning strategy that dynamically adjusts search ranges using adaptive weights, allowing better individuals to focus on local exploitation while guiding poorer individuals toward global exploration; and (ii) a chaotic grouping reflection strategy that introduces chaotic sequences to enhance population diversity and improve search performance. The effectiveness of AMLCSA is demonstrated on single-diode, double-diode, and three PV-module models. Simulation results show that AMLCSA offers significant advantages in convergence, accuracy, and stability compared to existing state-of-the-art algorithms.
Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks across accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing the development of scalable, explainable, and operationally viable LLM-based DGA detection systems.
Vulnerability assessment is a systematic process to identify security gaps in the design and evaluation of physical protection systems. Adversarial path planning is a widely used method for identifying potential vulnerabilities and threats to the security and resilience of critical infrastructures. However, achieving efficient path optimization in complex large-scale three-dimensional (3D) scenes remains a significant challenge for vulnerability assessment. This paper introduces a novel A*-algorithmic framework for 3D security modeling and vulnerability assessment. Within this framework, the 3D facility models were first developed in 3ds Max and then incorporated into Unity for A* heuristic pathfinding. The A*-heuristic pathfinding algorithm was implemented with a geometric probability model to refine the detection and distance fields and achieve a rational approximation of the cost to reach the goal. An admissible heuristic is ensured by incorporating the minimum probability of detection (P_D^min) and diagonal distance to estimate the heuristic function. The 3D A* heuristic search was demonstrated using a hypothetical laboratory facility, where a comparison was also carried out between the A* and Dijkstra algorithms for optimal path identification. Comparative results indicate that the proposed A*-heuristic algorithm effectively identifies the most vulnerable adversarial path with high efficiency. Finally, the paper discusses hidden phenomena and open issues in efficient 3D pathfinding for security applications.
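The admissibility argument above can be illustrated on a 2D grid analogue: if every step is charged the local detection probability times the step length, then the minimum detection probability times the diagonal (octile) distance never overestimates the remaining cost. The sketch below assumes a hypothetical detection-probability grid; the paper itself works on 3D facility models inside Unity.

```python
import heapq
import math

def octile(a, b):
    # Diagonal (octile) distance between grid cells a = (x, y) and b
    dx, dy = abs(a[0] - b[0]), abs(a[1] - b[1])
    return (dx + dy) + (math.sqrt(2) - 2) * min(dx, dy)

def astar(grid_pd, start, goal):
    """grid_pd[y][x] is a per-cell probability of detection used as the cost rate."""
    pd_min = min(min(row) for row in grid_pd)             # lower bound on the cost rate
    h = lambda n: pd_min * octile(n, goal)                # admissible heuristic: P_D^min x diagonal distance
    H, W = len(grid_pd), len(grid_pd[0])
    openq, g, parent = [(h(start), start)], {start: 0.0}, {start: None}
    while openq:
        _, cur = heapq.heappop(openq)
        if cur == goal:                                   # rebuild the path back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        cx, cy = cur
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nxt = (cx + dx, cy + dy)
                if not (0 <= nxt[0] < W and 0 <= nxt[1] < H):
                    continue
                step = math.hypot(dx, dy)
                ng = g[cur] + grid_pd[nxt[1]][nxt[0]] * step   # cost = detection probability x distance
                if ng < g.get(nxt, float("inf")):
                    g[nxt], parent[nxt] = ng, cur
                    heapq.heappush(openq, (ng + h(nxt), nxt))
    return None

# Toy 3x3 detection-probability field; higher values in the middle column push the path around it
grid = [[0.2, 0.9, 0.2],
        [0.2, 0.9, 0.2],
        [0.2, 0.2, 0.2]]
print(astar(grid, start=(0, 0), goal=(2, 0)))
```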
The distillation process is an important chemical process, and the application of data-driven modelling approaches has the potential to reduce model complexity compared to mechanistic modelling, thus improving the efficiency of process optimization or monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which brings challenges to accurate data-driven modelling of distillation processes. This paper proposes a systematic data-driven modelling framework to solve these problems. Firstly, data segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, in order to cluster the data into perturbed and steady-state intervals for steady-state data extraction. Secondly, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables for removing redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight update strategy, to construct the new integrated learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying this data-driven modelling framework to a real industrial process of propylene distillation.
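A hedged sketch of two of the building blocks named above: MIC-based feature screening (using the minepy implementation of MIC) and AdaBoost with XGBoost base learners. The error-threshold (ET) weight-update modification and the KMDI steady-state extraction are not reproduced here, and all hyperparameters are placeholders.

```python
import numpy as np
from minepy import MINE                      # MIC implementation; requires the minepy package
from sklearn.ensemble import AdaBoostRegressor
from xgboost import XGBRegressor

def mic_screen(X, y, threshold=0.3):
    """Keep feature columns whose maximal information coefficient with y exceeds a threshold."""
    mine, keep = MINE(alpha=0.6, c=15), []
    for j in range(X.shape[1]):
        mine.compute_score(X[:, j], y)
        if mine.mic() >= threshold:
            keep.append(j)
    return keep

# AdaBoost over XGBoost base learners; the paper's ET-modified weight update is omitted
model = AdaBoostRegressor(XGBRegressor(n_estimators=100, max_depth=4, learning_rate=0.1),
                          n_estimators=20, learning_rate=0.5, random_state=0)
# Typical usage: cols = mic_screen(X_train, y_train); model.fit(X_train[:, cols], y_train)
```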
BACKGROUND: Difficulty of colonoscopy insertion (DCI) significantly affects colonoscopy effectiveness and serves as a key quality indicator. Predicting and evaluating DCI risk preoperatively is crucial for optimizing intraoperative strategies. AIM: To evaluate the predictive performance of machine learning (ML) algorithms for DCI by comparing three modeling approaches, identify factors influencing DCI, and develop a preoperative prediction model using ML algorithms to enhance colonoscopy quality and efficiency. METHODS: This cross-sectional study enrolled 712 patients who underwent colonoscopy at a tertiary hospital between June 2020 and May 2021. Demographic data, past medical history, medication use, and psychological status were collected. The endoscopist assessed DCI using the visual analogue scale. After univariate screening, predictive models were developed using multivariable logistic regression, least absolute shrinkage and selection operator (LASSO) regression, and random forest (RF) algorithms. Model performance was evaluated based on discrimination, calibration, and decision curve analysis (DCA), and results were visualized using nomograms. RESULTS: A total of 712 patients (53.8% male; mean age 54.5 ± 12.9 years) were included. Logistic regression analysis identified constipation [odds ratio (OR) = 2.254, 95% confidence interval (CI): 1.289-3.931], abdominal circumference (AC) (77.5-91.9 cm, OR = 1.895, 95% CI: 1.065-3.350; AC ≥ 92 cm, OR = 1.271, 95% CI: 0.730-2.188), and anxiety (OR = 1.071, 95% CI: 1.044-1.100) as predictive factors for DCI, validated by LASSO and RF methods. Model performance revealed training/validation sensitivities of 0.826/0.925, 0.924/0.868, and 1.000/0.981; specificities of 0.602/0.511, 0.510/0.562, and 0.977/0.526; and corresponding areas under the receiver operating characteristic curve (AUCs) of 0.780 (0.737-0.823)/0.726 (0.654-0.799), 0.754 (0.710-0.798)/0.723 (0.656-0.791), and 1.000 (1.000-1.000)/0.754 (0.688-0.820), respectively. DCA indicated optimal net benefit within probability thresholds of 0-0.9 and 0.05-0.37. The RF model demonstrated superior diagnostic accuracy, reflected by perfect training sensitivity (1.000) and the highest validation AUC (0.754), outperforming other methods in clinical applicability. CONCLUSION: The RF-based model exhibited superior predictive accuracy for DCI compared to multivariable logistic and LASSO regression models. This approach supports individualized preoperative optimization, enhancing colonoscopy quality through targeted risk stratification.
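The three modeling approaches compared in the study map directly onto standard scikit-learn estimators. The sketch below shows that comparison on synthetic placeholder data standing in for the preoperative predictors and the DCI label; it illustrates the workflow, not the study's code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression, LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data: three predictors (e.g. constipation, abdominal circumference, anxiety score) and a DCI label
rng = np.random.default_rng(0)
X = rng.normal(size=(712, 3))
y = (X @ [0.8, 0.6, 0.9] + rng.normal(size=712) > 0).astype(int)

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "logistic": LogisticRegression(max_iter=1000),
    "lasso":    LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5, max_iter=1000),
    "rf":       RandomForestClassifier(n_estimators=500, random_state=0),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    auc = roc_auc_score(y_va, m.predict_proba(X_va)[:, 1])   # validation discrimination
    print(f"{name}: validation AUC = {auc:.3f}")
```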
Challenges in stratigraphic modeling arise from underground uncertainty. While borehole exploration is reliable, it remains sparse due to economic and site constraints. Electrical resistivity tomography (ERT), as a cost-effective geophysical technique, can acquire high-density data; however, uncertainty and nonuniqueness inherent in ERT impede its usage for stratigraphy identification. This paper integrates ERT and onsite observations for the first time to propose a novel method for characterizing stratigraphic profiles. The method consists of two steps: (1) ERT for prior knowledge: ERT data are processed by soft clustering using the Gaussian mixture model, followed by probability smoothing to quantify its depth-dependent uncertainty; and (2) Observations for calibration: a spatial sequential Bayesian updating (SSBU) algorithm is developed to update the prior knowledge based on likelihoods derived from onsite observations, namely topsoil and boreholes. The effectiveness of the proposed method is validated through its application to a real slope site in Foshan, China. Comparative analysis with advanced borehole-driven methods highlights the superiority of incorporating ERT data in stratigraphic modeling, in terms of prediction accuracy at borehole locations and sensitivity to borehole data. Informed by ERT, reduced sensitivity to boreholes provides a fundamental solution to the longstanding challenge of sparse measurements. The paper further discusses the impact of ERT uncertainty on the proposed model using time-lapse measurements, the impact of model resolution, and applicability in engineering projects. This study, as a breakthrough in stratigraphic modeling, bridges gaps in combining geophysical and geotechnical data to address measurement sparsity and paves the way for more economical geotechnical exploration.
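A minimal sketch of the first step, soft clustering of inverted resistivities with a Gaussian mixture model; the synthetic resistivities, the use of log-resistivity, and the three-component choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder inverted ERT resistivities (ohm-m) drawn from three overlapping populations
rng = np.random.default_rng(0)
rho = np.concatenate([rng.lognormal(3.0, 0.3, 400),   # e.g. a clayey layer
                      rng.lognormal(4.5, 0.3, 400),   # e.g. a sandy layer
                      rng.lognormal(6.0, 0.3, 200)])  # e.g. weathered rock
log_rho = np.log10(rho).reshape(-1, 1)                # cluster in log-resistivity space

gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(log_rho)
P = gmm.predict_proba(log_rho)                        # soft stratum membership, shape (n_cells, 3)

# In the paper these probabilities form the prior, which is then smoothed with depth and
# updated with topsoil and borehole observations via sequential Bayesian updating.
```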
The work proposes a distributed Kalman filtering (KF) algorithm to track a time-varying unknown signal process for a stochastic regression model over network systems in a cooperative way. We provide the stability analysis of the proposed distributed KF algorithm without independent and stationary signal assumptions, which implies that the theoretical results can be applied to stochastic feedback systems. Note that the main difficulty of the stability analysis lies in analyzing the properties of the product of non-independent and non-stationary random matrices involved in the error equation. We employ analysis techniques such as the stochastic Lyapunov function, the stability theory of stochastic systems, and algebraic graph theory to deal with this issue. The stochastic spatio-temporal cooperative information condition captures the cooperative property of multiple sensors: even though no local sensor alone can track the time-varying unknown signal, the distributed KF algorithm can accomplish the filtering task in a cooperative way. Finally, we illustrate the property of the proposed distributed KF algorithm by a simulation example.
Gas-bearing volcanic reservoirs have been found in the deep Songliao Basin, China. Choosing proper interpretation parameters for log evaluation is difficult due to complicated mineral compositions and variable mineral contents. Based on the QAPF classification scheme given by IUGS, we propose a method to determine the mineral contents of volcanic rocks using log data and a genetic algorithm. According to the QAPF scheme, minerals in volcanic rocks are divided into five groups: Q (quartz), A (alkaline feldspar), P (plagioclase), M (mafic) and F (feldspathoid). We propose a model called QAPM including porosity for the volumetric analysis of reservoirs. The log response equations for density, apparent neutron porosity, transit time, gamma ray and volume photoelectrical cross section index were first established with the mineral parameters obtained from the Schlumberger handbook of log mineral parameters. Then the volumes of the four minerals in the matrix were calculated using the genetic algorithm (GA). The calculated porosity, based on the interpretation parameters, can be compared with core porosity, and the rock names given in the paper based on QAPF classification according to the four mineral contents are compatible with those from the chemical analysis of the core samples.
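The volumetric analysis amounts to solving a small linear response system in which each log reading is a volume-weighted sum of end-point parameters and the five volumes sum to one. The sketch below solves that system by non-negative least squares on made-up, order-of-magnitude end-point values; the paper instead searches for the volumes with a genetic algorithm using the handbook parameters.

```python
import numpy as np
from scipy.optimize import nnls

# Rows: log responses (density, neutron, transit time, GR, volumetric PE index U)
# Columns: Q, A, P, M, porosity. Entries are rough placeholders, not the handbook values used in the paper.
A = np.array([
    [2.65,  2.57,  2.69,  3.00, 1.00],   # bulk density (g/cm3)
    [-0.02, -0.01, -0.01, 0.20, 1.00],   # apparent neutron porosity (v/v)
    [182.,  226.,  185.,  156., 620.],   # transit time (us/m)
    [30.,   220.,  20.,   11.,  0.],     # gamma ray (API)
    [4.8,   8.7,   9.7,   14.,  0.4],    # volumetric photoelectric index U
])
logs = np.array([2.49, 0.08, 230., 95., 6.0])   # measured responses at one depth (illustrative)

# Enforce that the five volumes sum to one via a heavily weighted constraint row,
# then solve the non-negative least-squares problem.
w = 100.0
A_c = np.vstack([A, w * np.ones(5)])
b_c = np.append(logs, w * 1.0)
volumes, _ = nnls(A_c, b_c)                     # [V_Q, V_A, V_P, V_M, porosity]
```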
In order to solve the problems of potential incident rescue on expressway networks, the opportunity cost-based method is used to establish a resource dispatch decision model. The model aims to dispatch the rescue resources from the regional road networks and to obtain the locations of the rescue depots and the numbers of service vehicles assigned to the potential incidents. Due to the computational complexity of the decision model, a scene decomposition algorithm is proposed. The algorithm decomposes the dispatch problem from various kinds of resources to a single resource, and determines the original scene of rescue resources based on the rescue requirements and the resource matrix. Finally, a convenient optimal dispatch scheme is obtained by decomposing each original scene and simplifying the objective function. To illustrate the application of the decision model and the algorithm, a case study of the expressway network in the area around Nanjing, China, is presented, and the results show that the model used and the algorithm proposed are appropriate.
A solution to compute the optimal path based on a single-line-single-directional (SLSD) road network model is proposed. Unlike the traditional road network model, in the SLSD conceptual model a road, being single-directional and single-line in style, is no longer a linkage of road nodes but is abstracted as a network node. Similarly, a road node is abstracted as the linkage of two ordered single-directional roads. This model can describe turn restrictions, circular roads, and other real scenarios usually described using a super-graph. Then a computing framework for optimal path finding (OPF) is presented. It is proved that the classical Dijkstra and A* algorithms can be directly used for OPF computing of any real-world road network by transforming a super-graph into an SLSD network. Finally, using Singapore road network data, the proposed conceptual model and its corresponding optimal path finding algorithms are validated using a two-step optimal path finding algorithm with a pre-computing strategy based on the SLSD road network.
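A small sketch of the SLSD idea: every directed road becomes a graph node, every permitted ordered pair of roads at a junction becomes an arc, and banned turns are simply omitted, after which classical Dijkstra applies unchanged. The toy network and the networkx usage are illustrative assumptions, not the paper's Singapore data or implementation.

```python
import networkx as nx

def to_slsd(edges, banned_turns=frozenset()):
    """edges: iterable of (u, v, length) directed roads; banned_turns: set of (u, v, w) triples.
    Each directed road becomes an SLSD node; a junction becomes links between ordered road pairs."""
    G = nx.DiGraph()
    by_head = {}
    for u, v, length in edges:
        G.add_node((u, v), length=length)
        by_head.setdefault(u, []).append((u, v, length))
    for u, v, length in edges:
        for _, w, next_len in by_head.get(v, []):
            if (u, v, w) not in banned_turns and w != u:     # drop banned turns and immediate U-turns
                G.add_edge((u, v), (v, w), weight=next_len)  # cost of entering the next road
    return G

edges = [("A", "B", 3.0), ("B", "C", 2.0), ("B", "D", 4.0), ("A", "D", 8.0), ("D", "C", 1.0)]
G = to_slsd(edges, banned_turns={("A", "B", "C")})           # e.g. the turn from road A-B onto B-C is forbidden
path = nx.dijkstra_path(G, ("A", "B"), ("D", "C"))           # classical Dijkstra on the SLSD graph
cost = G.nodes[("A", "B")]["length"] + nx.dijkstra_path_length(G, ("A", "B"), ("D", "C"))
print(path, cost)
```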
Current dynamic finite element model updating methods are either inefficient or restricted to the problem of local optima. To circumvent these issues, a novel updating method which integrates a meta-model and the genetic algorithm is proposed. The experimental design technique is used to determine the best sampling points for the estimation of polynomial coefficients given the order and the number of independent variables. Finite element analyses are performed to generate the sampling data. Regression analysis is then used to estimate the response surface model to approximate the functional relationship between response features and design parameters over the entire design space. In the fitness evaluation of the genetic algorithm, the response surface model is used to substitute the finite element model to output features with given design parameters for the computation of fitness for each individual. Finally, the global optimum that corresponds to the updated design parameters is acquired after several generations of evolution. In the application example, finite element analysis and modal testing are performed on a real chassis model. The finite element model is updated using the proposed method. After updating, the root-mean-square error of the modal frequencies is smaller than 2%. Furthermore, the prediction ability of the updated model is validated using the testing results of the modified structure. The root-mean-square error of the prediction errors is smaller than 2%.
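A hedged sketch of the surrogate-plus-search loop: a quadratic response surface is fitted to design-of-experiments samples and then minimised against measured modal frequencies. Differential evolution stands in here for the paper's genetic algorithm, and all data, sample counts, and bounds are placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Placeholder DOE samples of design parameters and the modal frequencies a FE model would return for them
X_doe = np.random.uniform(-1, 1, size=(50, 3))
F_doe = np.column_stack([40 + 5 * X_doe[:, 0], 90 + 8 * X_doe[:, 1], 150 + 6 * X_doe[:, 2]])

# Quadratic response surface fitted by least-squares regression
surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression()).fit(X_doe, F_doe)
f_measured = np.array([41.2, 93.5, 148.1])        # placeholder test frequencies

def fitness(p):
    """Relative discrepancy between surrogate-predicted and measured modal frequencies."""
    return np.sum(((surrogate.predict(p.reshape(1, -1))[0] - f_measured) / f_measured) ** 2)

# The paper minimizes this fitness with a genetic algorithm; differential evolution stands in here.
res = differential_evolution(fitness, bounds=[(-1, 1)] * 3, seed=0)
updated_params = res.x
```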
To realize automatic modeling and dynamic simulation of the educational assembling-type robot with an open structure, a general dynamic model of the educational assembling-type robot and a fast simulation algorithm are put forward. First, the educational robot system is abstracted as a multibody system and a general dynamic model of the educational robot is constructed by the Newton-Euler method. The dynamic model is then simplified by combining components with fixed connections according to the structural characteristics of the educational robot. Second, in order to obtain a high-efficiency simulation algorithm, the augmentation algorithm and the direct projective constraint stabilization algorithm are improved based on the sparse matrix technique. Finally, a numerical example is given. The results show that the model and the fast algorithm are valid and effective. This study lays a dynamic foundation for realizing a simulation platform for the educational robot.
A new arrival and departure flight classification method based on the transitive closure algorithm (TCA) is proposed. First, fuzzy set theory and the transitive closure algorithm are introduced. Then four different factors are selected to establish the flight classification model, and a method is given to calculate the delay cost for each class. Finally, the proposed method is applied to the sequencing problems of flights in a terminal area, and the results are compared with those of the traditional classification method (TCM). Results show that the new classification model is effective in reducing the expenses of flight delays, thus optimizing the sequences of arrival and departure flights and improving the efficiency of air traffic control.
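The transitive closure step is short enough to show directly: square the fuzzy similarity matrix under max-min composition until it stops changing, then cut it at a level lambda to obtain flight classes. The similarity matrix below is a placeholder, not one built from the paper's four factors.

```python
import numpy as np

def max_min_compose(A, B):
    # (A o B)[i, j] = max over k of min(A[i, k], B[k, j])
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def transitive_closure(R):
    """Square the fuzzy similarity matrix until R o R == R."""
    while True:
        R2 = max_min_compose(R, R)
        if np.allclose(R2, R):
            return R
        R = R2

def classify(R_closure, lam):
    """Cut the closure at level lam and group flights into equivalence classes."""
    n, labels, current = len(R_closure), -np.ones(len(R_closure), int), 0
    for i in range(n):
        if labels[i] < 0:
            labels[np.where(R_closure[i] >= lam)[0]] = current
            current += 1
    return labels

# Placeholder pairwise fuzzy similarity of three flights
R = np.array([[1.0, 0.8, 0.4],
              [0.8, 1.0, 0.5],
              [0.4, 0.5, 1.0]])
print(classify(transitive_closure(R), lam=0.6))   # flights 0 and 1 fall into one class, flight 2 into another
```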
Aiming at the real-time fluctuation and nonlinear characteristics of expressway short-term traffic flow, the parameter projection pursuit regression (PPPR) model is applied to forecast the expressway traffic flow, where the orthogonal Hermite polynomial is used to fit the ridge functions and the least squares method is employed to determine the polynomial weight coefficient c. In order to efficiently optimize the projection direction a and the number M of ridge functions of the PPPR model, the chaos cloud particle swarm optimization (CCPSO) algorithm is applied to optimize the parameters. The CCPSO-PPPR hybrid optimization model for expressway short-term traffic flow forecasting is established, in which the CCPSO algorithm optimizes the projection direction a in the inner layer while the number M of ridge functions is optimized in the outer layer. Traffic volume, weather factors, and travel date of the previous several time intervals of the road section are taken as the input factors. Example forecasting and model comparison results indicate that the proposed model obtains a better forecasting effect, with its absolute error controlled within [-6, 6], which meets the application requirements of expressway traffic flow forecasting.
To reduce the model complexity caused by the complicated morphological structure of the rice panicle, an interactive L-system based on a substructure algorithm is proposed in this study to model the rice panicle. Through the analysis of panicle morphology, geometrical structure models of the panicle spikelet, axis, and branch were first constructed. Based on these, an interactive panicle L-system model was developed using the substructure algorithm to optimize panicle geometrical models with similar structure. Simulation results showed that the interactive L-system panicle model based on the substructure algorithm can quickly construct realistic panicle morphological structures. In addition, the method provides a useful reference for modeling other plants.
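A minimal string L-system sketch of the kind of rewriting involved; the symbols and production rules below are hypothetical, and the substructure optimization is only indicated in the trailing comment.

```python
def expand(axiom, rules, iterations):
    """Plain string L-system expansion: rewrite every symbol by its production rule each pass."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

# Hypothetical symbols: P = panicle axis apex, B = primary branch, S = spikelet, I = internode;
# square brackets mark branching, as in classic bracketed L-systems.
rules = {"P": "I[B]P", "B": "I[S]B"}
print(expand("P", rules, iterations=3))
# The substructure idea amounts to deriving a repeated unit such as "[B]" once, caching its
# geometry, and instancing it along the axis instead of re-deriving it at every position.
```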
To improve the performance of the traditional map matching algorithms in freeway traffic state monitoring systems using the low logging frequency GPS (global positioning system) probe data, a map matching algorithm based on the Oracle spatial data model is proposed. The algorithm uses the Oracle road network data model to analyze the spatial relationships between massive GPS positioning points and freeway networks, builds an N-shortest path algorithm to find reasonable candidate routes between GPS positioning points efficiently, and uses the fuzzy logic inference system to determine the final matched traveling route. According to the implementation with field data from Los Angeles, the computation speed of the algorithm is about 135 GPS positioning points per second and the accuracy is 98.9%. The results demonstrate the effectiveness and accuracy of the proposed algorithm for mapping massive GPS positioning data onto freeway networks with complex geometric characteristics.
Funding (TERIME PV parameter extraction): supported by the National Natural Science Foundation of China [grant numbers 51775020 and 62073009], the Science Challenge Project [grant number TZ2018007], the Postdoctoral Fellowship Program of CPSF [grant number GZC20233365], and the Fundamental Research Funds for Central Universities [grant number JKF-20240559].
Funding (converter energy consumption diagnosis): financial support from the National Key R&D Program of China (Grant No. 2020YFB1711100).
Funding (IMLMA model lifecycle management): funded by Anhui NARI ZT Electric Co., Ltd., through the project entitled "Research on the Shared Operation and Maintenance Service Model for Metering Equipment and Platform Development for the Modern Industrial Chain" (Grant No. 524636250005).
Funding (AMLCSA PV parameter identification): supported by the National Natural Science Foundation of China (Grant Nos. 62303197 and 62273214) and the Natural Science Foundation of Shandong Province (ZR2024MFO18).
Funding (LLM-based DGA detection survey): supported by the Deanship of Scientific Research at King Khalid University through the Large Group Research Project under grant number GRP.2/663/46.
Funding (A*-based 3D vulnerability assessment): supported by the 2024 Young Talents Program for Science and Technology Thinking Tanks (No. XMSB20240711041), the 2024 Student Research Program on Dynamic Simulation and Force-on-Force Exercise of Nuclear Security in 3D Interactive Environment Using Reinforcement Learning, the Natural Science Foundation of Top Talent of SZTU (No. GDRC202407), and the Shenzhen Science and Technology Program (Nos. KCXFZ20240903092603005, JCYJ20241202124703004, and KJZD20230923114117032).
Funding (propylene distillation data-driven modelling): supported by the National Key Research and Development Program of China (2023YFB3307801), the National Natural Science Foundation of China (62394343, 62373155, 62073142), the Major Science and Technology Project of Xinjiang (No. 2022A01006-4), the Programme of Introducing Talents of Discipline to Universities (the 111 Project) under Grant B17017, the Fundamental Research Funds for the Central Universities, the Science Foundation of China University of Petroleum, Beijing (No. 2462024YJRC011), and the Open Research Project of the State Key Laboratory of Industrial Control Technology, China (Grant No. ICT2024B70).
Registration and ethics (colonoscopy insertion difficulty prediction): registered in the Chinese Clinical Trial Registry (No. ChiCTR2000040109) and approved by the Hospital Ethics Committee (No. 20210130017).
Funding (ERT-informed stratigraphic modeling): financial support from the National Key R&D Program of China (Grant No. 2021YFC3001003), the Science and Technology Development Fund, Macao SAR (File No. 0056/2023/RIB2), and the Guangdong Provincial Department of Science and Technology (Grant No. 2022A0505030019).
Funding (distributed Kalman filtering): supported in part by the Sichuan Science and Technology Program under Grant No. 2025ZNSFSC151, in part by the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA27030201, in part by the Natural Science Foundation of China under Grant No. U21B6001, and in part by the Natural Science Foundation of Tianjin under Grant No. 24JCQNJC01930.
Funding (volcanic mineral content estimation): National Natural Science Foundation of China (No. 49894194-4).
Funding (expressway rescue resource dispatch): the National Natural Science Foundation of China (No. 50422283) and the Science and Technology Key Plan Project of Henan Province (No. 072102360060).
Funding (SLSD optimal path finding): the National Key Technology R&D Program of China during the 11th Five-Year Plan Period (No. 2008BAJ11B01).
Funding (educational robot dynamic modeling): the Hexa-Type Elites Peak Program of Jiangsu Province (No. 2008144), the Qing Lan Project of Jiangsu Province, and the Fund for Excellent Young Teachers of Southeast University.
Funding (PPPR expressway traffic flow forecasting): the National Natural Science Foundation of China (Nos. 71101014 and 50679008), the Specialized Research Fund for the Doctoral Program of Higher Education (No. 200801411105), and the Science and Technology Project of the Department of Communications of Henan Province (No. 2010D107-4).
Funding (rice panicle L-system modeling): supported by the National Natural Science Foundation of China (60802040) and the Youth Fund of Southwest University of Science and Technology (10zx3106).