Parameter extraction of photovoltaic(PV)models is crucial for the planning,optimization,and control of PV systems.Although some methods using meta-heuristic algorithms have been proposed to determine these parameters,...Parameter extraction of photovoltaic(PV)models is crucial for the planning,optimization,and control of PV systems.Although some methods using meta-heuristic algorithms have been proposed to determine these parameters,the robustness of solutions obtained by these methods faces great challenges when the complexity of the PV model increases.The unstable results will affect the reliable operation and maintenance strategies of PV systems.In response to this challenge,an improved rime optimization algorithm with enhanced exploration and exploitation,termed TERIME,is proposed for robust and accurate parameter identification for various PV models.Specifically,the differential evolution mutation operator is integrated in the exploration phase to enhance the population diversity.Meanwhile,a new exploitation strategy incorporating randomization and neighborhood strategies simultaneously is developed to maintain the balance of exploitation width and depth.The TERIME algorithm is applied to estimate the optimal parameters of the single diode model,double diode model,and triple diode model combined with the Lambert-W function for three PV cell and module types including RTC France,Photo Watt-PWP 201 and S75.According to the statistical analysis in 100 runs,the proposed algorithm achieves more accurate and robust parameter estimations than other techniques to various PV models in varying environmental conditions.All of our source codes are publicly available at https://github.com/dirge1/TERIME.展开更多
To address the challenge of identifying the primary causes of energy consumption fluctuations and accurately assessing the influence of various factors in the converter unit of an iron and steel plant,the focus is pla...To address the challenge of identifying the primary causes of energy consumption fluctuations and accurately assessing the influence of various factors in the converter unit of an iron and steel plant,the focus is placed on the critical components of material and heat balance.Through a thorough analysis of the interactions between various components and energy consumptions,six pivotal factors have been identified—raw material composition,steel type,steel temperature,slag temperature,recycling practices,and operational parameters.Utilizing a framework based on an equivalent energy consumption model,an integrated intelligent diagnostic model has been developed that encapsulates these factors,providing a comprehensive assessment tool for converter energy consumption.Employing the K-means clustering algorithm,historical operational data from the converter have been meticulously analyzed to determine baseline values for essential variables such as energy consumption and recovery rates.Building upon this data-driven foundation,an innovative online system for the intelligent diagnosis of converter energy consumption has been crafted and implemented,enhancing the precision and efficiency of energy management.Upon implementation with energy consumption data at a steel plant in 2023,the diagnostic analysis performed by the system exposed significant variations in energy usage across different converter units.The analysis revealed that the most significant factor influencing the variation in energy consumption for both furnaces was the steel grade,with contributions of−0.550 and 0.379.展开更多
With the rapid adoption of artificial intelligence(AI)in domains such as power,transportation,and finance,the number of machine learning and deep learning models has grown exponentially.However,challenges such as dela...With the rapid adoption of artificial intelligence(AI)in domains such as power,transportation,and finance,the number of machine learning and deep learning models has grown exponentially.However,challenges such as delayed retraining,inconsistent version management,insufficient drift monitoring,and limited data security still hinder efficient and reliable model operations.To address these issues,this paper proposes the Intelligent Model Lifecycle Management Algorithm(IMLMA).The algorithm employs a dual-trigger mechanism based on both data volume thresholds and time intervals to automate retraining,and applies Bayesian optimization for adaptive hyperparameter tuning to improve performance.A multi-metric replacement strategy,incorporating MSE,MAE,and R2,ensures that new models replace existing ones only when performance improvements are guaranteed.A versioning and traceability database supports comparison and visualization,while real-time monitoring with stability analysis enables early warnings of latency and drift.Finally,hash-based integrity checks secure both model files and datasets.Experimental validation in a power metering operation scenario demonstrates that IMLMA reduces model update delays,enhances predictive accuracy and stability,and maintains low latency under high concurrency.This work provides a practical,reusable,and scalable solution for intelligent model lifecycle management,with broad applicability to complex systems such as smart grids.展开更多
The objective of this study is to develop an advanced approach to variogram modelling by integrating genetic algorithms(GA)with machine learning-based linear regression,aiming to improve the accuracy and efficiency of...The objective of this study is to develop an advanced approach to variogram modelling by integrating genetic algorithms(GA)with machine learning-based linear regression,aiming to improve the accuracy and efficiency of geostatistical analysis,particularly in mineral exploration.The study combines GA and machine learning to optimise variogram parameters,including range,sill,and nugget,by minimising the root mean square error(RMSE)and maximising the coefficient of determination(R^(2)).The experimental variograms were computed and modelled using theoretical models,followed by optimisation via evolutionary algorithms.The method was applied to gravity data from the Ngoura-Batouri-Kette mining district in Eastern Cameroon,covering 141 data points.Sequential Gaussian Simulations(SGS)were employed for predictive mapping to validate simulated results against true values.Key findings show variograms with ranges between 24.71 km and 49.77 km,opti-mised RMSE and R^(2) values of 11.21 mGal^(2) and 0.969,respectively,after 42 generations of GA optimisation.Predictive mapping using SGS demonstrated that simulated values closely matched true values,with the simu-lated mean at 21.75 mGal compared to the true mean of 25.16 mGal,and variances of 465.70 mGal^(2) and 555.28 mGal^(2),respectively.The results confirmed spatial variability and anisotropies in the N170-N210 directions,consistent with prior studies.This work presents a novel integration of GA and machine learning for variogram modelling,offering an automated,efficient approach to parameter estimation.The methodology significantly enhances predictive geostatistical models,contributing to the advancement of mineral exploration and improving the precision and speed of decision-making in the petroleum and mining industries.展开更多
Accurate and reliable photovoltaic(PV)modeling is crucial for the performance evaluation,control,and optimization of PV systems.However,existing methods for PV parameter identification often suffer from limitations in...Accurate and reliable photovoltaic(PV)modeling is crucial for the performance evaluation,control,and optimization of PV systems.However,existing methods for PV parameter identification often suffer from limitations in accuracy and efficiency.To address these challenges,we propose an adaptive multi-learning cooperation search algorithm(AMLCSA)for efficient identification of unknown parameters in PV models.AMLCSA is a novel algorithm inspired by teamwork behaviors in modern enterprises.It enhances the original cooperation search algorithm in two key aspects:(i)an adaptive multi-learning strategy that dynamically adjusts search ranges using adaptive weights,allowing better individuals to focus on local exploitation while guiding poorer individuals toward global exploration;and(ii)a chaotic grouping reflection strategy that introduces chaotic sequences to enhance population diversity and improve search performance.The effectiveness of AMLCSA is demonstrated on single-diode,double-diode,and three PV-module models.Simulation results show that AMLCSA offers significant advantages in convergence,accuracy,and stability compared to existing state-of-the-art algorithms.展开更多
Domain Generation Algorithms(DGAs)continue to pose a significant threat inmodernmalware infrastructures by enabling resilient and evasive communication with Command and Control(C&C)servers.Traditional detection me...Domain Generation Algorithms(DGAs)continue to pose a significant threat inmodernmalware infrastructures by enabling resilient and evasive communication with Command and Control(C&C)servers.Traditional detection methods-rooted in statistical heuristics,feature engineering,and shallow machine learning-struggle to adapt to the increasing sophistication,linguistic mimicry,and adversarial variability of DGA variants.The emergence of Large Language Models(LLMs)marks a transformative shift in this landscape.Leveraging deep contextual understanding,semantic generalization,and few-shot learning capabilities,LLMs such as BERT,GPT,and T5 have shown promising results in detecting both character-based and dictionary-based DGAs,including previously unseen(zeroday)variants.This paper provides a comprehensive and critical review of LLM-driven DGA detection,introducing a structured taxonomy of LLM architectures,evaluating the linguistic and behavioral properties of benchmark datasets,and comparing recent detection frameworks across accuracy,latency,robustness,and multilingual performance.We also highlight key limitations,including challenges in adversarial resilience,model interpretability,deployment scalability,and privacy risks.To address these gaps,we present a forward-looking research roadmap encompassing adversarial training,model compression,cross-lingual benchmarking,and real-time integration with SIEM/SOAR platforms.This survey aims to serve as a foundational resource for advancing the development of scalable,explainable,and operationally viable LLM-based DGA detection systems.展开更多
Vulnerability assessment is a systematic process to identify security gaps in the design and evaluation of physical protection systems.Adversarial path planning is a widely used method for identifying potential vulner...Vulnerability assessment is a systematic process to identify security gaps in the design and evaluation of physical protection systems.Adversarial path planning is a widely used method for identifying potential vulnerabilities and threats to the security and resilience of critical infrastructures.However,achieving efficient path optimization in complex large-scale three-dimensional(3D)scenes remains a significant challenge for vulnerability assessment.This paper introduces a novel A^(*)-algorithmic framework for 3D security modeling and vulnerability assessment.Within this framework,the 3D facility models were first developed in 3ds Max and then incorporated into Unity for A^(*)heuristic pathfinding.The A^(*)-heuristic pathfinding algorithm was implemented with a geometric probability model to refine the detection and distance fields and achieve a rational approximation of the cost to reach the goal.An admissible heuristic is ensured by incorporating the minimum probability of detection(P_(D)^(min))and diagonal distance to estimate the heuristic function.The 3D A^(*)heuristic search was demonstrated using a hypothetical laboratory facility,where a comparison was also carried out between the A^(*)and Dijkstra algorithms for optimal path identification.Comparative results indicate that the proposed A^(*)-heuristic algorithm effectively identifies the most vulnerable adversarial pathfinding with high efficiency.Finally,the paper discusses hidden phenomena and open issues in efficient 3D pathfinding for security applications.展开更多
The distillation process is an important chemical process,and the application of data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling,thus improving the efficie...The distillation process is an important chemical process,and the application of data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling,thus improving the efficiency of process optimization or monitoring studies.However,the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals,which brings challenges to accurate data-driven modelling of distillation processes.This paper proposes a systematic data-driven modelling framework to solve these problems.Firstly,data segment variance was introduced into the K-means algorithm to form K-means data interval(KMDI)clustering in order to cluster the data into perturbed and steady state intervals for steady-state data extraction.Secondly,maximal information coefficient(MIC)was employed to calculate the nonlinear correlation between variables for removing redundant features.Finally,extreme gradient boosting(XGBoost)was integrated as the basic learner into adaptive boosting(AdaBoost)with the error threshold(ET)set to improve weights update strategy to construct the new integrated learning algorithm,XGBoost-AdaBoost-ET.The superiority of the proposed framework is verified by applying this data-driven modelling framework to a real industrial process of propylene distillation.展开更多
BACKGROUND Difficulty of colonoscopy insertion(DCI)significantly affects colonoscopy effectiveness and serves as a key quality indicator.Predicting and evaluating DCI risk preoperatively is crucial for optimizing intr...BACKGROUND Difficulty of colonoscopy insertion(DCI)significantly affects colonoscopy effectiveness and serves as a key quality indicator.Predicting and evaluating DCI risk preoperatively is crucial for optimizing intraoperative strategies.AIM To evaluate the predictive performance of machine learning(ML)algorithms for DCI by comparing three modeling approaches,identify factors influencing DCI,and develop a preoperative prediction model using ML algorithms to enhance colonoscopy quality and efficiency.METHODS This cross-sectional study enrolled 712 patients who underwent colonoscopy at a tertiary hospital between June 2020 and May 2021.Demographic data,past medical history,medication use,and psychological status were collected.The endoscopist assessed DCI using the visual analogue scale.After univariate screening,predictive models were developed using multivariable logistic regression,least absolute shrinkage and selection operator(LASSO)regression,and random forest(RF)algorithms.Model performance was evaluated based on discrimination,calibration,and decision curve analysis(DCA),and results were visualized using nomograms.RESULTS A total of 712 patients(53.8%male;mean age 54.5 years±12.9 years)were included.Logistic regression analysis identified constipation[odds ratio(OR)=2.254,95%confidence interval(CI):1.289-3.931],abdominal circumference(AC)(77.5–91.9 cm,OR=1.895,95%CI:1.065-3.350;AC≥92 cm,OR=1.271,95%CI:0.730-2.188),and anxiety(OR=1.071,95%CI:1.044-1.100)as predictive factors for DCI,validated by LASSO and RF methods.Model performance revealed training/validation sensitivities of 0.826/0.925,0.924/0.868,and 1.000/0.981;specificities of 0.602/0.511,0.510/0.562,and 0.977/0.526;and corresponding area under the receiver operating characteristic curves(AUCs)of 0.780(0.737-0.823)/0.726(0.654-0.799),0.754(0.710-0.798)/0.723(0.656-0.791),and 1.000(1.000-1.000)/0.754(0.688-0.820),respectively.DCA indicated optimal net benefit within probability thresholds of 0-0.9 and 0.05-0.37.The RF model demonstrated superior diagnostic accuracy,reflected by perfect training sensitivity(1.000)and highest validation AUC(0.754),outperforming other methods in clinical applicability.CONCLUSION The RF-based model exhibited superior predictive accuracy for DCI compared to multivariable logistic and LASSO regression models.This approach supports individualized preoperative optimization,enhancing colonoscopy quality through targeted risk stratification.展开更多
Challenges in stratigraphic modeling arise from underground uncertainty.While borehole exploration is reliable,it remains sparse due to economic and site constraints.Electrical resistivity tomography(ERT)as a cost-eff...Challenges in stratigraphic modeling arise from underground uncertainty.While borehole exploration is reliable,it remains sparse due to economic and site constraints.Electrical resistivity tomography(ERT)as a cost-effective geophysical technique can acquire high-density data;however,uncertainty and nonuniqueness inherent in ERT impede its usage for stratigraphy identification.This paper integrates ERT and onsite observations for the first time to propose a novel method for characterizing stratigraphic profiles.The method consists of two steps:(1)ERT for prior knowledge:ERT data are processed by soft clustering using the Gaussian mixture model,followed by probability smoothing to quantify its depthdependent uncertainty;and(2)Observations for calibration:a spatial sequential Bayesian updating(SSBU)algorithm is developed to update the prior knowledge based on likelihoods derived from onsite observations,namely topsoil and boreholes.The effectiveness of the proposed method is validated through its application to a real slope site in Foshan,China.Comparative analysis with advanced borehole-driven methods highlights the superiority of incorporating ERT data in stratigraphic modeling,in terms of prediction accuracy at borehole locations and sensitivity to borehole data.Informed by ERT,reduced sensitivity to boreholes provides a fundamental solution to the longstanding challenge of sparse measurements.The paper further discusses the impact of ERT uncertainty on the proposed model using time-lapse measurements,the impact of model resolution,and applicability in engineering projects.This study,as a breakthrough in stratigraphic modeling,bridges gaps in combining geophysical and geotechnical data to address measurement sparsity and paves the way for more economical geotechnical exploration.展开更多
The work proposes a distributed Kalman filtering(KF)algorithm to track a time-varying unknown signal process for a stochastic regression model over network systems in a cooperative way.We provide the stability analysi...The work proposes a distributed Kalman filtering(KF)algorithm to track a time-varying unknown signal process for a stochastic regression model over network systems in a cooperative way.We provide the stability analysis of the proposed distributed KF algorithm without independent and stationary signal assumptions,which implies that the theoretical results are able to be applied to stochastic feedback systems.Note that the main difficulty of stability analysis lies in analyzing the properties of the product of non-independent and non-stationary random matrices involved in the error equation.We employ analysis techniques such as stochastic Lyapunov function,stability theory of stochastic systems,and algebraic graph theory to deal with the above issue.The stochastic spatio-temporal cooperative information condition shows the cooperative property of multiple sensors that even though any local sensor cannot track the time-varying unknown signal,the distributed KF algorithm can be utilized to finish the filtering task in a cooperative way.At last,we illustrate the property of the proposed distributed KF algorithm by a simulation example.展开更多
This study delineates the development of the optimization framework for the preliminary design phase of Floating Offshore Wind Turbines(FOWTs),and the central challenge addressed is the optimization of the FOWT platfo...This study delineates the development of the optimization framework for the preliminary design phase of Floating Offshore Wind Turbines(FOWTs),and the central challenge addressed is the optimization of the FOWT platform dimensional parameters in relation to motion responses.Although the three-dimensional potential flow(TDPF)panel method is recognized for its precision in calculating FOWT motion responses,its computational intensity necessitates an alternative approach for efficiency.Herein,a novel application of varying fidelity frequency-domain computational strategies is introduced,which synthesizes the strip theory with the TDPF panel method to strike a balance between computational speed and accuracy.The Co-Kriging algorithm is employed to forge a surrogate model that amalgamates these computational strategies.Optimization objectives are centered on the platform’s motion response in heave and pitch directions under general sea conditions.The steel usage,the range of design variables,and geometric considerations are optimization constraints.The angle of the pontoons,the number of columns,the radius of the central column and the parameters of the mooring lines are optimization constants.This informed the structuring of a multi-objective optimization model utilizing the Non-dominated Sorting Genetic Algorithm Ⅱ(NSGA-Ⅱ)algorithm.For the case of the IEA UMaine VolturnUS-S Reference Platform,Pareto fronts are discerned based on the above framework and delineate the relationship between competing motion response objectives.The efficacy of final designs is substantiated through the time-domain calculation model,which ensures that the motion responses in extreme sea conditions are superior to those of the initial design.展开更多
In the generalized continuum mechanics(GCM)theory framework,asymmetric wave equations encompass the characteristic scale parameters of the medium,accounting for microstructure interactions.This study integrates two th...In the generalized continuum mechanics(GCM)theory framework,asymmetric wave equations encompass the characteristic scale parameters of the medium,accounting for microstructure interactions.This study integrates two theoretical branches of the GCM,the modified couple stress theory(M-CST)and the one-parameter second-strain-gradient theory,to form a novel asymmetric wave equation in a unified framework.Numerical modeling of the asymmetric wave equation in a unified framework accurately describes subsurface structures with vital implications for subsequent seismic wave inversion and imaging endeavors.However,employing finite-difference(FD)methods for numerical modeling may introduce numerical dispersion,adversely affecting the accuracy of numerical modeling.The design of an optimal FD operator is crucial for enhancing the accuracy of numerical modeling and emphasizing the scale effects.Therefore,this study devises a hybrid scheme called the dung beetle optimization(DBO)algorithm with a simulated annealing(SA)algorithm,denoted as the SA-based hybrid DBO(SDBO)algorithm.An FD operator optimization method under the SDBO algorithm was developed and applied to the numerical modeling of asymmetric wave equations in a unified framework.Integrating the DBO and SA algorithms mitigates the risk of convergence to a local extreme.The numerical dispersion outcomes underscore that the proposed SDBO algorithm yields FD operators with precision errors constrained to 0.5‱while encompassing a broader spectrum coverage.This result confirms the efficacy of the SDBO algorithm.Ultimately,the numerical modeling results demonstrate that the new FD method based on the SDBO algorithm effectively suppresses numerical dispersion and enhances the accuracy of elastic wave numerical modeling,thereby accentuating scale effects.This result is significant for extracting wavefield perturbations induced by complex microstructures in the medium and the analysis of scale effects.展开更多
This research paper presents a comprehensive investigation into the effectiveness of the DeepSurNet-NSGA II(Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II)for solving complex multiobjective o...This research paper presents a comprehensive investigation into the effectiveness of the DeepSurNet-NSGA II(Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II)for solving complex multiobjective optimization problems,with a particular focus on robotic leg-linkage design.The study introduces an innovative approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II,aiming to enhance the efficiency and precision of the optimization process.Through a series of empirical experiments and algorithmic analyses,the paper demonstrates a high degree of correlation between solutions generated by the DeepSurNet-NSGA II and those obtained from direct experimental methods,underscoring the algorithm’s capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands.The methodology encompasses a detailed exploration of the algorithm’s configuration,the experimental setup,and the criteria for performance evaluation,ensuring the reproducibility of results and facilitating future advancements in the field.The findings of this study not only confirm the practical applicability and theoretical soundness of the DeepSurNet-NSGA II in navigating the intricacies of multi-objective optimization but also highlight its potential as a transformative tool in engineering and design optimization.By bridging the gap between complex optimization challenges and achievable solutions,this research contributes valuable insights into the optimization domain,offering a promising direction for future inquiries and technological innovations.展开更多
Complex network models are frequently employed for simulating and studyingdiverse real-world complex systems.Among these models,scale-free networks typically exhibit greater fragility to malicious attacks.Consequently...Complex network models are frequently employed for simulating and studyingdiverse real-world complex systems.Among these models,scale-free networks typically exhibit greater fragility to malicious attacks.Consequently,enhancing the robustness of scale-free networks has become a pressing issue.To address this problem,this paper proposes a Multi-Granularity Integration Algorithm(MGIA),which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged,ensuring network connectivity and avoiding the generation of multiple edges.The algorithm generates a multi-granularity structure from the initial network to be optimized,then uses different optimization strategies to optimize the networks at various granular layers in this structure,and finally realizes the information exchange between different granular layers,thereby further enhancing the optimization effect.We propose new network refresh,crossover,and mutation operators to ensure that the optimized network satisfies the given constraints.Meanwhile,we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm.In the experiments,the MGIA enhances the robustness of the scale-free network by 67.6%.This improvement is approximately 17.2%higher than the optimization effects achieved by eight currently existing complex network robustness optimization algorithms.展开更多
Analyzing rock mass seepage using the discrete fracture network(DFN)flow model poses challenges when dealing with complex fracture networks.This paper presents a novel DFN flow model that incorporates the actual conne...Analyzing rock mass seepage using the discrete fracture network(DFN)flow model poses challenges when dealing with complex fracture networks.This paper presents a novel DFN flow model that incorporates the actual connections of large-scale fractures.Notably,this model efficiently manages over 20,000 fractures without necessitating adjustments to the DFN geometry.All geometric analyses,such as identifying connected fractures,dividing the two-dimensional domain into closed loops,triangulating arbitrary loops,and refining triangular elements,are fully automated.The analysis processes are comprehensively introduced,and core algorithms,along with their pseudo-codes,are outlined and explained to assist readers in their programming endeavors.The accuracy of geometric analyses is validated through topological graphs representing the connection relationships between fractures.In practical application,the proposed model is employed to assess the water-sealing effectiveness of an underground storage cavern project.The analysis results indicate that the existing design scheme can effectively prevent the stored oil from leaking in the presence of both dense and sparse fractures.Furthermore,following extensive modification and optimization,the scale and precision of model computation suggest that the proposed model and developed codes can meet the requirements of engineering applications.展开更多
A hybrid identification model based on multilayer artificial neural networks(ANNs) and particle swarm optimization(PSO) algorithm is developed to improve the simultaneous identification efficiency of thermal conductiv...A hybrid identification model based on multilayer artificial neural networks(ANNs) and particle swarm optimization(PSO) algorithm is developed to improve the simultaneous identification efficiency of thermal conductivity and effective absorption coefficient of semitransparent materials.For the direct model,the spherical harmonic method and the finite volume method are used to solve the coupled conduction-radiation heat transfer problem in an absorbing,emitting,and non-scattering 2D axisymmetric gray medium in the background of laser flash method.For the identification part,firstly,the temperature field and the incident radiation field in different positions are chosen as observables.Then,a traditional identification model based on PSO algorithm is established.Finally,multilayer ANNs are built to fit and replace the direct model in the traditional identification model to speed up the identification process.The results show that compared with the traditional identification model,the time cost of the hybrid identification model is reduced by about 1 000 times.Besides,the hybrid identification model remains a high level of accuracy even with measurement errors.展开更多
Groundwater inverse modeling is a vital technique for estimating unmeasurable model parameters and enhancing numerical simulation accuracy.This paper comprehensively reviews the current advances and future prospects o...Groundwater inverse modeling is a vital technique for estimating unmeasurable model parameters and enhancing numerical simulation accuracy.This paper comprehensively reviews the current advances and future prospects of metaheuristic algorithm-based groundwater model parameter inversion.Initially,the simulation-optimization parameter estimation framework is introduced,which involves the integration of simulation models with metaheuristic algorithms.The subsequent sections explore the fundamental principles of four widely employed metaheuristic algorithms-genetic algorithm(GA),particle swarm optimization(PSO),simulated annealing(SA),and differential evolution(DE)-highlighting their recent applications in water resources research and related areas.Then,a solute transport model is designed to illustrate how to apply and evaluate these four optimization algorithms in addressing challenges related to model parameter inversion.Finally,three noteworthy directions are presented to address the common challenges among current studies,including balancing the diverse exploration and centralized exploitation within metaheuristic algorithms,local approxi-mate error of the surrogate model,and the curse of dimensionality in spatial variational heterogeneous pa-rameters.In summary,this review paper provides theoretical insights and practical guidance for further advancements in groundwater inverse modeling studies.展开更多
The problem of collision avoidance for non-cooperative targets has received significant attention from researchers in recent years.Non-cooperative targets exhibit uncertain states and unpredictable behaviors,making co...The problem of collision avoidance for non-cooperative targets has received significant attention from researchers in recent years.Non-cooperative targets exhibit uncertain states and unpredictable behaviors,making collision avoidance significantly more challenging than that for space debris.Much existing research focuses on the continuous thrust model,whereas the impulsive maneuver model is more appropriate for long-duration and long-distance avoidance missions.Additionally,it is important to minimize the impact on the original mission while avoiding noncooperative targets.On the other hand,the existing avoidance algorithms are computationally complex and time-consuming especially with the limited computing capability of the on-board computer,posing challenges for practical engineering applications.To conquer these difficulties,this paper makes the following key contributions:(A)a turn-based(sequential decision-making)limited-area impulsive collision avoidance model considering the time delay of precision orbit determination is established for the first time;(B)a novel Selection Probability Learning Adaptive Search-depth Search Tree(SPL-ASST)algorithm is proposed for non-cooperative target avoidance,which improves the decision-making efficiency by introducing an adaptive-search-depth mechanism and a neural network into the traditional Monte Carlo Tree Search(MCTS).Numerical simulations confirm the effectiveness and efficiency of the proposed method.展开更多
Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data or sums to a constant, like 100%. The statistical linear mode...Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data or sums to a constant, like 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a commonly used statistical modeling technique used in various applications to find relationships between variables of interest. When estimating linear regression parameters which are useful for things like future prediction and partial effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations including missing data. The EM algorithm repeatedly finds the best estimates of parameters in statistical models that depend on variables or data that have not been observed. This is called maximum likelihood or maximum a posteriori (MAP). Using the present estimate as input, the expectation (E) step constructs a log-likelihood function. Finding the parameters that maximize the anticipated log-likelihood, as determined in the E step, is the job of the maximization (M) phase. This study looked at how well the EM algorithm worked on a made-up compositional dataset with missing observations. It used both the robust least square version and ordinary least square regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation (), in terms of Aitchison distances and covariance.展开更多
基金supported by the National Natural Science Foundation of China[grant number 51775020]the Science Challenge Project[grant number.TZ2018007]+2 种基金the National Natural Science Foundation of China[grant number 62073009]the Postdoctoral Fellowship Program of CPSF[grant number GZC20233365]the Fundamental Research Funds for Central Universities[grant number JKF-20240559].
文摘Parameter extraction of photovoltaic(PV)models is crucial for the planning,optimization,and control of PV systems.Although some methods using meta-heuristic algorithms have been proposed to determine these parameters,the robustness of solutions obtained by these methods faces great challenges when the complexity of the PV model increases.The unstable results will affect the reliable operation and maintenance strategies of PV systems.In response to this challenge,an improved rime optimization algorithm with enhanced exploration and exploitation,termed TERIME,is proposed for robust and accurate parameter identification for various PV models.Specifically,the differential evolution mutation operator is integrated in the exploration phase to enhance the population diversity.Meanwhile,a new exploitation strategy incorporating randomization and neighborhood strategies simultaneously is developed to maintain the balance of exploitation width and depth.The TERIME algorithm is applied to estimate the optimal parameters of the single diode model,double diode model,and triple diode model combined with the Lambert-W function for three PV cell and module types including RTC France,Photo Watt-PWP 201 and S75.According to the statistical analysis in 100 runs,the proposed algorithm achieves more accurate and robust parameter estimations than other techniques to various PV models in varying environmental conditions.All of our source codes are publicly available at https://github.com/dirge1/TERIME.
基金financial support from the National Key R&D Program of China(Grant No.2020YFB1711100).
文摘To address the challenge of identifying the primary causes of energy consumption fluctuations and accurately assessing the influence of various factors in the converter unit of an iron and steel plant,the focus is placed on the critical components of material and heat balance.Through a thorough analysis of the interactions between various components and energy consumptions,six pivotal factors have been identified—raw material composition,steel type,steel temperature,slag temperature,recycling practices,and operational parameters.Utilizing a framework based on an equivalent energy consumption model,an integrated intelligent diagnostic model has been developed that encapsulates these factors,providing a comprehensive assessment tool for converter energy consumption.Employing the K-means clustering algorithm,historical operational data from the converter have been meticulously analyzed to determine baseline values for essential variables such as energy consumption and recovery rates.Building upon this data-driven foundation,an innovative online system for the intelligent diagnosis of converter energy consumption has been crafted and implemented,enhancing the precision and efficiency of energy management.Upon implementation with energy consumption data at a steel plant in 2023,the diagnostic analysis performed by the system exposed significant variations in energy usage across different converter units.The analysis revealed that the most significant factor influencing the variation in energy consumption for both furnaces was the steel grade,with contributions of−0.550 and 0.379.
基金funded by Anhui NARI ZT Electric Co.,Ltd.,entitled“Research on the Shared Operation and Maintenance Service Model for Metering Equipment and Platform Development for the Modern Industrial Chain”(Grant No.524636250005).
文摘With the rapid adoption of artificial intelligence(AI)in domains such as power,transportation,and finance,the number of machine learning and deep learning models has grown exponentially.However,challenges such as delayed retraining,inconsistent version management,insufficient drift monitoring,and limited data security still hinder efficient and reliable model operations.To address these issues,this paper proposes the Intelligent Model Lifecycle Management Algorithm(IMLMA).The algorithm employs a dual-trigger mechanism based on both data volume thresholds and time intervals to automate retraining,and applies Bayesian optimization for adaptive hyperparameter tuning to improve performance.A multi-metric replacement strategy,incorporating MSE,MAE,and R2,ensures that new models replace existing ones only when performance improvements are guaranteed.A versioning and traceability database supports comparison and visualization,while real-time monitoring with stability analysis enables early warnings of latency and drift.Finally,hash-based integrity checks secure both model files and datasets.Experimental validation in a power metering operation scenario demonstrates that IMLMA reduces model update delays,enhances predictive accuracy and stability,and maintains low latency under high concurrency.This work provides a practical,reusable,and scalable solution for intelligent model lifecycle management,with broad applicability to complex systems such as smart grids.
文摘The objective of this study is to develop an advanced approach to variogram modelling by integrating genetic algorithms(GA)with machine learning-based linear regression,aiming to improve the accuracy and efficiency of geostatistical analysis,particularly in mineral exploration.The study combines GA and machine learning to optimise variogram parameters,including range,sill,and nugget,by minimising the root mean square error(RMSE)and maximising the coefficient of determination(R^(2)).The experimental variograms were computed and modelled using theoretical models,followed by optimisation via evolutionary algorithms.The method was applied to gravity data from the Ngoura-Batouri-Kette mining district in Eastern Cameroon,covering 141 data points.Sequential Gaussian Simulations(SGS)were employed for predictive mapping to validate simulated results against true values.Key findings show variograms with ranges between 24.71 km and 49.77 km,opti-mised RMSE and R^(2) values of 11.21 mGal^(2) and 0.969,respectively,after 42 generations of GA optimisation.Predictive mapping using SGS demonstrated that simulated values closely matched true values,with the simu-lated mean at 21.75 mGal compared to the true mean of 25.16 mGal,and variances of 465.70 mGal^(2) and 555.28 mGal^(2),respectively.The results confirmed spatial variability and anisotropies in the N170-N210 directions,consistent with prior studies.This work presents a novel integration of GA and machine learning for variogram modelling,offering an automated,efficient approach to parameter estimation.The methodology significantly enhances predictive geostatistical models,contributing to the advancement of mineral exploration and improving the precision and speed of decision-making in the petroleum and mining industries.
基金supported by the National Natural Science Foundation of China(Grant Nos.62303197,62273214)the Natural Science Foundation of Shandong Province(ZR2024MFO18).
文摘Accurate and reliable photovoltaic(PV)modeling is crucial for the performance evaluation,control,and optimization of PV systems.However,existing methods for PV parameter identification often suffer from limitations in accuracy and efficiency.To address these challenges,we propose an adaptive multi-learning cooperation search algorithm(AMLCSA)for efficient identification of unknown parameters in PV models.AMLCSA is a novel algorithm inspired by teamwork behaviors in modern enterprises.It enhances the original cooperation search algorithm in two key aspects:(i)an adaptive multi-learning strategy that dynamically adjusts search ranges using adaptive weights,allowing better individuals to focus on local exploitation while guiding poorer individuals toward global exploration;and(ii)a chaotic grouping reflection strategy that introduces chaotic sequences to enhance population diversity and improve search performance.The effectiveness of AMLCSA is demonstrated on single-diode,double-diode,and three PV-module models.Simulation results show that AMLCSA offers significant advantages in convergence,accuracy,and stability compared to existing state-of-the-art algorithms.
基金the Deanship of Scientific Research at King Khalid University for funding this work through large group under grant number(GRP.2/663/46).
文摘Domain Generation Algorithms(DGAs)continue to pose a significant threat inmodernmalware infrastructures by enabling resilient and evasive communication with Command and Control(C&C)servers.Traditional detection methods-rooted in statistical heuristics,feature engineering,and shallow machine learning-struggle to adapt to the increasing sophistication,linguistic mimicry,and adversarial variability of DGA variants.The emergence of Large Language Models(LLMs)marks a transformative shift in this landscape.Leveraging deep contextual understanding,semantic generalization,and few-shot learning capabilities,LLMs such as BERT,GPT,and T5 have shown promising results in detecting both character-based and dictionary-based DGAs,including previously unseen(zeroday)variants.This paper provides a comprehensive and critical review of LLM-driven DGA detection,introducing a structured taxonomy of LLM architectures,evaluating the linguistic and behavioral properties of benchmark datasets,and comparing recent detection frameworks across accuracy,latency,robustness,and multilingual performance.We also highlight key limitations,including challenges in adversarial resilience,model interpretability,deployment scalability,and privacy risks.To address these gaps,we present a forward-looking research roadmap encompassing adversarial training,model compression,cross-lingual benchmarking,and real-time integration with SIEM/SOAR platforms.This survey aims to serve as a foundational resource for advancing the development of scalable,explainable,and operationally viable LLM-based DGA detection systems.
基金supported by the fundings from 2024 Young Talents Program for Science and Technology Thinking Tanks(No.XMSB20240711041)2024 Student Research Program on Dynamic Simulation and Force-on-Force Exercise of Nuclear Security in 3D Interactive Environment Using Reinforcement Learning,Natural Science Foundation of Top Talent of SZTU(No.GDRC202407)+2 种基金Shenzhen Science and Technology Program(No.KCXFZ20240903092603005)Shenzhen Science and Technology Program(No.JCYJ20241202124703004)Shenzhen Science and Technology Program(No.KJZD20230923114117032)。
文摘Vulnerability assessment is a systematic process to identify security gaps in the design and evaluation of physical protection systems.Adversarial path planning is a widely used method for identifying potential vulnerabilities and threats to the security and resilience of critical infrastructures.However,achieving efficient path optimization in complex large-scale three-dimensional(3D)scenes remains a significant challenge for vulnerability assessment.This paper introduces a novel A^(*)-algorithmic framework for 3D security modeling and vulnerability assessment.Within this framework,the 3D facility models were first developed in 3ds Max and then incorporated into Unity for A^(*)heuristic pathfinding.The A^(*)-heuristic pathfinding algorithm was implemented with a geometric probability model to refine the detection and distance fields and achieve a rational approximation of the cost to reach the goal.An admissible heuristic is ensured by incorporating the minimum probability of detection(P_(D)^(min))and diagonal distance to estimate the heuristic function.The 3D A^(*)heuristic search was demonstrated using a hypothetical laboratory facility,where a comparison was also carried out between the A^(*)and Dijkstra algorithms for optimal path identification.Comparative results indicate that the proposed A^(*)-heuristic algorithm effectively identifies the most vulnerable adversarial pathfinding with high efficiency.Finally,the paper discusses hidden phenomena and open issues in efficient 3D pathfinding for security applications.
基金supported by the National Key Research and Development Program of China(2023YFB3307801)the National Natural Science Foundation of China(62394343,62373155,62073142)+3 种基金Major Science and Technology Project of Xinjiang(No.2022A01006-4)the Programme of Introducing Talents of Discipline to Universities(the 111 Project)under Grant B17017the Fundamental Research Funds for the Central Universities,Science Foundation of China University of Petroleum,Beijing(No.2462024YJRC011)the Open Research Project of the State Key Laboratory of Industrial Control Technology,China(Grant No.ICT2024B70).
文摘The distillation process is an important chemical process,and the application of data-driven modelling approach has the potential to reduce model complexity compared to mechanistic modelling,thus improving the efficiency of process optimization or monitoring studies.However,the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals,which brings challenges to accurate data-driven modelling of distillation processes.This paper proposes a systematic data-driven modelling framework to solve these problems.Firstly,data segment variance was introduced into the K-means algorithm to form K-means data interval(KMDI)clustering in order to cluster the data into perturbed and steady state intervals for steady-state data extraction.Secondly,maximal information coefficient(MIC)was employed to calculate the nonlinear correlation between variables for removing redundant features.Finally,extreme gradient boosting(XGBoost)was integrated as the basic learner into adaptive boosting(AdaBoost)with the error threshold(ET)set to improve weights update strategy to construct the new integrated learning algorithm,XGBoost-AdaBoost-ET.The superiority of the proposed framework is verified by applying this data-driven modelling framework to a real industrial process of propylene distillation.
基金the Chinese Clinical Trial Registry(No.ChiCTR2000040109)approved by the Hospital Ethics Committee(No.20210130017).
文摘BACKGROUND Difficulty of colonoscopy insertion(DCI)significantly affects colonoscopy effectiveness and serves as a key quality indicator.Predicting and evaluating DCI risk preoperatively is crucial for optimizing intraoperative strategies.AIM To evaluate the predictive performance of machine learning(ML)algorithms for DCI by comparing three modeling approaches,identify factors influencing DCI,and develop a preoperative prediction model using ML algorithms to enhance colonoscopy quality and efficiency.METHODS This cross-sectional study enrolled 712 patients who underwent colonoscopy at a tertiary hospital between June 2020 and May 2021.Demographic data,past medical history,medication use,and psychological status were collected.The endoscopist assessed DCI using the visual analogue scale.After univariate screening,predictive models were developed using multivariable logistic regression,least absolute shrinkage and selection operator(LASSO)regression,and random forest(RF)algorithms.Model performance was evaluated based on discrimination,calibration,and decision curve analysis(DCA),and results were visualized using nomograms.RESULTS A total of 712 patients(53.8%male;mean age 54.5 years±12.9 years)were included.Logistic regression analysis identified constipation[odds ratio(OR)=2.254,95%confidence interval(CI):1.289-3.931],abdominal circumference(AC)(77.5–91.9 cm,OR=1.895,95%CI:1.065-3.350;AC≥92 cm,OR=1.271,95%CI:0.730-2.188),and anxiety(OR=1.071,95%CI:1.044-1.100)as predictive factors for DCI,validated by LASSO and RF methods.Model performance revealed training/validation sensitivities of 0.826/0.925,0.924/0.868,and 1.000/0.981;specificities of 0.602/0.511,0.510/0.562,and 0.977/0.526;and corresponding area under the receiver operating characteristic curves(AUCs)of 0.780(0.737-0.823)/0.726(0.654-0.799),0.754(0.710-0.798)/0.723(0.656-0.791),and 1.000(1.000-1.000)/0.754(0.688-0.820),respectively.DCA indicated optimal net benefit within probability thresholds of 0-0.9 and 0.05-0.37.The RF model demonstrated superior diagnostic accuracy,reflected by perfect training sensitivity(1.000)and highest validation AUC(0.754),outperforming other methods in clinical applicability.CONCLUSION The RF-based model exhibited superior predictive accuracy for DCI compared to multivariable logistic and LASSO regression models.This approach supports individualized preoperative optimization,enhancing colonoscopy quality through targeted risk stratification.
基金the financial support from the National Key R&D Program of China(Grant No.2021YFC3001003)Science and Technology Development Fund,Macao SAR(File No.0056/2023/RIB2)Guangdong Provincial Department of Science and Technology(Grant No.2022A0505030019).
文摘Challenges in stratigraphic modeling arise from underground uncertainty.While borehole exploration is reliable,it remains sparse due to economic and site constraints.Electrical resistivity tomography(ERT)as a cost-effective geophysical technique can acquire high-density data;however,uncertainty and nonuniqueness inherent in ERT impede its usage for stratigraphy identification.This paper integrates ERT and onsite observations for the first time to propose a novel method for characterizing stratigraphic profiles.The method consists of two steps:(1)ERT for prior knowledge:ERT data are processed by soft clustering using the Gaussian mixture model,followed by probability smoothing to quantify its depthdependent uncertainty;and(2)Observations for calibration:a spatial sequential Bayesian updating(SSBU)algorithm is developed to update the prior knowledge based on likelihoods derived from onsite observations,namely topsoil and boreholes.The effectiveness of the proposed method is validated through its application to a real slope site in Foshan,China.Comparative analysis with advanced borehole-driven methods highlights the superiority of incorporating ERT data in stratigraphic modeling,in terms of prediction accuracy at borehole locations and sensitivity to borehole data.Informed by ERT,reduced sensitivity to boreholes provides a fundamental solution to the longstanding challenge of sparse measurements.The paper further discusses the impact of ERT uncertainty on the proposed model using time-lapse measurements,the impact of model resolution,and applicability in engineering projects.This study,as a breakthrough in stratigraphic modeling,bridges gaps in combining geophysical and geotechnical data to address measurement sparsity and paves the way for more economical geotechnical exploration.
基金supported in part by Sichuan Science and Technology Program under Grant No.2025ZNSFSC151in part by the Strategic Priority Research Program of Chinese Academy of Sciences under Grant No.XDA27030201+1 种基金the Natural Science Foundation of China under Grant No.U21B6001in part by the Natural Science Foundation of Tianjin under Grant No.24JCQNJC01930.
文摘The work proposes a distributed Kalman filtering(KF)algorithm to track a time-varying unknown signal process for a stochastic regression model over network systems in a cooperative way.We provide the stability analysis of the proposed distributed KF algorithm without independent and stationary signal assumptions,which implies that the theoretical results are able to be applied to stochastic feedback systems.Note that the main difficulty of stability analysis lies in analyzing the properties of the product of non-independent and non-stationary random matrices involved in the error equation.We employ analysis techniques such as stochastic Lyapunov function,stability theory of stochastic systems,and algebraic graph theory to deal with the above issue.The stochastic spatio-temporal cooperative information condition shows the cooperative property of multiple sensors that even though any local sensor cannot track the time-varying unknown signal,the distributed KF algorithm can be utilized to finish the filtering task in a cooperative way.At last,we illustrate the property of the proposed distributed KF algorithm by a simulation example.
基金financially supported by the National Natural Science Foundation of China(Grant No.52371261)the Science and Technology Projects of Liaoning Province(Grant No.2023011352-JH1/110).
文摘This study delineates the development of the optimization framework for the preliminary design phase of Floating Offshore Wind Turbines(FOWTs),and the central challenge addressed is the optimization of the FOWT platform dimensional parameters in relation to motion responses.Although the three-dimensional potential flow(TDPF)panel method is recognized for its precision in calculating FOWT motion responses,its computational intensity necessitates an alternative approach for efficiency.Herein,a novel application of varying fidelity frequency-domain computational strategies is introduced,which synthesizes the strip theory with the TDPF panel method to strike a balance between computational speed and accuracy.The Co-Kriging algorithm is employed to forge a surrogate model that amalgamates these computational strategies.Optimization objectives are centered on the platform’s motion response in heave and pitch directions under general sea conditions.The steel usage,the range of design variables,and geometric considerations are optimization constraints.The angle of the pontoons,the number of columns,the radius of the central column and the parameters of the mooring lines are optimization constants.This informed the structuring of a multi-objective optimization model utilizing the Non-dominated Sorting Genetic Algorithm Ⅱ(NSGA-Ⅱ)algorithm.For the case of the IEA UMaine VolturnUS-S Reference Platform,Pareto fronts are discerned based on the above framework and delineate the relationship between competing motion response objectives.The efficacy of final designs is substantiated through the time-domain calculation model,which ensures that the motion responses in extreme sea conditions are superior to those of the initial design.
基金supported by project XJZ2023050044,A2309002 and XJZ2023070052.
文摘In the generalized continuum mechanics(GCM)theory framework,asymmetric wave equations encompass the characteristic scale parameters of the medium,accounting for microstructure interactions.This study integrates two theoretical branches of the GCM,the modified couple stress theory(M-CST)and the one-parameter second-strain-gradient theory,to form a novel asymmetric wave equation in a unified framework.Numerical modeling of the asymmetric wave equation in a unified framework accurately describes subsurface structures with vital implications for subsequent seismic wave inversion and imaging endeavors.However,employing finite-difference(FD)methods for numerical modeling may introduce numerical dispersion,adversely affecting the accuracy of numerical modeling.The design of an optimal FD operator is crucial for enhancing the accuracy of numerical modeling and emphasizing the scale effects.Therefore,this study devises a hybrid scheme called the dung beetle optimization(DBO)algorithm with a simulated annealing(SA)algorithm,denoted as the SA-based hybrid DBO(SDBO)algorithm.An FD operator optimization method under the SDBO algorithm was developed and applied to the numerical modeling of asymmetric wave equations in a unified framework.Integrating the DBO and SA algorithms mitigates the risk of convergence to a local extreme.The numerical dispersion outcomes underscore that the proposed SDBO algorithm yields FD operators with precision errors constrained to 0.5‱while encompassing a broader spectrum coverage.This result confirms the efficacy of the SDBO algorithm.Ultimately,the numerical modeling results demonstrate that the new FD method based on the SDBO algorithm effectively suppresses numerical dispersion and enhances the accuracy of elastic wave numerical modeling,thereby accentuating scale effects.This result is significant for extracting wavefield perturbations induced by complex microstructures in the medium and the analysis of scale effects.
文摘This research paper presents a comprehensive investigation into the effectiveness of the DeepSurNet-NSGA II(Deep Surrogate Model-Assisted Non-dominated Sorting Genetic Algorithm II)for solving complex multiobjective optimization problems,with a particular focus on robotic leg-linkage design.The study introduces an innovative approach that integrates deep learning-based surrogate models with the robust Non-dominated Sorting Genetic Algorithm II,aiming to enhance the efficiency and precision of the optimization process.Through a series of empirical experiments and algorithmic analyses,the paper demonstrates a high degree of correlation between solutions generated by the DeepSurNet-NSGA II and those obtained from direct experimental methods,underscoring the algorithm’s capability to accurately approximate the Pareto-optimal frontier while significantly reducing computational demands.The methodology encompasses a detailed exploration of the algorithm’s configuration,the experimental setup,and the criteria for performance evaluation,ensuring the reproducibility of results and facilitating future advancements in the field.The findings of this study not only confirm the practical applicability and theoretical soundness of the DeepSurNet-NSGA II in navigating the intricacies of multi-objective optimization but also highlight its potential as a transformative tool in engineering and design optimization.By bridging the gap between complex optimization challenges and achievable solutions,this research contributes valuable insights into the optimization domain,offering a promising direction for future inquiries and technological innovations.
基金National Natural Science Foundation of China(11971211,12171388).
文摘Complex network models are frequently employed for simulating and studyingdiverse real-world complex systems.Among these models,scale-free networks typically exhibit greater fragility to malicious attacks.Consequently,enhancing the robustness of scale-free networks has become a pressing issue.To address this problem,this paper proposes a Multi-Granularity Integration Algorithm(MGIA),which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged,ensuring network connectivity and avoiding the generation of multiple edges.The algorithm generates a multi-granularity structure from the initial network to be optimized,then uses different optimization strategies to optimize the networks at various granular layers in this structure,and finally realizes the information exchange between different granular layers,thereby further enhancing the optimization effect.We propose new network refresh,crossover,and mutation operators to ensure that the optimized network satisfies the given constraints.Meanwhile,we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm.In the experiments,the MGIA enhances the robustness of the scale-free network by 67.6%.This improvement is approximately 17.2%higher than the optimization effects achieved by eight currently existing complex network robustness optimization algorithms.
基金sponsored by the General Program of the National Natural Science Foundation of China(Grant Nos.52079129 and 52209148)the Hubei Provincial General Fund,China(Grant No.2023AFB567)。
文摘Analyzing rock mass seepage using the discrete fracture network(DFN)flow model poses challenges when dealing with complex fracture networks.This paper presents a novel DFN flow model that incorporates the actual connections of large-scale fractures.Notably,this model efficiently manages over 20,000 fractures without necessitating adjustments to the DFN geometry.All geometric analyses,such as identifying connected fractures,dividing the two-dimensional domain into closed loops,triangulating arbitrary loops,and refining triangular elements,are fully automated.The analysis processes are comprehensively introduced,and core algorithms,along with their pseudo-codes,are outlined and explained to assist readers in their programming endeavors.The accuracy of geometric analyses is validated through topological graphs representing the connection relationships between fractures.In practical application,the proposed model is employed to assess the water-sealing effectiveness of an underground storage cavern project.The analysis results indicate that the existing design scheme can effectively prevent the stored oil from leaking in the presence of both dense and sparse fractures.Furthermore,following extensive modification and optimization,the scale and precision of model computation suggest that the proposed model and developed codes can meet the requirements of engineering applications.
基金supported by the Fundamental Research Funds for the Central Universities (No.3122020072)the Multi-investment Project of Tianjin Applied Basic Research(No.23JCQNJC00250)。
文摘A hybrid identification model based on multilayer artificial neural networks(ANNs) and particle swarm optimization(PSO) algorithm is developed to improve the simultaneous identification efficiency of thermal conductivity and effective absorption coefficient of semitransparent materials.For the direct model,the spherical harmonic method and the finite volume method are used to solve the coupled conduction-radiation heat transfer problem in an absorbing,emitting,and non-scattering 2D axisymmetric gray medium in the background of laser flash method.For the identification part,firstly,the temperature field and the incident radiation field in different positions are chosen as observables.Then,a traditional identification model based on PSO algorithm is established.Finally,multilayer ANNs are built to fit and replace the direct model in the traditional identification model to speed up the identification process.The results show that compared with the traditional identification model,the time cost of the hybrid identification model is reduced by about 1 000 times.Besides,the hybrid identification model remains a high level of accuracy even with measurement errors.
基金supported by the Fundamental Research Funds for the Central Universities(XJ2023005201)the National Natural Science Foundation of China(NSFC:U2267217,42141011,and 42002254).
文摘Groundwater inverse modeling is a vital technique for estimating unmeasurable model parameters and enhancing numerical simulation accuracy.This paper comprehensively reviews the current advances and future prospects of metaheuristic algorithm-based groundwater model parameter inversion.Initially,the simulation-optimization parameter estimation framework is introduced,which involves the integration of simulation models with metaheuristic algorithms.The subsequent sections explore the fundamental principles of four widely employed metaheuristic algorithms-genetic algorithm(GA),particle swarm optimization(PSO),simulated annealing(SA),and differential evolution(DE)-highlighting their recent applications in water resources research and related areas.Then,a solute transport model is designed to illustrate how to apply and evaluate these four optimization algorithms in addressing challenges related to model parameter inversion.Finally,three noteworthy directions are presented to address the common challenges among current studies,including balancing the diverse exploration and centralized exploitation within metaheuristic algorithms,local approxi-mate error of the surrogate model,and the curse of dimensionality in spatial variational heterogeneous pa-rameters.In summary,this review paper provides theoretical insights and practical guidance for further advancements in groundwater inverse modeling studies.
基金co-supported by the Foundation of Shanghai Astronautics Science and Technology Innovation,China(No.SAST2022-114)the National Natural Science Foundation of China(No.62303378),the National Natural Science Foundation of China(Nos.124B2031,12202281)the Foundation of China National Key Laboratory of Science and Technology on Test Physics&Numerical Mathematics,China(No.08-YY-2023-R11)。
文摘The problem of collision avoidance for non-cooperative targets has received significant attention from researchers in recent years.Non-cooperative targets exhibit uncertain states and unpredictable behaviors,making collision avoidance significantly more challenging than that for space debris.Much existing research focuses on the continuous thrust model,whereas the impulsive maneuver model is more appropriate for long-duration and long-distance avoidance missions.Additionally,it is important to minimize the impact on the original mission while avoiding noncooperative targets.On the other hand,the existing avoidance algorithms are computationally complex and time-consuming especially with the limited computing capability of the on-board computer,posing challenges for practical engineering applications.To conquer these difficulties,this paper makes the following key contributions:(A)a turn-based(sequential decision-making)limited-area impulsive collision avoidance model considering the time delay of precision orbit determination is established for the first time;(B)a novel Selection Probability Learning Adaptive Search-depth Search Tree(SPL-ASST)algorithm is proposed for non-cooperative target avoidance,which improves the decision-making efficiency by introducing an adaptive-search-depth mechanism and a neural network into the traditional Monte Carlo Tree Search(MCTS).Numerical simulations confirm the effectiveness and efficiency of the proposed method.
文摘Compositional data, such as relative information, is a crucial aspect of machine learning and other related fields. It is typically recorded as closed data or sums to a constant, like 100%. The statistical linear model is the most used technique for identifying hidden relationships between underlying random variables of interest. However, data quality is a significant challenge in machine learning, especially when missing data is present. The linear regression model is a commonly used statistical modeling technique used in various applications to find relationships between variables of interest. When estimating linear regression parameters which are useful for things like future prediction and partial effects analysis of independent variables, maximum likelihood estimation (MLE) is the method of choice. However, many datasets contain missing observations, which can lead to costly and time-consuming data recovery. To address this issue, the expectation-maximization (EM) algorithm has been suggested as a solution for situations including missing data. The EM algorithm repeatedly finds the best estimates of parameters in statistical models that depend on variables or data that have not been observed. This is called maximum likelihood or maximum a posteriori (MAP). Using the present estimate as input, the expectation (E) step constructs a log-likelihood function. Finding the parameters that maximize the anticipated log-likelihood, as determined in the E step, is the job of the maximization (M) phase. This study looked at how well the EM algorithm worked on a made-up compositional dataset with missing observations. It used both the robust least square version and ordinary least square regression techniques. The efficacy of the EM algorithm was compared with two alternative imputation techniques, k-Nearest Neighbor (k-NN) and mean imputation (), in terms of Aitchison distances and covariance.