Accurate prediction of concrete compressive strength is fundamental for optimizing mix designs, improving material utilization, and ensuring structural safety in modern construction. Traditional empirical methods often fail to capture the non-linear relationships among concrete constituents, especially with the growing use of supplementary cementitious materials and recycled aggregates. This study presents an integrated machine learning framework for concrete strength prediction, combining advanced regression models, namely CatBoost, with metaheuristic optimization algorithms, with a particular focus on the Somersaulting Spider Optimizer (SSO). A comprehensive dataset encompassing diverse mix proportions and material types was used to evaluate baseline machine learning models, including CatBoost, XGBoost, ExtraTrees, and RandomForest. Among these, CatBoost demonstrated superior accuracy across multiple performance metrics. To further enhance predictive capability, several bio-inspired optimizers were employed for hyperparameter tuning. The SSO-CatBoost hybrid achieved the lowest mean squared error and highest correlation coefficients, outperforming other metaheuristic approaches such as the Genetic Algorithm, Particle Swarm Optimization, and the Grey Wolf Optimizer. Statistical significance was established through Analysis of Variance and Wilcoxon signed-rank testing, confirming the robustness of the optimized models. The proposed methodology not only delivers improved predictive performance but also offers a transparent framework for mix design optimization, supporting data-driven decision making in sustainable and resilient infrastructure development.
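The abstract does not spell out the SSO update rules, so the hyperparameter-tuning loop it describes can only be sketched generically. Below is a minimal population-based search minimizing a validation-style error; the somersault moves are replaced by Gaussian perturbation around the best candidate, and the toy objective stands in for CatBoost's validation MSE (both are assumptions for illustration).

```python
import random

def tune(evaluate, bounds, pop_size=10, iters=50, seed=0):
    """Generic population-based hyperparameter search (stand-in for SSO).

    `evaluate` maps a candidate vector to a validation error to be
    minimized; candidates are perturbed around the current best.
    """
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=evaluate)
    for _ in range(iters):
        for i in range(pop_size):
            # Gaussian move around the best candidate, clipped to bounds.
            cand = [min(max(b + rng.gauss(0.0, 0.1) * (hi - lo), lo), hi)
                    for b, (lo, hi) in zip(best, bounds)]
            if evaluate(cand) < evaluate(pop[i]):
                pop[i] = cand
        best = min(pop + [best], key=evaluate)
    return best

# Toy stand-in for CatBoost validation MSE over (learning_rate, depth),
# with a known optimum at (0.1, 6.0) -- hypothetical values.
objective = lambda v: (v[0] - 0.1) ** 2 + (v[1] - 6.0) ** 2
best = tune(objective, bounds=[(0.01, 0.3), (2.0, 10.0)])
```

In the actual framework each `evaluate` call would train and validate a CatBoost model, which is why metaheuristics that converge in few evaluations are attractive here.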
The cloud-fog computing paradigm has emerged as a hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing the task efficiently, is a critical issue in achieving a trade-off between energy consumption and transmission delay. In such a network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency: executing lower-priority tasks before higher-priority ones can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives: energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical model of CO needs improvement in computation time and convergence speed; MoECO therefore increases the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step-length operator is adjusted to diversify solutions, improving the exploration phase (global search strategy) and preventing the algorithm from getting trapped in local optima. Moreover, the interaction factor during the exploitation phase is adjusted based on the location of the prey instead of the adjacent cheetah, which increases the exploitation (local search) capability of agents. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between the optimization objectives compared to baseline methods.
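The Pareto-optimal front that MoECO maintains over the two competing objectives can be illustrated with a plain non-dominated filter (a sketch only; the abstract does not specify MoECO's actual archive management):

```python
def dominates(a, b):
    """True if a is no worse than b in every objective and strictly
    better in at least one (minimization in both objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated (energy, delay) trade-off points."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (energy consumption, communication delay) pairs for
# candidate offloading/scheduling plans.
plans = [(5.0, 2.0), (3.0, 4.0), (4.0, 3.0), (6.0, 5.0), (3.5, 3.5)]
front = pareto_front(plans)  # (6.0, 5.0) is dominated and dropped
```

The surviving points are exactly the trade-off curve a decision maker chooses from: no plan on the front can reduce one objective without increasing the other.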
Wireless Sensor Networks (WSNs) have become foundational in numerous real-world applications, ranging from environmental monitoring and industrial automation to healthcare systems and smart city development. As these networks continue to grow in scale and complexity, the need for energy-efficient, scalable, and robust communication protocols becomes more critical than ever. Metaheuristic algorithms have shown significant promise in addressing these challenges, offering flexible and effective solutions for optimizing WSN performance. Among them, the Grey Wolf Optimizer (GWO) algorithm has attracted growing attention due to its simplicity, fast convergence, and strong global search capabilities. Accordingly, this survey provides an in-depth review of the applications of GWO and its variants for clustering, multi-hop routing, and hybrid cluster-based routing in WSNs. We categorize and analyze the existing GWO-based approaches across these key network optimization tasks, discussing the different problem formulations, decision variables, objective functions, and performance metrics used. In doing so, we examine standard GWO, multi-objective GWO, and hybrid GWO models that incorporate other computational intelligence techniques. Each method is evaluated based on how effectively it addresses the core constraints of WSNs, including energy consumption, communication overhead, and network lifetime. Finally, this survey outlines existing gaps in the literature and proposes potential future research directions aimed at enhancing the effectiveness and real-world applicability of GWO-based techniques for WSN clustering and routing. Our goal is to provide researchers and practitioners with a clear, structured understanding of the current state of GWO in WSNs and inspire further innovation in this evolving field.
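For readers unfamiliar with the surveyed algorithm, the canonical GWO position update (guidance by the three best wolves alpha, beta, and delta, with a coefficient `a` decaying linearly from 2 to 0) can be sketched in a few lines; the sphere function here is only a toy objective standing in for a WSN fitness function:

```python
import random

def gwo_step(pack, fitness, t, max_iter, rng):
    """One canonical GWO iteration: every wolf moves toward the three
    fittest wolves; `a` decays from 2 to 0, shifting the pack from
    exploration to exploitation."""
    alpha, beta, delta = sorted(pack, key=fitness)[:3]
    a = 2.0 * (1.0 - t / max_iter)
    new_pack = []
    for x in pack:
        pos = []
        for d in range(len(x)):
            estimates = []
            for leader in (alpha, beta, delta):
                A = a * (2.0 * rng.random() - 1.0)  # in [-a, a]
                C = 2.0 * rng.random()              # in [0, 2]
                D = abs(C * leader[d] - x[d])
                estimates.append(leader[d] - A * D)
            pos.append(sum(estimates) / 3.0)        # average of the three pulls
        new_pack.append(pos)
    return new_pack

rng = random.Random(1)
sphere = lambda x: sum(v * v for v in x)            # toy minimization objective
pack = [[rng.uniform(-5.0, 5.0) for _ in range(3)] for _ in range(20)]
for t in range(100):
    pack = gwo_step(pack, sphere, t, 100, rng)
best_value = min(sphere(x) for x in pack)
```

In the clustering and routing papers surveyed, each wolf instead encodes a cluster-head set or a route, and `fitness` aggregates energy, distance, and lifetime terms.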
Optimization algorithms are crucial for solving NP-hard problems in engineering and computational sciences. Metaheuristic algorithms, in particular, have proven highly effective in complex optimization scenarios characterized by high dimensionality and intricate variable relationships. The Mountain Gazelle Optimizer (MGO) is notably effective but struggles to balance local search refinement and global space exploration, often leading to premature convergence and entrapment in local optima. This paper presents the Improved MGO (IMGO), which integrates three synergistic enhancements: dynamic chaos mapping using piecewise chaotic sequences to boost exploration diversity; Opposition-Based Learning (OBL) with adaptive, diversity-driven activation to speed up convergence; and structural refinements to the position update mechanisms to enhance exploitation. The IMGO underwent a comprehensive evaluation using 52 standardised benchmark functions and seven engineering optimization problems. Benchmark evaluations showed that IMGO achieved the highest rank in best solution quality for 31 functions, the highest rank in mean performance for 18 functions, and the highest rank in worst-case performance for 14 functions among 11 competing algorithms. Statistical validation using Wilcoxon signed-rank tests confirmed that IMGO outperformed individual competitors across 16 to 50 functions, depending on the algorithm. Friedman ranking analysis placed IMGO at an average rank of 4.15, compared to the baseline MGO's 4.38, establishing the best overall performance. The evaluation of engineering problems revealed consistent improvements, including an optimal cost of 1.6896 for the welded beam design vs. MGO's 1.7249, a minimum cost of 5885.33 for the pressure vessel design vs. MGO's 6300, and a minimum weight of 2964.52 kg for the speed reducer design vs. MGO's 2990.00 kg. Ablation studies identified OBL as the strongest individual contributor, whereas complete integration achieved superior performance through synergistic interactions among components. Computational complexity analysis established an O(T×N×5×f(P)) time complexity, representing a 1.25× increase in fitness evaluations relative to the baseline MGO and validating favorable accuracy-efficiency trade-offs for practical optimization applications.
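Of the three IMGO enhancements, Opposition-Based Learning is the simplest to illustrate: each candidate x is compared against its opposite point lo + hi − x, and the fitter of the pair is kept. The bounds, objective, and unconditional activation below are illustrative assumptions (IMGO activates OBL adaptively based on population diversity):

```python
import random

def opposition(pop, lo, hi, fitness):
    """Opposition-Based Learning pass: for each candidate x, evaluate
    its opposite point lo + hi - x and keep whichever is fitter
    (minimization)."""
    refined = []
    for x in pop:
        opp = [lo + hi - v for v in x]
        refined.append(x if fitness(x) <= fitness(opp) else opp)
    return refined

rng = random.Random(0)
# Illustrative objective with optimum at 2.0 in every coordinate.
obj = lambda x: sum((v - 2.0) ** 2 for v in x)
pop = [[rng.uniform(0.0, 10.0) for _ in range(4)] for _ in range(30)]
refined = opposition(pop, 0.0, 10.0, obj)
```

By construction the pass can never make a candidate worse, which is why OBL is a cheap, low-risk way to accelerate early convergence.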
Magnetic Resonance Imaging (MRI) plays a pivotal role in medical image analysis owing to its ability to support disease detection and diagnosis. Fuzzy C-Means (FCM) clustering is widely used for MRI segmentation due to its ability to handle image uncertainty. However, FCM still has several limitations, including sensitivity to initialization, susceptibility to local optima, and high computational cost. To address these limitations, this study integrates Grey Wolf Optimization (GWO) with FCM to enhance cluster center selection, improving segmentation accuracy and robustness. To further refine the optimization, Fuzzy Entropy Clustering was employed for its distinctive features relative to other traditional objective functions: fuzzy entropy effectively quantifies uncertainty, leading to more well-defined clusters, improved noise robustness, and better preservation of anatomical structures in MRI images. Despite these advantages, the iterative nature of GWO and FCM introduces significant computational overhead, which restricts their applicability to high-resolution medical images. To overcome this bottleneck, we propose a Parallelized GWO-based FCM (P-GWO-FCM) approach using GPU acceleration, in which both the GWO optimization and the FCM updates (centroid computation and membership matrix updates) are parallelized. By executing these processes concurrently, our approach efficiently distributes the computational workload, significantly reducing execution time while maintaining high segmentation accuracy. P-GWO-FCM was evaluated on both simulated and clinical brain MR images, focusing on segmenting white matter, gray matter, and cerebrospinal fluid regions. The results indicate significant improvements in segmentation accuracy, achieving a Jaccard Similarity (JS) of 0.92, a Partition Coefficient Index (PCI) of 0.91, a Partition Entropy Index (PEI) of 0.25, and a Davies-Bouldin Index (DBI) of 0.30. Experimental comparisons demonstrate that P-GWO-FCM outperforms existing methods in both segmentation accuracy and computational efficiency, making it a promising solution for real-time medical image segmentation.
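The FCM membership update that P-GWO-FCM parallelizes on the GPU is, per pixel, an independent computation over relative inverse distances to the cluster centers, which is exactly what makes it amenable to parallel execution. A serial sketch of the standard update (fuzzifier m = 2, Euclidean distances):

```python
def fcm_memberships(points, centers, m=2.0):
    """Standard FCM membership update: U[i][j] is the degree to which
    point i belongs to cluster j, computed from relative inverse
    distances; each row sums to 1."""
    c = len(centers)
    exp = 2.0 / (m - 1.0)
    U = []
    for p in points:
        # Euclidean distances to every center (floored to avoid /0
        # when a point coincides with a center).
        d = [max(sum((a - b) ** 2 for a, b in zip(p, ctr)) ** 0.5, 1e-12)
             for ctr in centers]
        U.append([1.0 / sum((d[j] / d[k]) ** exp for k in range(c))
                  for j in range(c)])
    return U

pts = [(0.0, 0.0), (1.0, 0.0), (9.0, 9.0)]   # toy 2-D "pixels"
ctrs = [(0.5, 0.0), (9.0, 9.0)]              # two candidate centers
U = fcm_memberships(pts, ctrs)
```

Because each row of U depends only on one point and the shared centers, the loop over points maps directly onto one GPU thread per pixel.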
In Wireless Sensor Networks (WSNs), survivability is a crucial issue that is greatly impacted by energy efficiency. Solutions that satisfy application objectives while extending network life are needed to address the severe energy constraints in WSNs. This paper presents an Adaptive Enhanced Grey Wolf Optimizer (AEGWO) for energy-efficient cluster head (CH) selection that mitigates the exploration-exploitation imbalance, preserves population diversity, and avoids the premature convergence inherent in baseline GWO. AEGWO combines adaptive control of the search-pressure parameter to accelerate convergence without stagnation, a hybrid velocity-momentum update based on PSO dynamics, and an intelligent mutation operator to maintain population diversity. The search is guided by a multi-objective fitness function that maximizes residual energy, balances CH distribution, minimizes intra-cluster distance, favors proximity to the sink, and enhances coverage. Simulations on a 100-node homogeneous WSN tested the proposed AEGWO under the same conditions as LEACH, GWO, IGWO, PSO, WOA, and GA. AEGWO significantly increases stability and lifetime compared to LEACH and the other tested algorithms: it achieves the best first-, half-, and last-node-death times, higher residual energy, and smaller communication overhead. The findings show that AEGWO provides sustainable energy management and better lifetime extension, making it a robust, flexible clustering protocol for large-scale WSNs.
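The multi-objective CH fitness described above can be sketched as a weighted sum over a few of the listed terms; the weights, normalization, and reduction to three terms below are illustrative assumptions, not AEGWO's actual formulation:

```python
def ch_fitness(ch, members, sink, e_res, e_max, w=(0.4, 0.3, 0.3)):
    """Illustrative weighted CH fitness (lower is better): penalizes
    depleted energy, large average intra-cluster distance, and distance
    to the sink. Weights and terms are assumptions for illustration."""
    dist = lambda a, b: ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
    energy_term = 1.0 - e_res / e_max                       # 0 when fully charged
    intra = sum(dist(ch, m) for m in members) / max(len(members), 1)
    return w[0] * energy_term + w[1] * intra + w[2] * dist(ch, sink)

members = [(10.0, 10.0), (12.0, 8.0), (11.0, 12.0)]   # cluster member positions
sink = (15.0, 15.0)
# A well-placed, well-charged candidate vs. a distant, depleted one.
good = ch_fitness((11.0, 10.0), members, sink, e_res=0.9, e_max=1.0)
bad = ch_fitness((30.0, 30.0), members, sink, e_res=0.2, e_max=1.0)
```

AEGWO's optimizer would evaluate such a fitness for every candidate CH set per round, with the coverage and distribution-balance terms added on top.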
The increasing integration of cyber-physical components in Industry 4.0 water infrastructures has heightened the risk of false data injection (FDI) attacks, posing critical threats to operational integrity, resource management, and public safety. Traditional detection mechanisms often struggle to generalize across heterogeneous environments or adapt to sophisticated, stealthy threats. To address these challenges, we propose a novel evolutionary optimized transformer-based deep reinforcement learning framework (Evo-Transformer-DRL) designed for robust and adaptive FDI detection in smart water infrastructures. The proposed architecture integrates three powerful paradigms: a transformer encoder for modeling complex temporal dependencies in multivariate time series, a DRL agent for learning optimal decision policies in dynamic environments, and an evolutionary optimizer to fine-tune model hyper-parameters. This synergy enhances detection performance while maintaining adaptability across varying data distributions. Specifically, hyper-parameters of both the transformer and DRL modules are optimized using an improved grey wolf optimizer (IGWO), ensuring a balanced trade-off between detection accuracy and computational efficiency. The model is trained and evaluated on three realistic Industry 4.0 water datasets: secure water treatment (SWaT), water distribution (WADI), and battle of the attack detection algorithms (BATADAL), which capture diverse attack scenarios in smart treatment and distribution systems. Comparative analysis against state-of-the-art baselines, including Transformer, DRL, bidirectional encoder representations from transformers (BERT), convolutional neural network (CNN), long short-term memory (LSTM), and support vector machine (SVM) models, demonstrates that our proposed Evo-Transformer-DRL framework consistently outperforms the others in key metrics such as accuracy, recall, area under the curve (AUC), and execution time. Notably, it achieves a maximum detection accuracy of 99.19%, highlighting its strong generalization capability across different testbeds. These results confirm the suitability of our hybrid framework for real-world Industry 4.0 deployment, where rapid adaptation, scalability, and reliability are paramount for securing critical infrastructure systems.
Groundwater is a crucial ecological resource and a source of drinking water for a large percentage of the world population. The quality of groundwater in areas with industrial emissions and air pollution is an especially important issue that requires proper evaluation. This paper introduces a spatiotemporal deep learning model that incorporates metaheuristic optimization to predict groundwater quality in various pollution contexts. The method combines the Spatial-Temporal-Assisted Deep Belief Network (StaDBN) with a hybrid Whale Optimization Algorithm and Tiki-Taka Algorithm (WOA-TTA) to model intricate patterns of contamination. Historical groundwater datasets with hydrochemical and temporal data are preprocessed, and pertinent, non-redundant features are selected with the Addax Optimization Algorithm (AOA). Spatial and temporal dependencies are explicitly integrated in the StaDBN architecture to facilitate representation learning, and network hyperparameters are optimized by the WOA-TTA module to increase training efficiency and predictive performance. The model was coded in Python and tested using common statistical measures, such as root mean square error (RMSE), Nash-Sutcliffe efficiency (NSE), mean absolute error (MAE), and the correlation coefficient (R). The proposed GWQP-StaDBN-WOA-TTA framework demonstrates superior predictive performance and interpretability compared to conventional machine learning and deep learning models, achieving higher correlation (R = 0.963), improved Nash-Sutcliffe efficiency (NSE = 0.84), and substantially lower prediction errors (MAE = 0.29, RMSE = 0.48), thereby validating its effectiveness for groundwater quality assessment under industrial and atmospheric pollution scenarios.
Under the influence of human activities, landscape fragmentation in the Wei River Basin (WRB) has become increasingly severe. Upstream development has intensified soil erosion, and industrial and agricultural pollution in the middle reaches has degraded water quality. Rapid urbanization has further caused habitat fragmentation and biodiversity loss. Collectively, these challenges threaten human well-being and hinder sustainable development, making the construction and optimization of an ecological security pattern (ESP) urgently necessary. However, existing studies often fail to systematically integrate future landscape ecological risk (LER) assessment with ESP optimization. This study evaluated regional LER using the "ecological patches-ecological resistance surface (ERS)-ecological corridor" framework, combined with land-use predictions under three development scenarios, and optimized the ESP by adjusting the ERS and extracting ecological corridors. The results indicate that the LER in the WRB follows an "inverted N" distribution, with low-risk areas concentrated in forested mountain regions and high-risk areas mainly in cultivated land subject to intensive human activity. Across future scenarios, ESPs showed fewer ecological breakpoints and improved landscape connectivity relative to the 2020 baseline. Scenario-based differences emerged in the spatial configuration of ERS adjustments, with the ecological protection scenario yielding the lowest LER and the most favorable ESP. This study demonstrates the deep integration of multi-scenario simulation with LER assessment, providing a new framework for ESP optimization. The findings offer guidance for ecological protection and coordinated development in the WRB and a novel paradigm for sustainable development in ecologically fragile basins worldwide.
Variable stiffness composites present a promising solution for mitigating impact loads by varying the fiber volume fraction layer-wise, thereby adjusting the panel's stiffness. Since each layer of the composite may be affected by a different failure mode, the optimal fiber volume fraction to suppress damage initiation and evolution differs across the layers. This research examines how re-allocating the fibers layer-wise enhances the composites' impact resistance. In this study, constant stiffness panels with the same fiber volume fraction throughout the layers are compared to variable stiffness ones in which the volume fraction varies layer-wise. A method is established that couples numerical analysis with optimization techniques to determine the optimal fiber volume fraction in both scenarios. Three different reinforcement fibers (Kevlar, carbon, and glass) embedded in epoxy resin were studied, and panels were manufactured and tested under various loading conditions to validate the results. Kevlar reinforcement revealed the highest tensile toughness, followed by carbon and then glass fibers. Varying the reinforcement volume fraction significantly influences failure modes: higher fractions lead to matrix cracking and debonding, while lower fractions result in more fiber breakage. The optimal volume fraction for maximizing fiber breakage energy is around 45%, whereas it is about 90% for matrix cracking and debonding. A drop tower test was used to examine the composite structure's behavior under low-velocity impact, confirming the superiority of Kevlar-reinforced composites with variable stiffness. Conversely, glass-reinforced composites with constant stiffness showed the lowest performance, with the highest deflection. Across all reinforcement materials, the variable stiffness structure consistently outperformed its constant stiffness counterpart.
Data serves as the foundation for training and testing machine learning and artificial intelligence models. The most fundamental part of data is its attributes, or features. The feature set size changes from one dataset to another, and only the relevant features contribute meaningfully to classification accuracy; the presence of irrelevant features reduces the system's effectiveness. Classification performance often deteriorates on high-dimensional datasets due to the large search space. Thus, the dimensionality of datasets is one of the significant obstacles affecting the performance of the learning process in the majority of machine learning and data mining techniques. Feature selection (FS) is an effective preprocessing step in classification tasks. The aim of applying FS is to exclude redundant and unrelated features while retaining the most informative ones, optimizing classification capability and reducing computational complexity. In this paper, a novel hybrid binary metaheuristic algorithm, termed hSC-FPA, is proposed by hybridizing the Flower Pollination Algorithm (FPA) and the Sine Cosine Algorithm (SCA). The hybridization combines the exploration capacity of SCA with the exploitation behavior of FPA to maintain a balanced search process: SCA guides the global search in the early iterations, while FPA's local pollination refines promising solutions in later stages. A binary conversion mechanism using a threshold function is implemented to handle the discrete nature of the feature selection problem. The proposed hSC-FPA is validated on fourteen standard datasets from the UCI repository using the K-Nearest Neighbors (K-NN) classifier, with results benchmarked against the standalone SCA and FPA algorithms. The hSC-FPA consistently achieves higher classification accuracy, selects a more compact feature subset, and demonstrates superior convergence behavior. These findings support the stability and superiority of the presented hybrid feature selection method.
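The binary conversion mechanism mentioned above maps a continuous search position to a 0/1 feature mask. A common choice, assumed here since the abstract only says "a threshold function", is an S-shaped sigmoid transfer:

```python
import math
import random

def binarize(position, rng):
    """S-shaped transfer: bit d of the feature mask is set to 1 with
    probability sigmoid(x_d), turning a continuous search position into
    a discrete feature-selection vector."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    return [1 if rng.random() < sigmoid(x) else 0 for x in position]

rng = random.Random(42)
# Strongly negative components are almost never selected; strongly
# positive ones almost always are.
mask = binarize([-6.0, 0.0, 6.0, 2.5], rng)
selected = [i for i, bit in enumerate(mask) if bit == 1]
```

The K-NN classifier is then trained on only the `selected` columns, and the resulting accuracy (often combined with a subset-size penalty) becomes the fitness driving the SCA/FPA search.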
The failure of liquid storage tanks, among the most critical and widely used infrastructure systems, during severe earthquakes can have direct or indirect impacts on public safety. The significance of their safe performance even after destructive earthquakes, and their potential for operational use, underscores the necessity of appropriate seismic design. Hence, seismic isolation, specifically base isolation, has gained attention as a seismic control method to reduce damage to these infrastructures by increasing their vibration period. One prevalent type of seismic isolator used for tanks and other structures is the friction pendulum system (FPS) isolator. However, due to its fixed period or frequency, it may be susceptible to resonance effects during long-period earthquakes. This research explores an alternative solution by investigating the variable-curvature friction pendulum isolator (VFPI). This isolator type exhibits behavior similar to that of FPS isolators under low excitations and transforms into a pure friction system under high excitations. The study proposes optimizing this VFPI, which features a polynomial function and is termed the Polynomial Friction Pendulum Isolator (PFPI), by introducing a suitable optimization function to minimize the acceleration transmitted to the superstructure, thereby improving the dynamic performance of the elevated storage tank. The research utilizes two well-established metaheuristic algorithms for optimization and evaluates the effectiveness of the proposed isolator through time history analysis using the state space procedure under various ground motion records. Results, particularly under long-period ground motions, indicate a substantial reduction in the dynamic response of an elevated liquid storage tank equipped with the optimized PFPI, underscoring the potential of the proposed solution for enhancing the seismic resilience of liquid storage tanks.
Early and accurate detection of bone cancer and marrow cell abnormalities is critical for timely intervention and improved patient outcomes. This paper proposes a novel hybrid deep learning framework that integrates a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory (BiLSTM) architecture, optimized using the Firefly Optimization algorithm (FO). The proposed CNN-BiLSTM-FO model is tailored for structured biomedical data, capturing both local patterns and sequential dependencies in diagnostic features, while the Firefly Algorithm fine-tunes key hyperparameters to maximize predictive performance. The approach is evaluated on two benchmark biomedical datasets: one comprising diagnostic data for bone cancer detection and another for identifying marrow cell abnormalities. Experimental results demonstrate that the proposed method significantly outperforms standard deep learning models, including CNN, LSTM, BiLSTM, and CNN-LSTM hybrids. The CNN-BiLSTM-FO model achieves an accuracy of 98.55% for bone cancer detection and 96.04% for marrow abnormality classification. The paper also presents a detailed complexity analysis of the proposed algorithm and compares its performance across multiple evaluation metrics, such as precision, recall, F1-score, and AUC. The results confirm the effectiveness of the firefly-based optimization strategy in improving classification accuracy and model robustness. This work introduces a scalable and accurate diagnostic solution with strong potential for integration into intelligent clinical decision-support systems.
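The Firefly Algorithm used to tune the CNN-BiLSTM hyperparameters follows a simple attraction rule: each firefly moves toward every brighter one, with attractiveness decaying exponentially in distance. A sketch of one iteration on a toy continuous objective (the actual hyperparameter encoding and brightness function are not given in the abstract):

```python
import math
import random

def firefly_step(flies, brightness, beta0=1.0, gamma=1.0, alpha=0.05, rng=None):
    """One Firefly Algorithm iteration: each firefly moves toward every
    brighter one with attractiveness beta0 * exp(-gamma * r^2), plus a
    small random walk scaled by alpha."""
    rng = rng or random.Random(0)
    new = [list(f) for f in flies]          # synchronous update
    for i, fi in enumerate(flies):
        for fj in flies:
            if brightness(fj) > brightness(fi):   # fj is brighter
                r2 = sum((a - b) ** 2 for a, b in zip(fi, fj))
                beta = beta0 * math.exp(-gamma * r2)
                for d in range(len(fi)):
                    new[i][d] += beta * (fj[d] - fi[d]) + alpha * (rng.random() - 0.5)
    return new

rng = random.Random(3)
# Toy stand-in for "validation accuracy": brighter means closer to the origin.
brightness = lambda x: -sum(v * v for v in x)
flies = [[rng.uniform(-2.0, 2.0) for _ in range(2)] for _ in range(15)]
before = max(brightness(f) for f in flies)
for _ in range(30):
    flies = firefly_step(flies, brightness, rng=rng)
after = max(brightness(f) for f in flies)
```

In the paper's setting, each firefly would encode hyperparameters such as learning rate and layer sizes, and brightness would be the validation accuracy of the resulting CNN-BiLSTM.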
An optimized volt-ampere reactive (VAR) control framework is proposed for transmission-level power systems to simultaneously mitigate voltage deviations and active-power losses through coordinated control of large-scale wind/solar farms with shunt static var generators (SVGs). The model explicitly represents the reactive-power regulation characteristics of doubly-fed wind turbines and PV inverters under real-time meteorological conditions, and quantifies the SVGs' high-speed compensation capability, enabling a seamless transition from localized VAR management to a globally coordinated strategy. An enhanced adaptive gain-sharing knowledge optimizer (AGSK-SD) integrates simulated annealing and diversity maintenance to autonomously tune voltage-control actions, renewable-source reactive-power set-points, and SVG output. The algorithm adaptively modulates knowledge factors and ratios across search phases, performs SA-based fine-grained local exploitation, and periodically re-injects population diversity to prevent premature convergence. Comprehensive tests on the IEEE 9-bus and 39-bus systems demonstrate AGSK-SD's superiority over NSGA-II and MOPSO in hypervolume (HV), inverted generational distance (IGD), and spread metrics while maintaining an acceptable computational burden. The method reduces network losses from 2.7191 to 2.15 MW (a 20.79% reduction) and from 15.1891 to 11.22 MW (a 26.16% reduction) in the 9-bus and 39-bus systems, respectively. Simultaneously, the cumulative voltage-deviation index decreases from 0.0277 to 3.42×10⁻⁴ p.u. (a 98.77% reduction) in the 9-bus system, and from 0.0556 to 0.0107 p.u. (an 80.76% reduction) in the 39-bus system. These improvements demonstrate significant suppression of line losses and voltage fluctuations. Comparative analysis with traditional heuristic optimization algorithms confirms the superior performance of the proposed approach.
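Hypervolume, one of the metrics used to compare AGSK-SD against NSGA-II and MOPSO, measures the objective-space area dominated by a front relative to a reference point. A minimal 2-D sketch for minimization problems (the front and reference point below are made-up values):

```python
def hypervolume_2d(front, ref):
    """2-D hypervolume (minimization): area dominated by the front and
    bounded above by the reference point `ref`. Sweeps the front in
    ascending order of the first objective."""
    hv, prev_y = 0.0, ref[1]
    for x, y in sorted(front):
        if y < prev_y:                      # point adds a new rectangle
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
    return hv

# Hypothetical (loss, voltage-deviation) trade-off front.
front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
hv = hypervolume_2d(front, ref=(5.0, 5.0))  # area = 4 + 6 + 1 = 11
```

A larger hypervolume means the front pushes further toward the ideal point in both objectives at once, which is why it is a standard summary of multi-objective performance.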
Due to the lack of accurate data and complex parameterization, the prediction of groundwater depth is a challenge for numerical models. Machine learning can effectively solve this issue and has been proven useful in the prediction of groundwater depth in many areas. In this study, two new models are applied to the prediction of groundwater depth in the Ningxia area, China. The two models combine the improved dung beetle optimizer (DBO) algorithm with two deep learning models: the Multi-head Attention-Convolution Neural Network-Long Short Term Memory network (MH-CNN-LSTM) and the Multi-head Attention-Convolution Neural Network-Gated Recurrent Unit (MH-CNN-GRU). The models with DBO show better prediction performance, with larger R (correlation coefficient) and RPD (residual prediction deviation) and lower RMSE (root-mean-square error). Compared with the models with the original DBO, the R and RPD of the models with the improved DBO increase by over 1.5%, and the RMSE decreases by over 1.8%, indicating better prediction results. In addition, compared with the multiple linear regression model, a traditional statistical model, the deep learning models have better prediction performance.
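The three headline metrics above (R, RMSE, and RPD) are straightforward to compute; a minimal sketch, using the population standard deviation of the observations for RPD (a common but not universal convention, and the sample values are hypothetical):

```python
import math

def metrics(obs, pred):
    """Correlation coefficient R, root-mean-square error RMSE, and
    residual prediction deviation RPD = std(obs) / RMSE."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    r = cov / (so * sp)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    rpd = (so / math.sqrt(n)) / rmse        # population std over RMSE
    return r, rmse, rpd

# Hypothetical groundwater-depth observations vs. model predictions (m).
obs = [3.1, 2.8, 3.6, 4.0, 3.3]
pred = [3.0, 2.9, 3.5, 4.1, 3.2]
r, rmse, rpd = metrics(obs, pred)
```

RPD above 2 is often read as a usable predictive model, which is why the study reports it alongside R and RMSE rather than relying on correlation alone.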
The Internet of Things (IoT) is integral to modern infrastructure, enabling connectivity among a wide range of devices, from home automation to industrial control systems. With the exponential increase in data generated by these interconnected devices, robust anomaly detection mechanisms are essential. Anomaly detection in this dynamic environment necessitates methods that can accurately distinguish between normal and anomalous behavior by learning intricate patterns. This paper presents a novel approach utilizing generative adversarial networks (GANs) for anomaly detection in IoT systems. However, optimizing GANs involves tuning hyper-parameters such as learning rate, batch size, and optimization algorithms, which can be challenging due to the non-convex nature of GAN loss functions. To address this, we propose a five-dimensional Gray wolf optimizer (5DGWO) to optimize GAN hyper-parameters. The 5DGWO introduces two new types of wolves: gamma (γ) for improved exploitation and convergence, and theta (θ) for enhanced exploration and escaping local minima. The proposed system framework comprises four key stages: 1) preprocessing, 2) generative model training, 3) autoencoder (AE) training, and 4) predictive model training. The generative models are utilized to assist the AE training, and the final predictive models (including convolutional neural network (CNN), deep belief network (DBN), recurrent neural network (RNN), random forest (RF), and extreme gradient boosting (XGBoost)) are trained using the generated data and AE-encoded features. We evaluated the system on three benchmark datasets: NSL-KDD, UNSW-NB15, and IoT-23. Experiments conducted on diverse IoT datasets show that our method outperforms existing anomaly detection strategies and significantly reduces false positives. The 5DGWO-GAN-CNNAE exhibits superior performance in various metrics, including accuracy, recall, precision, root mean square error (RMSE), and convergence trend. The proposed 5DGWO-GAN-CNNAE achieved the lowest RMSE values across the NSL-KDD, UNSW-NB15, and IoT-23 datasets, with values of 0.24, 1.10, and 0.09, respectively. Additionally, it attained the highest accuracy, ranging from 94% to 100%. These results suggest a promising direction for future IoT security frameworks, offering a scalable and efficient solution to safeguard against evolving cyber threats.
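The leader-guided position update behind grey-wolf-style optimizers can be sketched as follows. This is an illustrative toy, not the authors' 5DGWO implementation: the four-leader averaging (standing in for alpha, beta, gamma, theta roles) and the coefficient schedule are assumptions.

```python
import random

def gwo_step(wolves, fitness, t, T):
    """One iteration of a grey-wolf-style update with four leader roles
    (a stand-in for the alpha/beta/gamma/theta scheme the abstract
    describes); coefficient schedules are illustrative, not the paper's."""
    a = 2.0 * (1 - t / T)                      # linearly decreasing coefficient
    leaders = sorted(wolves, key=fitness)[:4]  # four best wolves lead the pack
    new_wolves = []
    for x in wolves:
        candidates = []
        for lead in leaders:
            pos = []
            for j in range(len(x)):
                r1, r2 = random.random(), random.random()
                A, C = a * (2 * r1 - 1), 2 * r2
                d = abs(C * lead[j] - x[j])    # distance to this leader
                pos.append(lead[j] - A * d)    # move relative to the leader
            candidates.append(pos)
        # average the four leader-guided candidate positions
        new_wolves.append([sum(c[j] for c in candidates) / 4
                           for j in range(len(x))])
    return new_wolves

# toy usage: minimise the 2-D sphere function
random.seed(0)
pack = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(12)]

def sphere(x):
    return sum(v * v for v in x)

best = min(sphere(x) for x in pack)
for t in range(60):
    pack = gwo_step(pack, sphere, t, 60)
    best = min(best, min(sphere(x) for x in pack))
```

In a GAN-tuning setting, each wolf position would encode a hyper-parameter vector and the fitness would be a validation loss rather than the sphere function used here.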
This paper introduces the Surrogate-assisted Multi-objective Grey Wolf Optimizer (SMOGWO) as a novel methodology for addressing the complex problem of empty-heavy train allocation, with a focus on line utilization balance. By integrating surrogate models to approximate the objective functions, SMOGWO significantly improves the efficiency and accuracy of the optimization process. The effectiveness of this approach is evaluated using the CEC2009 multi-objective test function suite, where SMOGWO achieves a superiority rate of 76.67% compared to other leading multi-objective algorithms. Furthermore, the practical applicability of SMOGWO is demonstrated through a case study on empty and heavy train allocation, which validates its ability to balance line capacity, minimize transportation costs, and optimize the technical combination of heavy trains. The research highlights SMOGWO's potential as a robust solution for optimization challenges in railway transportation, offering valuable contributions toward enhancing operational efficiency and promoting sustainable development in the sector.
Stereoscopic agriculture, as an advanced method of agricultural production, poses new challenges for multi-task trajectory planning of unmanned aerial vehicles (UAVs). To address the need for UAVs to perform multi-task trajectory planning in stereoscopic agriculture, a multi-task trajectory planning model and algorithm (IEP-AO) that synthesizes flight safety and flight efficiency is proposed. Based on the requirements of stereoscopic agricultural geomorphological features and operational characteristics, the multi-task trajectory planning model is strengthened by constructing targeted constraints on path, slope, altitude, corner, energy, and obstacle threat, to improve the effectiveness of the trajectory planning model. Combined with the path optimization algorithm, an Aquila optimizer (IEP-AO) based on the interference-enhanced combination model is proposed, which helps UAVs improve trajectory search capability in complex operation spaces and large-scale operation tasks, and jump out of locally optimal trajectory regions in a timely manner, to generate an optimal trajectory plan that adapts to task diversity and flight efficiency. Meanwhile, four simulated flights with different operation scales and scene constraints were conducted in a constructed real 3D scene, and the experimental results show that the proposed multi-task trajectory planning method can meet the multi-task requirements of stereoscopic agriculture and improve the mission execution efficiency and agricultural production effect of UAVs.
Abstract: Accurate prediction of concrete compressive strength is fundamental for optimizing mix designs, improving material utilization, and ensuring structural safety in modern construction. Traditional empirical methods often fail to capture the non-linear relationships among concrete constituents, especially with the growing use of supplementary cementitious materials and recycled aggregates. This study presents an integrated machine learning framework for concrete strength prediction, combining advanced regression models, namely CatBoost, with metaheuristic optimization algorithms, with a particular focus on the Somersaulting Spider Optimizer (SSO). A comprehensive dataset encompassing diverse mix proportions and material types was used to evaluate baseline machine learning models, including CatBoost, XGBoost, ExtraTrees, and RandomForest. Among these, CatBoost demonstrated superior accuracy across multiple performance metrics. To further enhance predictive capability, several bio-inspired optimizers were employed for hyperparameter tuning. The SSO-CatBoost hybrid achieved the lowest mean squared error and highest correlation coefficients, outperforming other metaheuristic approaches such as the Genetic Algorithm, Particle Swarm Optimization, and the Grey Wolf Optimizer. Statistical significance was established through Analysis of Variance and Wilcoxon signed-rank testing, confirming the robustness of the optimized models. The proposed methodology not only delivers improved predictive performance but also offers a transparent framework for mix design optimization, supporting data-driven decision making in sustainable and resilient infrastructure development.
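A metaheuristic hyperparameter search of the kind this abstract describes can be sketched as a population-based loop over candidate settings. This is a minimal stand-in, not the SSO algorithm or a CatBoost run: `proxy_mse` is a hypothetical surrogate for the validation error that would normally come from training a model with each candidate.

```python
import random

# Hypothetical stand-in for a validation-MSE objective; in the study this
# would train a CatBoost model with the candidate hyper-parameters.
def proxy_mse(params):
    depth, lr, l2 = params
    return (depth - 6) ** 2 + 40 * (lr - 0.1) ** 2 + 0.1 * (l2 - 3) ** 2

# assumed tuning ranges: depth, learning_rate, l2_leaf_reg
bounds = [(2, 12), (0.01, 0.5), (0.0, 10.0)]

def tune(objective, bounds, pop=20, iters=60, seed=1):
    """Minimal population-based tuner (random perturbations around the
    incumbent best, with a shrinking radius) standing in for a
    metaheuristic such as SSO."""
    rng = random.Random(seed)
    sample = lambda: [rng.uniform(lo, hi) for lo, hi in bounds]
    best = min((sample() for _ in range(pop)), key=objective)
    for t in range(iters):
        radius = 1.0 - t / iters               # shrink the search radius
        for _ in range(pop):
            cand = [min(hi, max(lo, b + 0.3 * radius * rng.uniform(-(hi - lo), hi - lo)))
                    for b, (lo, hi) in zip(best, bounds)]
            if objective(cand) < objective(best):
                best = cand                    # keep improvements only
    return best

best = tune(proxy_mse, bounds)
```

Swapping `proxy_mse` for a real cross-validated training run, and the perturbation rule for the optimizer's own update equations, recovers the tuning pipeline the study compares across metaheuristics.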
Funding: supported by the Princess Nourah bint Abdulrahman University Researchers Supporting Project, number PNURSP2025R384, Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: The cloud-fog computing paradigm has emerged as a novel hybrid computing model that integrates computational resources at both fog nodes and cloud servers to address the challenges posed by dynamic and heterogeneous computing networks. Finding an optimal computational resource for task offloading, and then executing tasks efficiently, is a critical issue in achieving a trade-off between energy consumption and transmission delay. In this network, processing a task at fog nodes reduces transmission delay but increases energy consumption, while routing tasks to the cloud server saves energy at the cost of higher communication delay. Moreover, the order in which offloaded tasks are executed affects the system's efficiency. For instance, executing lower-priority tasks before higher-priority jobs can disturb the reliability and stability of the system. Therefore, an efficient strategy for optimal computation offloading and task scheduling is required for operational efficacy. In this paper, we introduce a multi-objective and enhanced version of the Cheetah Optimizer (CO), namely MoECO, to jointly optimize computation offloading and task scheduling in cloud-fog networks and minimize two competing objectives, i.e., energy consumption and communication delay. MoECO first assigns tasks to the optimal computational nodes, and the allocated tasks are then scheduled for processing based on task priority. The mathematical modelling of CO needs improvement in computation time and convergence speed. Therefore, MoECO is proposed to increase the search capability of agents by controlling the search strategy based on a leader's location. The adaptive step length operator is adjusted to diversify the solutions and thus improve the exploration phase, i.e., the global search strategy. Consequently, this prevents the algorithm from getting trapped in local optimal solutions. Moreover, the interaction factor during the exploitation phase is also adjusted based on the location of the prey instead of the adjacent cheetah. This increases the exploitation capability of agents, i.e., local search capability. Furthermore, MoECO employs a multi-objective Pareto-optimal front to simultaneously minimize the designated objectives. Comprehensive simulations in MATLAB demonstrate that the proposed algorithm obtains multiple solutions via a Pareto-optimal front and achieves an efficient trade-off between optimization objectives compared to baseline methods.
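The Pareto-optimal front mentioned above reduces to a non-dominance filter over (energy, delay) pairs. The sketch below shows that filter only, not MoECO itself; the sample solutions are hypothetical offloading plans.

```python
def pareto_front(solutions):
    """Non-dominated filter for two minimisation objectives
    (energy, delay): a solution survives only if no other solution is
    at least as good in both objectives and strictly better in one."""
    def dominated(s):
        return any(o != s and o[0] <= s[0] and o[1] <= s[1]
                   and (o[0] < s[0] or o[1] < s[1]) for o in solutions)
    return [s for s in solutions if not dominated(s)]

# hypothetical (energy, delay) pairs for five candidate offloading plans
sols = [(5, 9), (3, 7), (6, 2), (4, 4), (8, 8)]
front = pareto_front(sols)   # (5, 9) and (8, 8) are dominated
```

The surviving front exposes the energy/delay trade-off: no member can improve one objective without worsening the other, which is exactly the set of solutions a multi-objective optimizer returns to the operator.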
Abstract: Wireless Sensor Networks (WSNs) have become foundational in numerous real-world applications, ranging from environmental monitoring and industrial automation to healthcare systems and smart city development. As these networks continue to grow in scale and complexity, the need for energy-efficient, scalable, and robust communication protocols becomes more critical than ever. Metaheuristic algorithms have shown significant promise in addressing these challenges, offering flexible and effective solutions for optimizing WSN performance. Among them, the Grey Wolf Optimizer (GWO) algorithm has attracted growing attention due to its simplicity, fast convergence, and strong global search capabilities. Accordingly, this survey provides an in-depth review of the applications of GWO and its variants for clustering, multi-hop routing, and hybrid cluster-based routing in WSNs. We categorize and analyze the existing GWO-based approaches across these key network optimization tasks, discussing the different problem formulations, decision variables, objective functions, and performance metrics used. In doing so, we examine standard GWO, multi-objective GWO, and hybrid GWO models that incorporate other computational intelligence techniques. Each method is evaluated based on how effectively it addresses the core constraints of WSNs, including energy consumption, communication overhead, and network lifetime. Finally, this survey outlines existing gaps in the literature and proposes potential future research directions aimed at enhancing the effectiveness and real-world applicability of GWO-based techniques for WSN clustering and routing. Our goal is to provide researchers and practitioners with a clear, structured understanding of the current state of GWO in WSNs and inspire further innovation in this evolving field.
Abstract: Optimization algorithms are crucial for solving NP-hard problems in engineering and computational sciences. Metaheuristic algorithms, in particular, have proven highly effective in complex optimization scenarios characterized by high dimensionality and intricate variable relationships. The Mountain Gazelle Optimizer (MGO) is notably effective but struggles to balance local search refinement and global space exploration, often leading to premature convergence and entrapment in local optima. This paper presents the Improved MGO (IMGO), which integrates three synergistic enhancements: dynamic chaos mapping using piecewise chaotic sequences to boost exploration diversity; Opposition-Based Learning (OBL) with adaptive, diversity-driven activation to speed up convergence; and structural refinements to the position update mechanisms to enhance exploitation. The IMGO underwent a comprehensive evaluation using 52 standardised benchmark functions and seven engineering optimization problems. Benchmark evaluations showed that IMGO achieved the highest rank in best solution quality for 31 functions, the highest rank in mean performance for 18 functions, and the highest rank in worst-case performance for 14 functions among 11 competing algorithms. Statistical validation using Wilcoxon signed-rank tests confirmed that IMGO outperformed individual competitors across 16 to 50 functions, depending on the algorithm. At the same time, Friedman ranking analysis placed IMGO at an average rank of 4.15, compared to the baseline MGO's 4.38, establishing the best overall performance. The evaluation on engineering problems revealed consistent improvements, including an optimal cost of 1.6896 for the welded beam design vs. MGO's 1.7249, a minimum cost of 5885.33 for the pressure vessel design vs. MGO's 6300, and a minimum weight of 2964.52 kg for the speed reducer design vs. MGO's 2990.00 kg. Ablation studies identified OBL as the strongest individual contributor, whereas complete integration achieved superior performance through synergistic interactions among components. Computational complexity analysis established an O(T×N×5×f(P)) time complexity, representing a 1.25× increase in fitness evaluation relative to the baseline MGO, validating the favorable accuracy-efficiency trade-offs for practical optimization applications.
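The two main ingredients named above, a piecewise chaotic sequence and Opposition-Based Learning, can be sketched in a few lines. The map's parameterisation and the OBL mirror are illustrative assumptions, not IMGO's exact formulation.

```python
def piecewise_chaotic(x, p=0.4):
    """One step of a piecewise chaotic map on (0, 1); such sequences are
    commonly used to diversify population initialisation. The segment
    layout and p = 0.4 are illustrative choices."""
    if x < p:
        return x / p
    if x < 0.5:
        return (x - p) / (0.5 - p)
    if x < 1 - p:
        return (1 - p - x) / (0.5 - p)
    return (1 - x) / p

def opposition(x, lo, hi):
    """Opposition-Based Learning: reflect a candidate across the centre
    of each dimension's search interval, giving a second candidate that
    probes the opposite region of the space."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

# chaotic sequence for diversity, then an OBL mirror of one candidate
seq = [0.37]
for _ in range(5):
    seq.append(piecewise_chaotic(seq[-1]))
mirror = opposition([1.0, -2.0, 3.5], [-5.0] * 3, [5.0] * 3)
```

In an OBL-augmented optimizer, both the original candidate and its mirror are evaluated and the fitter of the two is kept, which is what accelerates convergence.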
Abstract: Magnetic Resonance Imaging (MRI) has a pivotal role in medical image analysis for its ability to support disease detection and diagnosis. Fuzzy C-Means (FCM) clustering is widely used for MRI segmentation due to its ability to handle image uncertainty. However, FCM still has notable limitations, including sensitivity to initialization, susceptibility to local optima, and high computational cost. To address these limitations, this study integrates Grey Wolf Optimization (GWO) with FCM to enhance cluster center selection, improving segmentation accuracy and robustness. Moreover, to further refine the optimization, Fuzzy Entropy Clustering was utilized for its distinctive features compared with other traditional objective functions. Fuzzy entropy effectively quantifies uncertainty, leading to more well-defined clusters, improved noise robustness, and better preservation of anatomical structures in MRI images. Despite these advantages, the iterative nature of GWO and FCM introduces significant computational overhead, which restricts their applicability to high-resolution medical images. To overcome this bottleneck, we propose a Parallelized GWO-based FCM (P-GWO-FCM) approach using GPU acceleration, where both the GWO optimization and the FCM updates (centroid computation and membership matrix updates) are parallelized. By concurrently executing these processes, our approach efficiently distributes the computational workload, significantly reducing execution time while maintaining high segmentation accuracy. The proposed parallel method, P-GWO-FCM, was evaluated on both simulated and clinical brain MR images, focusing on segmenting white matter, gray matter, and cerebrospinal fluid regions. The results indicate significant improvements in segmentation accuracy, achieving a Jaccard Similarity (JS) of 0.92, a Partition Coefficient Index (PCI) of 0.91, a Partition Entropy Index (PEI) of 0.25, and a Davies-Bouldin Index (DBI) of 0.30. Experimental comparisons demonstrate that P-GWO-FCM outperforms existing methods in both segmentation accuracy and computational efficiency, making it a promising solution for real-time medical image segmentation.
Funding: the Open Access publication fee for this article was fully covered by Abu Dhabi University.
Abstract: In Wireless Sensor Networks (WSNs), survivability is a crucial issue that is greatly impacted by energy efficiency. Solutions that satisfy application objectives while extending network life are needed to address the severe energy constraints in WSNs. This paper presents an Adaptive Enhanced Grey Wolf Optimizer (AEGWO) for energy-efficient cluster head (CH) selection that mitigates the exploration-exploitation imbalance, preserves population diversity, and avoids the premature convergence inherent in baseline GWO. The AEGWO combines adaptive control of the search-pressure parameter to accelerate convergence without stagnation, a hybrid velocity-momentum update based on PSO dynamics, and an intelligent mutation operator to maintain population diversity. The search is guided by a multi-objective fitness that aims to maximize residual energy, distribute CHs evenly, minimize intra-cluster distance, maintain desirable proximity to the sink, and enhance coverage. Simulations on a 100-node homogeneous WSN tested the proposed AEGWO under the same conditions as LEACH, GWO, IGWO, PSO, WOA, and GA. AEGWO significantly increases stability and lifetime compared to LEACH and the other tested algorithms; it achieves the best first-, half-, and last-node-death times, higher residual energy, and smaller communication overhead. The findings show that AEGWO provides sustainable energy management and better lifetime extension, making it a robust, flexible clustering protocol for large-scale WSNs.
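A multi-objective cluster-head fitness of the kind this abstract lists can be expressed as a weighted score per candidate. The weights and node model below are hypothetical, a sketch of the idea rather than AEGWO's actual fitness function.

```python
import math

def ch_fitness(ch, nodes, sink, w=(0.4, 0.35, 0.25)):
    """Illustrative weighted fitness for one cluster-head candidate,
    combining three of the criteria the abstract lists: residual energy
    (maximised), mean intra-cluster distance and sink distance (both
    minimised). Weights are assumed, not taken from the paper."""
    dist = lambda a, b: math.dist(a, b)
    residual = ch["energy"]
    intra = sum(dist(ch["pos"], n["pos"]) for n in nodes) / len(nodes)
    to_sink = dist(ch["pos"], sink)
    # reward energy, penalise both distance terms
    return w[0] * residual - w[1] * intra - w[2] * to_sink

# toy topology: four member nodes, a distant sink, two CH candidates
nodes = [{"pos": (x, y), "energy": 0.5} for x in (0, 4) for y in (0, 4)]
sink = (2.0, 10.0)
cands = [{"pos": (2, 2), "energy": 0.9}, {"pos": (0, 0), "energy": 0.9}]
best = max(cands, key=lambda c: ch_fitness(c, nodes, sink))
```

With equal energies, the centrally placed candidate wins because it minimises both the intra-cluster and sink distance terms, which is the behaviour the fitness is designed to reward.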
Abstract: The increasing integration of cyber-physical components in Industry 4.0 water infrastructures has heightened the risk of false data injection (FDI) attacks, posing critical threats to operational integrity, resource management, and public safety. Traditional detection mechanisms often struggle to generalize across heterogeneous environments or adapt to sophisticated, stealthy threats. To address these challenges, we propose a novel evolutionary optimized transformer-based deep reinforcement learning framework (Evo-Transformer-DRL) designed for robust and adaptive FDI detection in smart water infrastructures. The proposed architecture integrates three powerful paradigms: a transformer encoder for modeling complex temporal dependencies in multivariate time series, a DRL agent for learning optimal decision policies in dynamic environments, and an evolutionary optimizer to fine-tune model hyper-parameters. This synergy enhances detection performance while maintaining adaptability across varying data distributions. Specifically, the hyper-parameters of both the transformer and DRL modules are optimized using an improved grey wolf optimizer (IGWO), ensuring a balanced trade-off between detection accuracy and computational efficiency. The model is trained and evaluated on three realistic Industry 4.0 water datasets: secure water treatment (SWaT), water distribution (WADI), and battle of the attack detection algorithms (BATADAL), which capture diverse attack scenarios in smart treatment and distribution systems. Comparative analysis against state-of-the-art baselines, including Transformer, DRL, bidirectional encoder representations from transformers (BERT), convolutional neural network (CNN), long short-term memory (LSTM), and support vector machine (SVM) models, demonstrates that the proposed Evo-Transformer-DRL framework consistently outperforms the others in key metrics such as accuracy, recall, area under the curve (AUC), and execution time. Notably, it achieves a maximum detection accuracy of 99.19%, highlighting its strong generalization capability across different testbeds. These results confirm the suitability of our hybrid framework for real-world Industry 4.0 deployment, where rapid adaptation, scalability, and reliability are paramount for securing critical infrastructure systems.
Funding: supported by the Research Support Program for Central Labs at King Khalid University through project number CL/CO/B/6.
Abstract: Groundwater is a crucial ecological resource and a source of drinking water for a large percentage of the world population. The quality of groundwater in areas with industrial emissions and air pollution is an especially important issue that requires proper evaluation. This paper introduces a spatiotemporal deep learning model that incorporates metaheuristic optimization to predict groundwater quality in various pollution contexts. The method combines the Spatial-Temporal-Assisted Deep Belief Network (StaDBN) with a hybrid Whale Optimization Algorithm and Tiki-Taka Algorithm (WOA-TTA) to model intricate patterns of contamination. Historical groundwater datasets with hydrochemical and temporal data are preprocessed, and pertinent, non-redundant features are determined with the Addax Optimization Algorithm (AOA). Spatial and temporal dependencies are explicitly integrated in the StaDBN architecture to facilitate representation learning, and the network hyperparameters are optimized by the WOA-TTA module to increase training efficiency and predictive performance. The model was coded in Python and tested using common statistical measures, such as root mean square error (RMSE), Nash-Sutcliffe efficiency (NSE), mean absolute error (MAE), and the correlation coefficient (R). The proposed GWQP-StaDBN-WOA-TTA framework demonstrates superior predictive performance and interpretability compared to conventional machine learning and deep learning models, achieving higher correlation (R = 0.963), improved Nash-Sutcliffe efficiency (NSE = 0.84), and substantially lower prediction errors (MAE = 0.29, RMSE = 0.48), thereby validating its effectiveness for groundwater quality assessment under industrial and atmospheric pollution scenarios.
Funding: supported by the National Natural Science Foundation of China [Grant No. 42361040].
Abstract: Under the influence of human activities, landscape fragmentation in the Wei River Basin (WRB) has become increasingly severe. Upstream development has intensified soil erosion, and industrial and agricultural pollution in the middle reaches has degraded water quality. Rapid urbanization has further caused habitat fragmentation and biodiversity loss. Collectively, these challenges threaten human well-being and hinder sustainable development, making the construction and optimization of an ecological security pattern (ESP) urgently necessary. However, existing studies often fail to systematically integrate future landscape ecological risk (LER) assessment with ESP optimization. This study evaluated regional LER using the "ecological patches - ecological resistance surface (ERS) - ecological corridor" framework, combined with land-use predictions under three development scenarios, and optimized the ESP by adjusting the ERS and extracting ecological corridors. The results indicate that the LER in the WRB follows an "inverted N" distribution, with low-risk areas concentrated in forested mountain regions and high-risk areas mainly in cultivated land subject to intensive human activity. Across future scenarios, ESPs showed fewer ecological breakpoints and improved landscape connectivity compared with the 2020 baseline. Scenario-based differences emerged in the spatial configuration of ERS adjustments, with the ecological protection scenario yielding the lowest LER and most favorable ESP. This study demonstrates the deep integration of multi-scenario simulation with LER assessment, providing a new framework for ESP optimization. The findings have guiding significance for ecological protection and coordinated development in the WRB and offer a novel paradigm for sustainable development in ecologically fragile basins worldwide.
Funding: funded by the American University of Sharjah, United Arab Emirates, award number EN 9502-FRG19-M-E75.
Abstract: Variable stiffness composites present a promising solution for mitigating impact loads by varying the fiber volume fraction layer-wise, thereby adjusting the panel's stiffness. Since each layer of the composite may be affected by a different failure mode, the optimal fiber volume fraction for suppressing damage initiation and evolution differs across the layers. This research examines how re-allocating the fibers layer-wise enhances the composites' impact resistance. In this study, constant stiffness panels with the same fiber volume fraction throughout the layers are compared to variable stiffness ones with the volume fraction varied layer-wise. A method is established that couples numerical analysis with optimization techniques to determine the optimal fiber volume fraction in both scenarios. Three different reinforcement fibers (Kevlar, carbon, and glass) embedded in epoxy resin were studied. Panels were manufactured and tested under various loading conditions to validate the results. Kevlar reinforcement revealed the highest tensile toughness, followed by carbon and then glass fibers. Varying the reinforcement volume fraction significantly influences failure modes: higher fractions lead to matrix cracking and debonding, while lower fractions result in more fiber breakage. The optimal volume fraction for maximizing fiber breakage energy is around 45%, whereas it is about 90% for matrix cracking and debonding. A drop tower test was used to examine the composite structure's behavior under low-velocity impact, confirming the superiority of Kevlar-reinforced composites with variable stiffness. Conversely, glass-reinforced composites with constant stiffness showed the lowest performance, with the highest deflection. Across all reinforcement materials, the variable stiffness structure consistently outperformed its constant stiffness counterpart.
Funding: supported by a research grant from Lahore College for Women University (LCWU), Lahore, Pakistan.
Abstract: Data serves as the foundation for training and testing machine learning and artificial intelligence models. The most fundamental part of data is its attributes, or features. The feature set size changes from one dataset to another, and only the relevant features contribute meaningfully to classification accuracy; the presence of irrelevant features reduces the system's effectiveness. Classification performance often deteriorates on high-dimensional datasets due to the large search space. Thus, one of the significant obstacles affecting the performance of the learning process in the majority of machine learning and data mining techniques is the dimensionality of the datasets. Feature selection (FS) is an effective preprocessing step in classification tasks. The aim of applying FS is to exclude redundant and unrelated features while retaining the most informative ones, to optimize classification capability and reduce computational complexity. In this paper, a novel hybrid binary metaheuristic algorithm, termed hSC-FPA, is proposed by hybridizing the Flower Pollination Algorithm (FPA) and the Sine Cosine Algorithm (SCA). Hybridization combines the exploration capacity of SCA and the exploitation behavior of FPA to maintain a balanced search process: SCA guides the global search in the early iterations, while FPA's local pollination refines promising solutions in later stages. A binary conversion mechanism using a threshold function is implemented to handle the discrete nature of the feature selection problem. The functionality of the proposed hSC-FPA is validated on fourteen standard datasets from the UCI repository using the K-Nearest Neighbors (K-NN) classifier. Experimental results are benchmarked against the standalone SCA and FPA algorithms. The hSC-FPA consistently achieves higher classification accuracy, selects a more compact feature subset, and demonstrates superior convergence behavior. These findings support the stability and superior performance of the presented hybrid feature selection method.
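The binary conversion step this abstract mentions is commonly realised with a sigmoid transfer function and a fixed threshold. The sketch below shows that common scheme as an assumption, since the abstract does not specify hSC-FPA's exact threshold function.

```python
import math

def binarise(position, threshold=0.5):
    """Map a continuous metaheuristic position vector to a binary
    feature mask: squash each coordinate through a sigmoid, then keep
    the feature when the result exceeds the threshold. The sigmoid +
    0.5 threshold is a standard choice, assumed here."""
    return [1 if 1 / (1 + math.exp(-x)) > threshold else 0 for x in position]

# toy continuous position over five features -> binary selection mask
mask = binarise([2.1, -0.7, 0.0, 3.3, -1.8])
selected = [i for i, b in enumerate(mask) if b]   # indices of kept features
```

The resulting mask is what gets scored: the wrapper evaluates a K-NN classifier on only the selected columns, and the classification accuracy (often penalised by subset size) becomes the candidate's fitness.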
Abstract: The failure of liquid storage tanks, among the most critical and widely used infrastructure systems, during severe earthquakes can have direct or indirect impacts on public safety. The importance of their safe performance even after destructive earthquakes, and their potential for continued operational use, underscores the necessity of appropriate seismic design. Hence, seismic isolation, specifically base isolation, has gained attention as a seismic control method that reduces damage to these infrastructures by increasing their vibration period. One prevalent type of seismic isolator used for tanks and other structures is the friction pendulum system (FPS) isolator. However, due to its fixed period or frequency, it may be susceptible to resonance effects during long-period earthquakes. This research explores an alternative solution by investigating the variable-curvature friction pendulum isolator (VFPI). This isolator type exhibits behavior similar to that of FPS isolators under low excitations and transforms into a pure friction system under high excitations. The study proposes optimizing a VFPI whose sliding surface follows a polynomial function, termed the Polynomial Friction Pendulum Isolator (PFPI), by introducing a suitable optimization function to minimize the acceleration transmitted to the superstructure, thereby improving the dynamic performance of the elevated storage tank. The research utilizes two well-established metaheuristic algorithms for optimization and evaluates the effectiveness of the proposed isolator through time history analysis using the state space procedure under various ground motion records. Results, particularly under long-period ground motions, indicate a substantial reduction in the dynamic response of an elevated liquid storage tank equipped with the optimized PFPI. This underscores the potential of the proposed solution for enhancing the seismic resilience of liquid storage tanks.
Abstract: Early and accurate detection of bone cancer and marrow cell abnormalities is critical for timely intervention and improved patient outcomes. This paper proposes a novel hybrid deep learning framework that integrates a Convolutional Neural Network (CNN) with a Bidirectional Long Short-Term Memory (BiLSTM) architecture, optimized using the Firefly Optimization algorithm (FO). The proposed CNN-BiLSTM-FO model is tailored for structured biomedical data, capturing both local patterns and sequential dependencies in diagnostic features, while the Firefly Algorithm fine-tunes key hyperparameters to maximize predictive performance. The approach is evaluated on two benchmark biomedical datasets: one comprising diagnostic data for bone cancer detection and another for identifying marrow cell abnormalities. Experimental results demonstrate that the proposed method significantly outperforms standard deep learning models, including CNN, LSTM, BiLSTM, and CNN-LSTM hybrids. The CNN-BiLSTM-FO model achieves an accuracy of 98.55% for bone cancer detection and 96.04% for marrow abnormality classification. The paper also presents a detailed complexity analysis of the proposed algorithm and compares its performance across multiple evaluation metrics such as precision, recall, F1-score, and AUC. The results confirm the effectiveness of the firefly-based optimization strategy in improving classification accuracy and model robustness. This work introduces a scalable and accurate diagnostic solution with strong potential for integration into intelligent clinical decision-support systems.
Funding: supported by Yunnan Power Grid Co., Ltd. Science and Technology Project: Research and application of key technologies for graphical-based power grid accident reconstruction and simulation (YNKJXM20240333).
Abstract: An optimized volt-ampere reactive (VAR) control framework is proposed for transmission-level power systems to simultaneously mitigate voltage deviations and active-power losses through coordinated control of large-scale wind/solar farms with shunt static var generators (SVGs). The model explicitly represents the reactive-power regulation characteristics of doubly-fed wind turbines and PV inverters under real-time meteorological conditions, and quantifies the SVGs' high-speed compensation capability, enabling a seamless transition from localized VAR management to a globally coordinated strategy. An enhanced adaptive gain-sharing knowledge optimizer (AGSK-SD) integrates simulated annealing and diversity maintenance to autonomously tune voltage-control actions, renewable-source reactive-power set-points, and SVG output. The algorithm adaptively modulates knowledge factors and ratios across search phases, performs SA-based fine-grained local exploitation, and periodically re-injects population diversity to prevent premature convergence. Comprehensive tests on the IEEE 9-bus and 39-bus systems demonstrate AGSK-SD's superiority over NSGA-II and MOPSO in hypervolume (HV), inverted generational distance (IGD), and spread metrics while maintaining an acceptable computational burden. The method reduces network losses from 2.7191 to 2.15 MW (a 20.79% reduction) and from 15.1891 to 11.22 MW (a 26.16% reduction) in the 9-bus and 39-bus systems, respectively. Simultaneously, the cumulative voltage-deviation index decreases from 0.0277 to 3.42×10^(-4) p.u. (a 98.77% reduction) in the 9-bus system, and from 0.0556 to 0.0107 p.u. (an 80.76% reduction) in the 39-bus system. These improvements demonstrate significant suppression of line losses and voltage fluctuations. Comparative analysis with traditional heuristic optimization algorithms confirms the superior performance of the proposed approach.
Funding: supported by the National Natural Science Foundation of China [grant numbers 42088101 and 42375048].
Abstract: Due to the lack of accurate data and complex parameterization, the prediction of groundwater depth is a challenge for numerical models. Machine learning can effectively address this issue and has proven useful for predicting groundwater depth in many areas. In this study, two new models are applied to the prediction of groundwater depth in the Ningxia area, China. The two models combine the improved dung beetle optimizer (DBO) algorithm with two deep learning models: the Multi-head Attention-Convolutional Neural Network-Long Short-Term Memory network (MH-CNN-LSTM) and the Multi-head Attention-Convolutional Neural Network-Gated Recurrent Unit (MH-CNN-GRU). The models with DBO show better prediction performance, with larger R (correlation coefficient) and RPD (residual prediction deviation), and lower RMSE (root-mean-square error). Compared with the models using the original DBO, the R and RPD of the models with the improved DBO increase by over 1.5%, and the RMSE decreases by over 1.8%, indicating better prediction results. In addition, compared with the multiple linear regression model, a traditional statistical model, the deep learning models have better prediction performance.
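The three evaluation measures this abstract uses (R, RMSE, and RPD) can be computed directly from observed and predicted series. The sketch below uses standard definitions with RPD as the sample standard deviation of the observations divided by RMSE; the toy data is illustrative.

```python
import math

def pearson_r(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def scores(obs, pred):
    """R, RMSE and RPD for a prediction run. RPD is taken as the sample
    std of the observations divided by RMSE (a common definition)."""
    n = len(obs)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
    mean = sum(obs) / n
    std = math.sqrt(sum((o - mean) ** 2 for o in obs) / (n - 1))
    return pearson_r(obs, pred), rmse, std / rmse

# illustrative groundwater-depth observations vs. model predictions (m)
obs = [3.2, 4.1, 5.0, 6.3, 7.1]
pred = [3.0, 4.3, 5.2, 6.0, 7.4]
r, rmse, rpd = scores(obs, pred)
```

Larger R and RPD with smaller RMSE is exactly the pattern the study reports for the DBO-assisted models, so these three numbers suffice to reproduce its comparison table for any pair of series.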
Funding: The work described in this paper has been developed within the project PRESECREL (PID2021-124502OB-C43).
Abstract: The Internet of Things (IoT) is integral to modern infrastructure, enabling connectivity among a wide range of devices from home automation to industrial control systems. With the exponential increase in data generated by these interconnected devices, robust anomaly detection mechanisms are essential. Anomaly detection in this dynamic environment necessitates methods that can accurately distinguish between normal and anomalous behavior by learning intricate patterns. This paper presents a novel approach utilizing generative adversarial networks (GANs) for anomaly detection in IoT systems. However, optimizing GANs involves tuning hyperparameters such as the learning rate, batch size, and optimization algorithm, which can be challenging due to the non-convex nature of GAN loss functions. To address this, we propose a five-dimensional gray wolf optimizer (5DGWO) to optimize GAN hyperparameters. The 5DGWO introduces two new types of wolves: gamma (γ) for improved exploitation and convergence, and theta (θ) for enhanced exploration and escaping local minima. The proposed system framework comprises four key stages: 1) preprocessing, 2) generative model training, 3) autoencoder (AE) training, and 4) predictive model training. The generative models are utilized to assist the AE training, and the final predictive models (including convolutional neural network (CNN), deep belief network (DBN), recurrent neural network (RNN), random forest (RF), and extreme gradient boosting (XGBoost)) are trained using the generated data and AE-encoded features. We evaluated the system on three benchmark datasets: NSL-KDD, UNSW-NB15, and IoT-23. Experiments conducted on diverse IoT datasets show that our method outperforms existing anomaly detection strategies and significantly reduces false positives. The 5DGWO-GAN-CNNAE exhibits superior performance in various metrics, including accuracy, recall, precision, root mean square error (RMSE), and convergence trend. The proposed 5DGWO-GAN-CNNAE achieved the lowest RMSE values across the NSL-KDD, UNSW-NB15, and IoT-23 datasets, with values of 0.24, 1.10, and 0.09, respectively. Additionally, it attained the highest accuracy, ranging from 94% to 100%. These results suggest a promising direction for future IoT security frameworks, offering a scalable and efficient solution to safeguard against evolving cyber threats.
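The 5DGWO extends canonical grey wolf optimization, in which each wolf's position is pulled toward the three leading wolves (alpha, beta, delta), by adding gamma and theta leaders. The sketch below shows a single canonical GWO position update generalized to an arbitrary number of leaders; it is an illustration of the underlying update rule, not the paper's implementation.

```python
import random

def gwo_step(wolf, leaders, a):
    """One grey-wolf position update for a minimization search.
    `wolf` is the current position, `leaders` the positions of the best
    wolves (three in canonical GWO; five in a 5DGWO-style variant), and
    `a` decays from 2 to 0 over iterations to shift from exploration
    to exploitation."""
    new_pos = []
    for d in range(len(wolf)):
        estimates = []
        for leader in leaders:
            r1, r2 = random.random(), random.random()
            A = 2 * a * r1 - a                    # step coefficient (|A|>1: explore)
            C = 2 * r2                            # leader-emphasis coefficient
            D = abs(C * leader[d] - wolf[d])      # distance to this leader
            estimates.append(leader[d] - A * D)   # candidate drawn toward the leader
        new_pos.append(sum(estimates) / len(estimates))  # average over all leaders
    return new_pos
```

As `a` decays to 0 the coefficient `A` vanishes, so every estimate collapses onto the corresponding leader's coordinate and the pack converges on the leaders' consensus position.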
Funding: Supported by the National Natural Science Foundation of China (Project Nos. 52172321 and 52102391), the Sichuan Province Science and Technology Innovation Talent Project (2024JDRC0020), the China Shenhua Energy Company Limited Technology Project (GJNY-22-7/2300-K1220053), and the Key Science and Technology Projects in the Transportation Industry of the Ministry of Transport (2022-ZD7-132).
Abstract: This paper introduces the Surrogate-assisted Multi-objective Grey Wolf Optimizer (SMOGWO) as a novel methodology for addressing the complex problem of empty-heavy train allocation, with a focus on line utilization balance. By integrating surrogate models to approximate the objective functions, SMOGWO significantly improves the efficiency and accuracy of the optimization process. The effectiveness of this approach is evaluated using the CEC2009 multi-objective test function suite, where SMOGWO achieves a superiority rate of 76.67% compared to other leading multi-objective algorithms. Furthermore, the practical applicability of SMOGWO is demonstrated through a case study on empty and heavy train allocation, which validates its ability to balance line capacity, minimize transportation costs, and optimize the technical combination of heavy trains. The research highlights SMOGWO's potential as a robust solution for optimization challenges in railway transportation, offering valuable contributions toward enhancing operational efficiency and promoting sustainable development in the sector.
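The key idea of surrogate assistance is to replace most calls to an expensive objective with a cheap interpolant fitted to an archive of past exact evaluations. The sketch below uses simple inverse-distance weighting as the surrogate; the paper does not specify this particular model, so treat it as one representative choice among many (kriging, RBF networks, etc.).

```python
def idw_surrogate(archive, x, power=2, eps=1e-12):
    """Cheap inverse-distance-weighted estimate of an expensive objective.
    `archive` is a list of (point, true_objective_value) pairs from past
    exact evaluations; the optimizer can screen new candidates against
    this estimate and reserve exact evaluations for the most promising."""
    num = den = 0.0
    for xi, fi in archive:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        if d2 < eps:                 # candidate coincides with an archived point
            return fi
        w = 1.0 / d2 ** (power / 2)  # closer archived points get more weight
        num += w * fi
        den += w
    return num / den
```

Screening candidates with such a surrogate is what lets SMOGWO-style methods cut the number of expensive objective evaluations while still ranking candidates sensibly.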
Funding: Funded by the Jiangxi Provincial Social Science Planning Project (21GL12), the Jiangxi Provincial Higher Education Humanities and Social Sciences Planning Project (GL22232), and the Jiangxi Province College Students' Innovation and Entrepreneurship Training Program Project (S20241041027).
Abstract: Stereoscopic agriculture, as an advanced method of agricultural production, poses new challenges for the multi-task trajectory planning of unmanned aerial vehicles (UAVs). To address the need for UAVs to perform multi-task trajectory planning in stereoscopic agriculture, a multi-task trajectory planning model and algorithm (IEP-AO) that synthesizes flight safety and flight efficiency is proposed. Based on the geomorphological features and operational characteristics of stereoscopic agriculture, the multi-task trajectory planning model is built by constructing targeted constraints on the path, slope, altitude, corner, energy, and obstacle threat, to improve the effectiveness of the trajectory planning model. Combined with the path optimization algorithm, an improved Aquila optimizer (IEP-AO) based on an interference-enhanced combination model is proposed, which helps UAVs improve their trajectory search capability in complex operation spaces and large-scale operation tasks, and jump out of locally optimal trajectory regions in a timely manner, to generate an optimal trajectory plan that adapts to the diversity of tasks while maintaining flight efficiency. Four simulated flights with different operation scales and different scene constraints were conducted in a constructed real 3D scene, and the experimental results show that the proposed multi-task trajectory planning method can meet the multi-task requirements of stereoscopic agriculture and improve the mission execution efficiency and agricultural production effect of UAVs.
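Constraint types like those listed above (slope, altitude, corner) are typically enforced as a feasibility screen on candidate waypoint sequences before the optimizer scores them. The sketch below is a hypothetical screen of that kind; the limit values, function name, and the reduction of the paper's six constraint families to three geometric checks are all illustrative assumptions.

```python
import math

def path_feasible(waypoints, max_slope_deg=30.0, min_turn_cos=-0.5, min_alt=5.0):
    """Reject a candidate UAV path whose climb slope, turn sharpness, or
    altitude violates the given limits. `waypoints` is a list of (x, y, z)."""
    for i in range(1, len(waypoints)):
        x0, y0, z0 = waypoints[i - 1]
        x1, y1, z1 = waypoints[i]
        if z1 < min_alt:
            return False                          # altitude floor violated
        horiz = math.hypot(x1 - x0, y1 - y0)
        if horiz > 0 and math.degrees(math.atan2(abs(z1 - z0), horiz)) > max_slope_deg:
            return False                          # climb/descent slope too steep
        if i >= 2:                                # corner (turn-angle) constraint
            px, py, pz = waypoints[i - 2]
            v1 = (x0 - px, y0 - py, z0 - pz)      # previous segment direction
            v2 = (x1 - x0, y1 - y0, z1 - z0)      # current segment direction
            n1 = math.sqrt(sum(c * c for c in v1))
            n2 = math.sqrt(sum(c * c for c in v2))
            if n1 > 0 and n2 > 0:
                cos_turn = sum(a * b for a, b in zip(v1, v2)) / (n1 * n2)
                if cos_turn < min_turn_cos:       # heading change sharper than ~120°
                    return False
    return True
```

In a metaheuristic planner such as IEP-AO, infeasible candidates flagged by a screen like this are usually discarded or penalized, steering the population toward trajectories that satisfy all flight-safety constraints.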