The Steiner k-eccentricity of a vertex is the maximum Steiner distance over all k-sets containing the given vertex, where the Steiner distance of a vertex set is the size of a minimum Steiner tree on this set. Since the minimum Steiner tree problem is well known to be NP-hard, the Steiner k-eccentricity is not easy to compute. This paper attempts to solve this problem efficiently on block graphs and on general graphs with few cycles. A block graph, also called a clique tree, is a graph in which each block is a clique. On block graphs, we propose an O(k(n+m))-time algorithm to compute the Steiner k-eccentricity of a vertex, where n and m are, respectively, the order and size of the block graph. On general graphs with few cycles, we take as a parameter the cyclomatic number ν(G), the minimum number of edges of G whose removal makes G acyclic, and devise an O(n^(ν(G)+1)(n(G)+m(G)+k))-time algorithm.
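To make the quantity being computed concrete, here is a minimal sketch of Steiner distance in the special case where the host graph is a tree: there the minimum Steiner tree is unique and can be found by repeatedly pruning non-terminal leaves. This is only an illustration of the definition; the abstract's O(k(n+m)) block-graph algorithm and the cyclomatic-number parameterization are more involved.

```python
from collections import defaultdict

def steiner_distance(edges, terminals):
    """Edge count of the minimal subtree of a tree spanning `terminals`.

    `edges` lists (u, v) pairs of a tree. Non-terminal leaves are pruned
    repeatedly; the surviving edges form the unique minimal Steiner tree.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    terminals = set(terminals)
    # Repeatedly remove leaves that are not terminals.
    leaves = [v for v in adj if len(adj[v]) == 1 and v not in terminals]
    while leaves:
        leaf = leaves.pop()
        (nbr,) = adj.pop(leaf)
        adj[nbr].discard(leaf)
        if len(adj[nbr]) == 1 and nbr not in terminals:
            leaves.append(nbr)
    # A tree on n nodes has n - 1 edges; guard the single-terminal case.
    return max(len(adj) - 1, 0)
```

For a path 1-2-3-4-5, the set {1, 5} has Steiner distance 4 (the whole path), matching the definition of Steiner distance as minimum spanning-subtree size.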
Optimization is the key to efficient utilization of resources in structural design. Given the complex nature of truss systems, this study presents a metaheuristic-based method that minimises structural weight under stress and frequency constraints. Two new algorithms, the Red Kite Optimization Algorithm (ROA) and the Secretary Bird Optimization Algorithm (SBOA), are applied to five benchmark trusses with 10, 18, 37, 72, and 200 bars. Both algorithms are evaluated against benchmarks in the literature. The results indicate that SBOA consistently reaches lighter optimal designs, reducing structural weight by 0.02% to 0.15% compared with ROA and by up to 6%-8% compared with conventional algorithms. In addition, SBOA achieves 15%-20% faster convergence and a 10%-18% reduction in computational time, with a smaller standard deviation over independent runs, demonstrating its robustness and reliability. The adaptive exploration mechanism of SBOA, especially its Levy flight-based search strategy, markedly improves optimization performance for both low- and high-dimensional trusses. By demonstrating the viability of SBOA as a reliable model for large-scale structural design with significant gains in performance and convergence behavior, the research promotes bio-inspired optimization techniques.
We study the split common solution problem with multiple output sets for monotone operator equations in Hilbert spaces. To solve this problem, we propose two new parallel algorithms. We establish a weak convergence theorem for the first and a strong convergence theorem for the second.
In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. The system enables end nodes to select the optimum time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical security and privacy issues. Traditional protocols are often too computationally heavy for 6G services to achieve their expected Quality of Service (QoS). As the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they transport. Moreover, traditional schemes for private anonymous ad hoc communication are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, with the proposed technology, the probability of a successful intelligent attack is reduced by up to 65% compared with ad hoc networks that have no privacy protection strategy, while the congestion probability remains below 0.001%, as required by 6G services.
To address the issue of abnormal energy consumption fluctuations in the converter steelmaking process, an integrated diagnostic method combining the gray wolf optimization (GWO) algorithm, support vector machine (SVM), and K-means clustering was proposed. Eight input parameters, derived from molten iron conditions and external factors, were selected as feature variables. A GWO-SVM model was developed to accurately predict the energy consumption of individual heats. Based on the prediction results, the mean absolute percentage error and maximum relative error of the test set were employed as criteria to identify heats with abnormal energy usage. For these heats, the K-means clustering algorithm was used to determine benchmark values of influencing factors from similar steel grades, enabling root-cause diagnosis of excessive energy consumption. The proposed method was applied to real production data from a converter in a steel plant. The analysis reveals that heat sample No. 44 exhibits abnormal energy consumption because gas recovery is 1430.28 kg of standard coal below the benchmark level; a secondary contributing factor is a steam recovery shortfall of 237.99 kg of standard coal. This integrated approach offers a scientifically grounded tool for energy management in converter operations and provides valuable guidance for optimizing process parameters and enhancing energy efficiency.
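The abnormal-heat screening step described above (MAPE plus a relative-error cutoff) can be sketched as follows; the function name and threshold are illustrative assumptions, not the paper's exact criteria.

```python
def flag_abnormal_heats(actual, predicted, max_rel_err):
    """Flag heats whose relative prediction error exceeds a threshold.

    The GWO-SVM model predicts per-heat energy consumption; heats whose
    relative error exceeds `max_rel_err` are treated as candidates for
    root-cause diagnosis. Returns (MAPE in percent, abnormal indices).
    """
    rel_errs = [abs(a - p) / abs(a) for a, p in zip(actual, predicted)]
    mape = 100.0 * sum(rel_errs) / len(rel_errs)
    abnormal = [i for i, e in enumerate(rel_errs) if e > max_rel_err]
    return mape, abnormal
```

In practice the flagged heats would then be compared against K-means benchmark values of the influencing factors, as the abstract describes.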
In the context of rural revitalization and the development of smart agriculture, image classification technology based on deep learning has emerged as a crucial tool for digital monitoring and intelligent prevention and control of agricultural diseases. This paper provides a systematic review of the evolutionary development of algorithms within this field. Addressing challenges such as domain drift and limited global awareness in classical convolutional neural networks (CNNs) applied to complex agricultural environments, the paper focuses on the latest advancements in vision transformers (ViT) and their hybrid architectures to enhance cross-domain robustness and fine-grained recognition capabilities. In response to the challenges posed by scarce long-tail data and limited edge computing power in real-world scenarios, the paper explores solutions related to few-shot learning and ultra-lightweight network deployment. Finally, a forward-looking analysis is presented on the application paradigms of multimodal feature fusion, vision-based large models, and explainable artificial intelligence (AI) within smart plant protection. This analysis aims to offer theoretical insights for the development of efficient and transparent intelligent diagnostic systems for agricultural diseases, thereby supporting the advancement of digital agriculture and the construction of a robust agricultural nation.
Accurate prediction of flood events is important for flood control and risk management. Machine learning techniques have contributed greatly to advances in flood prediction, and existing studies have mainly focused on predicting flood resource variables using single or hybrid machine learning techniques. However, class-based flood prediction has rarely been investigated, although it can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies. This study proposed an approach for predicting flood regime metrics and event classes that couples machine learning algorithms with clustering-deduced membership degrees. Five algorithms were adopted for this exploration. Results showed that the class membership degrees accurately determined event classes, with class hit rates up to 100%, compared with the four classes clustered from nine regime metrics. The nonlinear algorithms (Multiple Linear Regression, Random Forest, and least squares Support Vector Machine) outperformed the linear techniques (Multiple Linear Regression and Stepwise Regression) in predicting flood regime metrics. The proposed approach predicted flood event classes well, with average class hit rates of 66.0%-85.4% and 47.2%-76.0% in the calibration and validation periods, respectively, particularly for slow and late flood events. The predictive capability of the proposed approach for flood regime metrics and classes was considerably stronger than that of the hydrological modeling approach.
Concrete-filled steel tubes (CFST) are widely utilized in civil engineering due to their superior load-bearing capacity, ductility, and seismic resistance. However, existing design codes, such as AISC and Eurocode 4, tend to be excessively conservative because they fail to account for the composite action between the steel tube and the concrete core. To address this limitation, this study proposes a hybrid model that integrates XGBoost with the Pied Kingfisher Optimizer (PKO), a nature-inspired algorithm, to enhance the accuracy of shear strength prediction for CFST columns. Additionally, quantile regression is employed to construct prediction intervals for the ultimate shear force, while the Asymmetric Squared Error Loss (ASEL) function is incorporated to mitigate overestimation errors. The computational results demonstrate that the PKO-XGBoost model delivers superior predictive accuracy, achieving a Mean Absolute Percentage Error (MAPE) of 4.431% and an R² of 0.9925 on the test set. Furthermore, the ASEL-PKO-XGBoost model substantially reduces overestimation errors, to 28.26%, with negligible impact on predictive performance. In addition, based on a Genetic Algorithm (GA) and existing equation models, a strength equation model is developed that achieves markedly higher accuracy than existing models (R² = 0.934). Lastly, web-based Graphical User Interfaces (GUIs) were developed to enable real-time prediction.
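The abstract does not give the exact ASEL formula, but the idea of penalizing overestimation can be sketched with a common asymmetric squared loss, written in the gradient/hessian form that gradient-boosting libraries such as XGBoost accept for custom objectives (the real XGBoost callback receives a DMatrix rather than a plain label list; this simplified signature is an assumption for illustration).

```python
def asel_objective(alpha):
    """Build an asymmetric squared error objective returning (grad, hess).

    Illustrative form: overestimates (pred > y) are weighted by alpha > 1,
    pushing the booster toward conservative shear-strength predictions.
    """
    def objective(preds, y_true):
        grad, hess = [], []
        for p, y in zip(preds, y_true):
            w = alpha if p > y else 1.0   # heavier penalty on overestimation
            grad.append(2.0 * w * (p - y))
            hess.append(2.0 * w)
        return grad, hess
    return objective
```

With alpha = 1 this reduces to ordinary squared error, so the asymmetry is a tunable design choice rather than a different model class.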
Cemented tailings backfill (CTB) with initial defects is more prone to destabilization damage under various unfavorable factors during the mining process. To investigate the influence of such defects on the stability of underground mining engineering, this paper simulates different degrees of initial defects inside CTB by adding different contents of air-entraining agent (AEA), investigates the acoustic emission RA/AF eigenvalues of CTB with different AEA contents under uniaxial compression, and adopts various denoising algorithms (e.g., moving average smoothing, median filtering, and outlier detection) to improve data accuracy. The variance and autocorrelation coefficients of the RA/AF parameters were analyzed in conjunction with critical slowing down (CSD) theory. The results show that the acoustic emission RA/AF values can characterize the progressive damage evolution of CTB. The denoising algorithms reduced the effects of extraneous noise and anomalous spikes in the AE signals. Changes in the variance curves provide clear precursor information, while abrupt changes in the autocorrelation coefficient can serve as an auxiliary warning signal. The dramatic increase in the variance and autocorrelation coefficient curves during the compression-tightening stage, which is influenced by the initial defects, can lead to false warnings. As the initial defects of CTB increase, its instability precursor time and instability time are prolonged, the peak stress decreases, and the time difference between the precursor and the instability damage becomes smaller. The results provide a new method for real-time monitoring and early warning of CTB instability damage.
Optimizing convolutional neural networks (CNNs) for IoT attack detection remains a critical yet challenging task due to the need to balance multiple performance metrics beyond mere accuracy. This study proposes a unified and flexible optimization framework that leverages metaheuristic algorithms to automatically optimize CNN configurations for IoT attack detection. Unlike conventional single-objective approaches, the proposed method formulates a global multi-objective fitness function that integrates accuracy, precision, recall, and model size (a speed/model-complexity penalty) with adjustable weights. This design enables both single-objective and weighted-sum multi-objective optimization, allowing adaptive selection of optimal CNN configurations for diverse deployment requirements. Two representative metaheuristic algorithms, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are employed to optimize CNN hyperparameters and structure. At each generation/iteration, the best configuration is selected as the most balanced solution across the optimization objectives, i.e., the one achieving the maximum value of the global objective function. Experimental validation on two benchmark datasets, Edge-IIoT and CIC-IoT2023, demonstrates that the proposed GA- and PSO-based models significantly enhance detection accuracy (94.8%-98.3%) and generalization compared with manually tuned CNN configurations, while maintaining compact architectures. The results confirm that the multi-objective framework effectively balances predictive performance and computational efficiency. This work establishes a generalizable and adaptive optimization strategy for deep learning-based IoT attack detection and provides a foundation for future hybrid metaheuristic extensions in broader IoT security applications.
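The weighted-sum global fitness the abstract describes can be sketched in a few lines; the weight names and values below are illustrative assumptions, and the size penalty is subtracted so that smaller models score higher.

```python
def global_fitness(metrics, weights):
    """Weighted-sum fitness combining accuracy, precision, recall and a
    model-size penalty, as in the abstract's unified objective.
    All metric values are assumed to lie in [0, 1]."""
    return (weights["accuracy"] * metrics["accuracy"]
            + weights["precision"] * metrics["precision"]
            + weights["recall"] * metrics["recall"]
            - weights["size"] * metrics["size_penalty"])
```

Setting every weight to zero except accuracy recovers single-objective optimization, which is how one framework covers both modes mentioned in the abstract.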
For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study investigates solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, the desired performance may not be attainable when information is incomplete. To address this issue, an extended iterative learning control scheme is introduced, in which the tracking errors are modified through output data sampling; this incorporates a low memory footprint and offers flexibility in learning gain design. The input sequence is shown to converge to the desired input, yielding the output closest to the given reference in the least-squares sense. Numerical simulations validate the theoretical findings.
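The P-type update the abstract builds on is simple enough to show directly. The sketch below runs it on a static scalar plant y = b·u (an assumption for illustration, not the paper's system); the update u_{k+1} = u_k + γ·e_k converges when |1 - γb| < 1.

```python
def p_type_ilc(reference, gain, iterations, plant_gain=0.8):
    """Minimal P-type iterative learning control on a static scalar plant.

    Each trial applies the whole input profile, measures the tracking
    error, and corrects the input with the previous trial's error:
    u_{k+1} = u_k + gain * e_k. Numbers are illustrative only.
    """
    u = [0.0] * len(reference)
    e = list(reference)
    for _ in range(iterations):
        y = [plant_gain * ui for ui in u]             # run one trial
        e = [r - yi for r, yi in zip(reference, y)]   # tracking error
        u = [ui + gain * ei for ui, ei in zip(u, e)]  # P-type update
    return u, e
```

Here the error contracts by the factor (1 - gain·plant_gain) per trial, which is the elementary version of the convergence arguments the paper develops for uncertain systems.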
Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability to diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five ensemble machine learning (ML) algorithms, random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, varying geometrical and soil parameters. The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard-deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils and can help reduce computational costs while improving reliability in the early stages of foundation design.
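Three of the evaluation metrics named above can be computed from first principles, which makes their relationship explicit: RSR is simply RMSE divided by the standard deviation of the observations. A minimal sketch:

```python
import math

def regression_metrics(y_true, y_pred):
    """R², RMSE and RSR (RMSE / std of observations) from scratch.

    Uses the population standard deviation of the observed values; the
    paper's exact conventions (e.g. sample vs population std) are not
    stated in the abstract, so this is one reasonable choice.
    """
    n = len(y_true)
    mean = sum(y_true) / n
    sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    sst = sum((t - mean) ** 2 for t in y_true)
    rmse = math.sqrt(sse / n)
    r2 = 1.0 - sse / sst
    rsr = rmse / math.sqrt(sst / n)
    return {"r2": r2, "rmse": rmse, "rsr": rsr}
```

Because RSR normalizes RMSE by the spread of the data, it allows error comparisons across Nc datasets with different scales.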
Lithium-ion batteries are the preferred green energy storage method and are equipped with intelligent battery management systems (BMSs) that manage them efficiently. This not only ensures the safety performance of the batteries but also significantly improves their efficiency and reduces their damage rate. Throughout their whole life cycle, lithium-ion batteries undergo aging and performance degradation due to diverse external environments and irregular degradation of internal materials. This degradation is reflected in the state of health (SOH) assessment. Therefore, this review offers the first comprehensive analysis of battery SOH estimation strategies across the entire life cycle over the past five years, highlighting common research focuses rooted in data-driven methods. It delves into dimensions such as dataset integration and preprocessing, health feature parameter extraction, and the construction of SOH estimation models. These approaches unearth hidden insights within data, addressing the inherent tension between computational complexity and estimation accuracy. To enhance support for in-vehicle implementation, cloud computing, and the echelon technologies of battery recycling, remanufacturing, and reuse, as well as to offer insights into these technologies, a segmented management approach will be introduced in the future. This will encompass source domain data processing, multi-feature factor reconfiguration, hybrid drive modeling, parameter correction mechanisms, and full-time health management. Based on the best SOH estimation outcomes, health strategies tailored to different stages can be devised, leading to the establishment of a comprehensive SOH assessment framework. This will mitigate cross-domain distribution disparities and facilitate adaptation to a broader array of dynamic operation protocols. This article reviews the current research landscape from four perspectives and discusses the challenges that lie ahead. Researchers and practitioners can gain a comprehensive understanding of battery SOH estimation methods, offering valuable insights for the development of advanced battery management systems and embedded application research.
The economic dispatch problem (EDP) of microgrids operating in both grid-connected and isolated modes within an energy internet framework is addressed in this paper. The multi-agent leader-following consensus algorithm is employed to address the EDP of microgrids in grid-connected mode, while the push-pull algorithm with a fixed step size is introduced for the isolated mode. The proposed algorithm for the isolated mode is proven to converge to the optimum when the interaction digraph of the microgrids is strongly connected. A unified algorithmic framework is proposed to handle the two operating modes simultaneously, enabling the algorithm to achieve optimal power allocation and maintain the balance between power supply and demand in any mode and under any mode switching. Owing to the push-pull structure of the algorithm and the use of a fixed step size, the proposed algorithm better handles unbalanced graphs, and the convergence speed is improved. It is shown that when the transmission topology is strongly connected and there is bi-directional communication between the energy router and its neighbors, the proposed algorithm in composite mode achieves economic dispatch even with arbitrary mode switching. Finally, we demonstrate the effectiveness and superiority of our algorithm through numerical simulations.
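For quadratic generation costs, the optimum that consensus-based dispatch algorithms drive toward is the equal-incremental-cost condition: every unit runs at the same marginal cost λ. The sketch below solves that condition centrally by bisection (an unconstrained, illustrative baseline, not the paper's distributed leader-following or push-pull algorithm).

```python
def equal_incremental_cost_dispatch(a, b, demand, iters=100):
    """Economic dispatch for quadratic costs C_i(p) = a_i p^2 + b_i p.

    At the optimum every unit satisfies 2 a_i p_i + b_i = lambda; we find
    lambda by bisection on total generation. Generator limits are omitted
    for simplicity, so this is a sketch of the optimality condition only.
    """
    lo = min(b)
    hi = max(2 * ai * demand + bi for ai, bi in zip(a, b))
    for _ in range(iters):
        lam = 0.5 * (lo + hi)
        total = sum((lam - bi) / (2 * ai) for ai, bi in zip(a, b))
        if total < demand:
            lo = lam
        else:
            hi = lam
    return [(lam - bi) / (2 * ai) for ai, bi in zip(a, b)]
```

The distributed algorithms in the paper reach the same allocation by exchanging incremental-cost estimates over the communication digraph instead of using a central coordinator.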
The word "spatial" fundamentally relates to human existence, evolution, and activity in terrestrial and even celestial spaces. After reviewing the spatial features of many areas, the paper describes the basics of a high-level model and technology called Spatial Grasp for dealing with large distributed systems, which can provide spatial vision, awareness, management, control, and even consciousness. The technology description includes its key Spatial Grasp Language (SGL), the self-evolution of recursive SGL scenarios, and the implementation of an SGL interpreter that converts distributed networked systems into powerful spatial engines. Examples of typical spatial scenarios in SGL include finding the shortest path tree and shortest path between network nodes, collecting relevant information throughout the whole world, eliminating multiple targets with intelligent teams of chasers, and withstanding cyber attacks in distributed networked systems. The paper also compares the Spatial Grasp model with traditional algorithms, arguing that the former is universal for any spatial system, while the latter are just tools for concrete applications.
The distillation process is an important chemical process, and data-driven modelling has the potential to reduce model complexity compared with mechanistic modelling, thus improving the efficiency of process optimization or monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which challenges accurate data-driven modelling. This paper proposes a systematic data-driven modelling framework to solve these problems. First, data-segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which clusters the data into perturbed and steady-state intervals for steady-state data extraction. Second, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables and remove redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, yielding a new ensemble learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
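The core of the KMDI idea, clustering on data-segment variance to separate perturbed from steady-state intervals, can be sketched as a 1-D 2-means on per-window variances. Window size and the fixed-window segmentation are illustrative simplifications of the paper's method.

```python
def steady_state_windows(series, window, iters=20):
    """Label fixed windows of a signal as steady (True) or perturbed (False).

    Sketch of the KMDI idea: each window's variance is the clustering
    feature, 1-D 2-means splits the variances, and the low-variance
    cluster is called steady state.
    """
    # Per-window variance as the clustering feature.
    feats = []
    for s in range(0, len(series) - window + 1, window):
        w = series[s:s + window]
        m = sum(w) / window
        feats.append(sum((x - m) ** 2 for x in w) / window)
    # 1-D 2-means on the variances, seeded with the extremes.
    c_lo, c_hi = min(feats), max(feats)
    for _ in range(iters):
        lo = [f for f in feats if abs(f - c_lo) <= abs(f - c_hi)]
        hi = [f for f in feats if abs(f - c_lo) > abs(f - c_hi)]
        c_lo = sum(lo) / len(lo) if lo else c_lo
        c_hi = sum(hi) / len(hi) if hi else c_hi
    return [abs(f - c_lo) <= abs(f - c_hi) for f in feats]
```

The windows labeled steady would then feed the MIC feature-selection and XGBoost-AdaBoost-ET modelling stages described above.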
Quantum computing offers unprecedented computational power, enabling simultaneous computations beyond those of traditional computers. Quantum computers differ significantly from classical computers, necessitating a distinct approach to algorithm design that involves taming quantum mechanical phenomena. This paper extends the numbering of computable programs to the quantum computing context. Numbering computable programs is a theoretical computer science concept that assigns unique numbers to individual programs or algorithms. Common methods include Gödel numbering, which encodes programs as strings of symbols or characters and is often used in formal systems and mathematical logic. Based on the proposed numbering approach, this paper presents a mechanism to explore the set of possible quantum algorithms. The approach is able to construct useful circuits such as the Quantum Key Distribution BB84 protocol, which enables a sender and receiver to establish a secure cryptographic key via a quantum channel. The proposed approach facilitates the process of exploring and constructing quantum algorithms.
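One simple way to number circuits over a finite gate alphabet, in the Gödel-numbering spirit the abstract invokes, is bijective base-k numeration: every natural number names exactly one gate sequence and vice versa, so enumerating numbers enumerates circuits. The gate alphabet below is an illustrative assumption, not the paper's encoding.

```python
GATES = ["H", "X", "Z", "CNOT"]  # illustrative finite gate alphabet

def number_to_circuit(n, gates=GATES):
    """Decode a natural number into a gate sequence (bijective base-k)."""
    k = len(gates)
    seq = []
    while n > 0:
        n, r = divmod(n - 1, k)
        seq.append(gates[r])
    return seq[::-1]

def circuit_to_number(seq, gates=GATES):
    """Inverse mapping: gate sequence back to its unique number."""
    k = len(gates)
    n = 0
    for g in seq:
        n = n * k + gates.index(g) + 1
    return n
```

Iterating n = 0, 1, 2, ... then visits every finite circuit exactly once, which is the exploration mechanism the numbering makes possible.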
This paper addresses the problem of model-free dual-arm space robot maneuvering after non-cooperative target capture under high control-quality requirements. The explicit system model is unavailable, and the maneuvering mission is disturbed by measurement noise and adversarial target behavior. To address these problems, a model-free Combined Adaptive-length Data-driven Predictive Controller (CADPC) is proposed, consisting of a separated subsystem identification method and a combined predictive control strategy. The subsystem identification method uses an adaptive data length, reducing sensitivity to undetermined measurement noise and disturbances. Based on the subsystem identification, the combined predictive controller is established, reducing computing resources. The stability of the CADPC is rigorously proven using the Input-to-State Stability (ISS) theorem and the small-gain theorem. Simulations demonstrate that CADPC effectively handles model-free space robot post-capture operation in the presence of significant disturbances, state measurement noise, and control input errors, achieving improved steady-state accuracy, reduced steady-state control consumption, and minimized control input chattering.
The advent of microgrids in modern energy systems heralds a promising era of resilience, sustainability, and efficiency. Within the realm of grid-tied microgrids, the selection of an optimal optimization algorithm is critical for effective energy management, particularly in economic dispatching. This study compares the performance of Particle Swarm Optimization (PSO) and Genetic Algorithms (GA) in microgrid energy management systems, implemented using MATLAB tools. Through a comprehensive review of the literature and simulations conducted in MATLAB, the study analyzes performance metrics, convergence speed, and the overall efficacy of GA and PSO, with a focus on economic dispatching tasks. Notably, a significant distinction emerges between the cost curves generated by the two algorithms for microgrid operation, with the PSO algorithm consistently resulting in lower costs due to its effective economic dispatching capabilities. Specifically, the PSO approach could potentially lead to substantial savings on the power bill, amounting to approximately $15.30 in this evaluation. The findings provide insights into the strengths and limitations of each algorithm within the complex dynamics of grid-tied microgrids, thereby assisting stakeholders and researchers in arriving at informed decisions. This study contributes to the discourse on sustainable energy management by offering actionable guidance for the advancement of grid-tied microgrid technologies through MATLAB-implemented optimization algorithms.
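The PSO variant typically used in such comparisons is the inertia-weight form: each particle's velocity blends its momentum with pulls toward its personal best and the swarm's global best. The sketch below uses common textbook coefficients, not the study's MATLAB settings, and minimizes an arbitrary cost function such as a dispatch cost.

```python
import random

def pso_minimize(cost, dim, bounds, swarm=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal inertia-weight PSO with cognitive (personal-best) and
    social (global-best) terms; positions are clamped to `bounds`."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In a dispatch setting, `cost` would be the microgrid operating-cost function of the generator set-points, with `bounds` reflecting generation limits.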
This paper presents an Eulerian-Lagrangian algorithm for direct numerical simulation (DNS) of particle-laden flows. The algorithm is applicable to simulations of dilute suspensions of small inertial particles in turbulent carrier flow. The Eulerian framework numerically resolves the turbulent carrier flow using a parallelized, finite-volume DNS solver on a staggered Cartesian grid. Particles are tracked using a point-particle method with a Lagrangian particle tracking (LPT) algorithm. The proposed Eulerian-Lagrangian algorithm is validated on an inertial particle-laden turbulent channel flow for different Stokes numbers. The particle concentration profiles and higher-order statistics of the carrier and dispersed phases agree well with benchmark results. We investigated the effect of the fluid velocity interpolation and numerical integration schemes of the particle tracking algorithm on particle dispersion statistics. The suitability of fluid velocity interpolation schemes for predicting particle dispersion statistics is discussed in the framework of the particle tracking algorithm coupled to the finite-volume solver. In addition, we present the parallelization strategies implemented in the algorithm and evaluate their parallel performance.
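The two ingredients the abstract studies, interpolation of the Eulerian field to the particle position and time integration of the particle equations, can be shown in a minimal 1-D sketch with linear interpolation, explicit Euler, and a Stokes-drag point-particle model. The specific schemes and the 1-D setting are illustrative assumptions; DNS codes use higher-order variants in 3-D.

```python
def advance_particle(xp, up, grid_u, dx, dt, tau):
    """One explicit-Euler step of a point particle in a 1-D fluid field.

    Drag model: du_p/dt = (u_f(x_p) - u_p) / tau, with u_f linearly
    interpolated from the Eulerian grid to the particle position.
    Assumes xp lies in the interior of the grid.
    """
    # Linear interpolation of the Eulerian field to the particle position.
    i = int(xp / dx)
    frac = xp / dx - i
    uf = (1.0 - frac) * grid_u[i] + frac * grid_u[i + 1]
    # Explicit Euler update of particle velocity, then position.
    up_new = up + dt * (uf - up) / tau
    xp_new = xp + dt * up_new
    return xp_new, up_new
```

The choice of interpolation stencil and integrator controls the accuracy of dispersion statistics, which is precisely the sensitivity the paper investigates.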
Funding: Supported by the Guizhou Provincial Basic Research Program (Natural Science) (No. ZK[2022]020).
文摘The Steiner k-eccentricity of a vertex is the maximum Steiner distance over all k-sets each of which contains the given vertex,where the Steiner distance of a vertex set is the size of a minimum Steiner tree on this set.Since the minimum Steiner tree problem is well-known NP-hard,the Steiner k-eccentricity is not so easy to compute.This paper attempts to efficiently solve this problem on block graphs and general graphs with limited cycles.A block graph is a graph in which each block is a clique,and is also called a clique-tree.On block graphs,we propose an O(k(n+m))-time algorithm to compute the Steiner k-eccentricity of a vertex where n and m are respectively the order and size of a block graph.On general graphs with limited cycles,we take the cyclomatic numberν(G)as a parameter which is the minimum number of edges of G whose removal makes G acyclic,and devise an O(n^(ν(G)+1)(n(G)+m(G)+k))-time algorithm.
Abstract: Optimization is the key to efficient utilization of resources in structural design. Given the complex nature of truss systems, this study presents a metaheuristic modelling method that minimises structural weight under stress and frequency constraints. Two new algorithms, the Red Kite Optimization Algorithm (ROA) and the Secretary Bird Optimization Algorithm (SBOA), are applied to five benchmark trusses with 10, 18, 37, 72, and 200 bars. Both algorithms are evaluated against benchmarks in the literature. The results indicate that SBOA consistently reaches lighter optimal designs, with structural weight reductions ranging from 0.02% to 0.15% compared with ROA, and up to 6%–8% compared with conventional algorithms. In addition, SBOA achieves 15%–20% faster convergence and a 10%–18% reduction in computational time with a smaller standard deviation over independent runs, which demonstrates its robustness and reliability. The adaptive exploration mechanism of SBOA, especially its Lévy flight-based search strategy, markedly improves optimization performance for both low- and high-dimensional trusses. The research promotes bio-inspired optimization techniques by demonstrating the viability of SBOA as a reliable model for large-scale structural design, with significant enhancements in performance and convergence behavior.
Funding: Supported by the Science and Technology Fund of TNU-Thai Nguyen University of Science.
Abstract: We study the split common solution problem with multiple output sets for monotone operator equations in Hilbert spaces. To solve this problem, we propose two new parallel algorithms. We establish a weak convergence theorem for the first and a strong convergence theorem for the second.
Funding: Funded by the European Commission through the Ruralities project (grant agreement no. 101060876).
Abstract: In this paper, we propose a new privacy-aware transmission scheduling algorithm for 6G ad hoc networks. This system enables end nodes to select the optimum time and scheme to transmit private data safely. In 6G dynamic heterogeneous infrastructures, unstable links and non-uniform hardware capabilities create critical security and privacy issues. Traditional protocols are often too computationally heavy to allow 6G services to achieve their expected Quality of Service (QoS). Because the transport network is built of ad hoc nodes, there is no guarantee about their trustworthiness or behavior, and transversal functionalities are delegated to the extreme nodes. However, while security can be guaranteed in extreme-to-extreme solutions, privacy cannot, as all intermediate nodes still have to handle the data packets they are transporting. Moreover, traditional schemes for private anonymous ad hoc communications are vulnerable to modern intelligent attacks based on learning models. The proposed scheme fills this gap. Findings show that, when the proposed technique is used, the probability of a successful intelligent attack is reduced by up to 65% compared with ad hoc networks that have no privacy-protection strategy, while congestion probability remains below 0.001%, as required by 6G services.
Funding: Supported by the National Key R&D Program of China (Grant No. 2020YFB1711100).
Abstract: To address abnormal energy consumption fluctuations in the converter steelmaking process, an integrated diagnostic method combining the gray wolf optimization (GWO) algorithm, support vector machine (SVM), and K-means clustering was proposed. Eight input parameters, derived from molten iron conditions and external factors, were selected as feature variables. A GWO-SVM model was developed to accurately predict the energy consumption of individual heats. Based on the prediction results, the mean absolute percentage error and maximum relative error of the test set were employed as criteria to identify heats with abnormal energy usage. For these heats, the K-means clustering algorithm was used to determine benchmark values of influencing factors from similar steel grades, enabling root-cause diagnosis of excessive energy consumption. The proposed method was applied to real production data from a converter in a steel plant. The analysis reveals that heat sample No. 44 exhibits abnormal energy consumption because gas recovery is 1430.28 kg of standard coal below the benchmark level; a secondary contributing factor is a steam recovery shortfall of 237.99 kg of standard coal. This integrated approach offers a scientifically grounded tool for energy management in converter operations and provides valuable guidance for optimizing process parameters and enhancing energy efficiency.
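The abnormality criterion described above (mean absolute percentage error and per-heat relative error against model predictions) can be sketched as follows; the threshold value and function names are illustrative assumptions, not the paper's exact settings.

```python
def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

def flag_abnormal(y_true, y_pred, threshold_pct):
    """Flag heats whose relative prediction error exceeds the threshold,
    i.e. heats whose actual energy use deviates from the model's expectation."""
    return [i for i, (t, p) in enumerate(zip(y_true, y_pred))
            if abs((t - p) / t) * 100.0 > threshold_pct]
```

For example, with actual consumptions [100, 200, 400] and predictions [110, 190, 400], the MAPE is 5% and only the first heat exceeds an 8% threshold.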
Funding: Supported by a School-level Project of Shaoyang Industry Polytechnic College (SKY24A06), the Science and Technology Plan (Special Fund Subsidy) of Shaoyang City (2024PT4070), and a General Research Project of the Hunan Provincial Department of Education in 2025 (25C1457).
Abstract: In the context of rural revitalization and the development of smart agriculture, image classification technology based on deep learning has emerged as a crucial tool for digital monitoring and intelligent prevention and control of agricultural diseases. This paper provides a systematic review of the evolutionary development of algorithms within this field. Addressing challenges such as domain drift and limited global awareness in classical convolutional neural networks (CNNs) applied to complex agricultural environments, the paper focuses on the latest advancements in vision transformers (ViT) and their hybrid architectures to enhance cross-domain robustness and fine-grained recognition capabilities. In response to the challenges posed by scarce long-tail data and limited edge computing power in real-world scenarios, the paper explores solutions related to few-shot learning and ultra-lightweight network deployment. Finally, a forward-looking analysis is presented on the application paradigms of multimodal feature fusion, vision-based large models, and explainable artificial intelligence (AI) within smart plant protection. This analysis aims to offer theoretical insights for the development of efficient and transparent intelligent diagnostic systems for agricultural diseases, thereby supporting the advancement of digital agriculture and the construction of a robust agricultural nation.
Funding: National Key Research and Development Program of China, No. 2023YFC3006704; National Natural Science Foundation of China, No. 42171047; CAS-CSIRO Partnership Joint Project of 2024, No. 177GJHZ2023097MI.
Abstract: Accurate prediction of flood events is important for flood control and risk management. Machine learning techniques have contributed greatly to advances in flood prediction, and existing studies have mainly focused on predicting flood resource variables using single or hybrid machine learning techniques. However, class-based flood prediction has rarely been investigated, although it can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies. This study proposed an approach for predicting flood regime metrics and event classes that couples machine learning algorithms with clustering-deduced membership degrees. Five algorithms were adopted for this exploration. Results showed that the class membership degrees accurately determined event classes, with class hit rates up to 100%, compared with the four classes clustered from nine regime metrics. The nonlinear algorithms (Random Forest and least squares-Support Vector Machine) outperformed the linear techniques (Multiple Linear Regression and Stepwise Regression) in predicting flood regime metrics. The proposed approach predicted flood event classes well, with average class hit rates of 66.0%–85.4% and 47.2%–76.0% in the calibration and validation periods, respectively, particularly for the slow and late flood events. The predictive capability of the proposed approach for flood regime metrics and classes was considerably stronger than that of the hydrological modeling approach.
Funding: Funded by United Arab Emirates University (UAEU) under UAEU-AUA grant number G00004577 (12N145), with a corresponding grant at Universiti Malaya (UM) under grant number IF019-2024.
Abstract: Concrete-filled steel tubes (CFST) are widely utilized in civil engineering due to their superior load-bearing capacity, ductility, and seismic resistance. However, existing design codes, such as AISC and Eurocode 4, tend to be excessively conservative because they fail to account for the composite action between the steel tube and the concrete core. To address this limitation, this study proposes a hybrid model that integrates XGBoost with the Pied Kingfisher Optimizer (PKO), a nature-inspired algorithm, to enhance the accuracy of shear strength prediction for CFST columns. Additionally, quantile regression is employed to construct prediction intervals for the ultimate shear force, while the Asymmetric Squared Error Loss (ASEL) function is incorporated to mitigate overestimation errors. The computational results demonstrate that the PKO-XGBoost model delivers superior predictive accuracy, achieving a Mean Absolute Percentage Error (MAPE) of 4.431% and an R² of 0.9925 on the test set. Furthermore, the ASEL-PKO-XGBoost model substantially reduces overestimation errors to 28.26%, with negligible impact on predictive performance. In addition, based on a Genetic Algorithm (GA) and existing equation models, a strength equation model is developed that achieves markedly higher accuracy than existing models (R² = 0.934). Lastly, web-based Graphical User Interfaces (GUIs) were developed to enable real-time prediction.
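The abstract names an Asymmetric Squared Error Loss used to discourage overestimation, which is unsafe in shear-strength design. One common form of such a loss (the weights below are illustrative assumptions, not the paper's values) is:

```python
def asel(y_true, y_pred, w_over=2.0, w_under=1.0):
    """Asymmetric squared-error loss: overestimates (y_pred > y_true, unsafe
    for shear-strength design) are weighted more heavily than underestimates."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        e = p - t
        total += (w_over if e > 0 else w_under) * e * e
    return total / len(y_true)
```

With the default weights, an overestimate of 1 unit costs twice as much as an underestimate of the same size, so a trained model is biased toward the safe side.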
Funding: Projects (52374138, 51764013) supported by the National Natural Science Foundation of China; Project (20204BCJ22005) supported by the Training Plan for Academic and Technical Leaders of Major Disciplines of Jiangxi Province, China; Project (2019M652277) supported by the China Postdoctoral Science Foundation; Project (20192ACBL21014) supported by the Natural Science Youth Foundation Key Projects of Jiangxi Province, China.
Abstract: Cemented tailings backfill (CTB) with initial defects is prone to destabilization damage under various unfavorable factors during mining. To investigate the influence of such defects on the stability of underground mining engineering, this paper simulates different degrees of initial defects inside CTB by adding different contents of an air-entraining agent (AEA), investigates the acoustic emission RA/AF characteristic values of CTB with different AEA contents under uniaxial compression, and adopts several denoising algorithms (e.g., moving-average smoothing, median filtering, and outlier detection) to improve data accuracy. The variance and autocorrelation coefficients of the RA/AF parameters were analyzed in conjunction with critical slowing down (CSD) theory. The results show that the acoustic emission RA/AF values can be used to characterize the progressive damage evolution of CTB. The denoising algorithms processed the AE signals to reduce the effects of extraneous noise and anomalous spikes. Changes in the variance curves provide clear precursor information, while abrupt changes in the autocorrelation coefficient can serve as an auxiliary warning signal. The dramatic increase in the variance and autocorrelation coefficient curves during the compression-tightening stage, which is influenced by the initial defects, can lead to false warnings. As the initial defects of the CTB increase, the instability precursor time and instability time are prolonged, the peak stress decreases, and the time difference between the precursor and the instability damage becomes smaller. The results provide a new method for real-time monitoring and early warning of CTB instability damage.
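The denoising and critical-slowing-down indicators referred to above can be sketched in a few lines: moving-average smoothing of a RA/AF series, followed by sliding-window variance and lag-1 autocorrelation, whose rise is the classic CSD precursor. Window sizes are illustrative assumptions.

```python
def moving_average(x, w):
    """Simple moving-average smoothing with window w."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def variance(x):
    """Population variance of a sequence."""
    m = sum(x) / len(x)
    return sum((v - m) ** 2 for v in x) / len(x)

def lag1_autocorr(x):
    """Lag-1 autocorrelation; rising values are a classic CSD precursor."""
    m = sum(x) / len(x)
    num = sum((x[i] - m) * (x[i + 1] - m) for i in range(len(x) - 1))
    den = sum((v - m) ** 2 for v in x)
    return num / den

def csd_indicators(x, window):
    """Sliding-window variance and lag-1 autocorrelation of a RA/AF series."""
    return ([variance(x[i:i + window]) for i in range(len(x) - window + 1)],
            [lag1_autocorr(x[i:i + window]) for i in range(len(x) - window + 1)])
```

In a monitoring setting one would watch both indicator series and raise a warning when they increase sharply together, while the variance curve alone gives the primary precursor.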
Abstract: Optimizing convolutional neural networks (CNNs) for IoT attack detection remains a critical yet challenging task due to the need to balance multiple performance metrics beyond mere accuracy. This study proposes a unified and flexible optimization framework that leverages metaheuristic algorithms to automatically optimize CNN configurations for IoT attack detection. Unlike conventional single-objective approaches, the proposed method formulates a global multi-objective fitness function that integrates accuracy, precision, recall, and model size (a speed/model-complexity penalty) with adjustable weights. This design enables both single-objective and weighted-sum multi-objective optimization, allowing adaptive selection of optimal CNN configurations for diverse deployment requirements. Two representative metaheuristic algorithms, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are employed to optimize CNN hyperparameters and structure. At each generation/iteration, the best configuration is selected as the most balanced solution across optimization objectives, i.e., the one achieving the maximum value of the global objective function. Experimental validation on two benchmark datasets, Edge-IIoT and CIC-IoT2023, demonstrates that the proposed GA- and PSO-based models significantly enhance detection accuracy (94.8%–98.3%) and generalization compared with manually tuned CNN configurations, while maintaining compact architectures. The results confirm that the multi-objective framework effectively balances predictive performance and computational efficiency. This work establishes a generalizable and adaptive optimization strategy for deep learning-based IoT attack detection and provides a foundation for future hybrid metaheuristic extensions in broader IoT security applications.
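The weighted-sum global fitness described above can be sketched as follows; the metric keys, weight values, and size normalization are illustrative assumptions, not the study's exact formulation.

```python
def fitness(metrics, weights):
    """Weighted-sum multi-objective fitness for one candidate CNN configuration.
    Accuracy, precision, and recall are rewarded; normalized model size enters
    as a complexity penalty."""
    return (weights["acc"] * metrics["acc"]
            + weights["prec"] * metrics["prec"]
            + weights["rec"] * metrics["rec"]
            - weights["size"] * metrics["size_norm"])

def select_best(population_metrics, weights):
    """Index of the configuration maximizing the global objective, as done
    once per GA generation / PSO iteration."""
    return max(range(len(population_metrics)),
               key=lambda i: fitness(population_metrics[i], weights))
```

A slightly less accurate but much smaller model can win under this objective, which is exactly the trade-off the framework is meant to expose.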
基金supported by the National Natural Science Foundation of China (62173333, 12271522)Beijing Natural Science Foundation (Z210002)the Research Fund of Renmin University of China (2021030187)。
Abstract: For unachievable tracking problems, where the system output cannot precisely track a given reference, achieving the best possible approximation of the reference trajectory becomes the objective. This study investigates solutions using the P-type learning control scheme. Initially, we demonstrate the necessity of gradient information for achieving the best approximation. Subsequently, we propose an input-output-driven learning gain design to handle the imprecise gradients of a class of uncertain systems. However, it is discovered that the desired performance may not be attainable when faced with incomplete information. To address this issue, an extended iterative learning control scheme is introduced. In this scheme, the tracking errors are modified through output data sampling, which incorporates low memory footprints and offers flexibility in learning gain design. The input sequence is shown to converge towards the desired input, resulting in an output that is closest to the given reference in the least-squares sense. Numerical simulations are provided to validate the theoretical findings.
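The basic mechanics of a P-type learning update can be illustrated on a toy scalar system; the system parameters and gain below are illustrative assumptions, and this sketch is the plain P-type scheme, not the paper's extended one.

```python
def p_type_ilc(ref, a=0.5, b=1.0, gain=0.8, iters=50):
    """P-type iterative learning control on the scalar system
        x(t+1) = a*x(t) + b*u(t),  y(t) = x(t+1),
    with trial-to-trial update u_{k+1}(t) = u_k(t) + gain * e_k(t).
    Converges when |1 - gain*b| < 1 (here 0.2)."""
    N = len(ref)
    u = [0.0] * N
    e = list(ref)
    for _ in range(iters):
        # run one trial from zero initial state
        x, y = 0.0, []
        for t in range(N):
            x = a * x + b * u[t]
            y.append(x)
        e = [ref[t] - y[t] for t in range(N)]
        # P-type learning update
        u = [u[t] + gain * e[t] for t in range(N)]
    return u, e
```

Here the reference is achievable, so the error contracts geometrically across trials; the paper's setting is precisely the case where no input can drive this error to zero.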
Abstract: Open caissons are widely used in foundation engineering because of their load-bearing efficiency and adaptability to diverse soil conditions. However, accurately predicting their undrained bearing capacity in layered soils remains a complex challenge. This study presents a novel application of five ensemble machine learning (ML) algorithms, namely random forest (RF), gradient boosting machine (GBM), extreme gradient boosting (XGBoost), adaptive boosting (AdaBoost), and categorical boosting (CatBoost), to predict the undrained bearing capacity factor (Nc) of circular open caissons embedded in two-layered clay on the basis of results from finite element limit analysis (FELA). The input dataset consists of 1188 numerical simulations using the Tresca failure criterion, with varying geometrical and soil parameters. The FELA was performed via OptumG2 software with adaptive meshing techniques and verified against existing benchmark studies. The ML models were trained on 70% of the dataset and tested on the remaining 30%. Their performance was evaluated using six statistical metrics: coefficient of determination (R²), mean absolute error (MAE), root mean squared error (RMSE), index of scatter (IOS), RMSE-to-standard-deviation ratio (RSR), and variance explained factor (VAF). The results indicate that all the models achieved high accuracy, with R² values exceeding 97.6% and RMSE values below 0.02. Among them, AdaBoost and CatBoost consistently outperformed the other methods across both the training and testing datasets, demonstrating superior generalizability and robustness. The proposed ML framework offers an efficient, accurate, and data-driven alternative to traditional methods for estimating caisson capacity in stratified soils. This approach can reduce computational costs while improving reliability in the early stages of foundation design.
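Several of the metrics listed above can be computed directly; the sketch below assumes population-variance conventions for RSR and VAF, which vary slightly across papers.

```python
def regression_metrics(y_true, y_pred):
    """R², RMSE, RSR (RMSE / std of observations), and VAF (%), of the kind
    used to score the ensemble models."""
    n = len(y_true)
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    rmse = (ss_res / n) ** 0.5
    std_t = (ss_tot / n) ** 0.5
    err = [t - p for t, p in zip(y_true, y_pred)]
    mean_e = sum(err) / n
    var_e = sum((e - mean_e) ** 2 for e in err) / n
    return {
        "R2": 1 - ss_res / ss_tot,        # coefficient of determination
        "RMSE": rmse,                      # root mean squared error
        "RSR": rmse / std_t,               # RMSE-to-standard-deviation ratio
        "VAF": 100.0 * (1 - var_e / (ss_tot / n)),  # variance explained, %
    }
```

A perfect prediction gives R² = 1, RMSE = RSR = 0, and VAF = 100%, which is the direction in which the reported scores (R² > 97.6%, RMSE < 0.02) should be read.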
Funding: Supported by the National Natural Science Foundation of China (Nos. 62173281, 52377217, U23A20651); the Sichuan Science and Technology Program (Nos. 24NSFSC0024, 23ZDYF0734, 23NSFSC1436); the Dazhou City School Cooperation Project (No. DZXQHZ006); the Technopole Talent Summit Project (No. KJCRCFH08); and Robert Gordon University.
Abstract: Lithium-ion batteries are the preferred green energy storage method and are equipped with intelligent battery management systems (BMSs) that manage the batteries efficiently. This not only ensures the safety performance of the batteries but also significantly improves their efficiency and reduces their damage rate. Throughout their whole life cycle, lithium-ion batteries undergo aging and performance degradation due to diverse external environments and irregular degradation of internal materials. This degradation is reflected in the state of health (SOH) assessment. Therefore, this review offers the first comprehensive analysis of battery SOH estimation strategies across the entire life cycle over the past five years, highlighting common research focuses rooted in data-driven methods. It delves into dimensions such as dataset integration and preprocessing, health feature parameter extraction, and the construction of SOH estimation models. These approaches unearth hidden insights within data, addressing the inherent tension between computational complexity and estimation accuracy. To enhance support for in-vehicle implementation, cloud computing, and the echelon technologies of battery recycling, remanufacturing, and reuse, as well as to offer insights into these technologies, a segmented management approach will be introduced in the future. This will encompass source-domain data processing, multi-feature factor reconfiguration, hybrid drive modeling, parameter correction mechanisms, and full-time health management. Based on the best SOH estimation outcomes, health strategies tailored to different stages can be devised, leading to the establishment of a comprehensive SOH assessment framework. This will mitigate cross-domain distribution disparities and facilitate adaptation to a broader array of dynamic operation protocols. This article reviews the current research landscape from four perspectives and discusses the challenges that lie ahead. Researchers and practitioners can gain a comprehensive understanding of battery SOH estimation methods, offering valuable insights for the development of advanced battery management systems and embedded application research.
Funding: Supported by the National Natural Science Foundation of China (62103203).
Abstract: The economic dispatch problem (EDP) of microgrids operating in both grid-connected and isolated modes within an energy internet framework is addressed in this paper. The multi-agent leader-following consensus algorithm is employed to address the EDP of microgrids in grid-connected mode, while the push-pull algorithm with a fixed step size is introduced for the isolated mode. The proposed algorithm for the isolated mode is proven to converge to the optimum when the interaction digraph of the microgrids is strongly connected. A unified algorithmic framework is proposed to handle the two operating modes of microgrids simultaneously, enabling the algorithm to achieve optimal power allocation and maintain the balance between power supply and demand in any mode and under any mode switching. Due to the push-pull structure of the algorithm and the use of a fixed step size, the proposed algorithm can better handle unbalanced graphs, and the convergence speed is improved. It is shown that when the transmission topology is strongly connected and there is bi-directional communication between the energy router and its neighbors, the proposed algorithm in composite mode achieves economic dispatch even with arbitrary mode switching. Finally, we demonstrate the effectiveness and superiority of the algorithm through numerical simulations.
Abstract: The word “spatial” fundamentally relates to human existence, evolution, and activity in terrestrial and even celestial spaces. After reviewing the spatial features of many areas, the paper describes the basics of a high-level model and technology called Spatial Grasp for dealing with large distributed systems, which can provide spatial vision, awareness, management, control, and even consciousness. The technology description includes its key Spatial Grasp Language (SGL), the self-evolution of recursive SGL scenarios, and the implementation of an SGL interpreter that converts distributed networked systems into powerful spatial engines. Examples of typical spatial scenarios in SGL include finding the shortest path tree and the shortest path between network nodes, collecting proper information throughout the whole world, the elimination of multiple targets by intelligent teams of chasers, and withstanding cyber attacks in distributed networked systems. The paper also compares the Spatial Grasp model with traditional algorithms, confirming the universality of the former for any spatial systems, while the latter are just tools for concrete applications.
Funding: Supported by the National Key Research and Development Program of China (2023YFB3307801); the National Natural Science Foundation of China (62394343, 62373155, 62073142); the Major Science and Technology Project of Xinjiang (No. 2022A01006-4); the Programme of Introducing Talents of Discipline to Universities (the 111 Project) under Grant B17017; the Fundamental Research Funds for the Central Universities, Science Foundation of China University of Petroleum, Beijing (No. 2462024YJRC011); and the Open Research Project of the State Key Laboratory of Industrial Control Technology, China (Grant No. ICT2024B70).
Abstract: The distillation process is an important chemical process, and the application of data-driven modelling has the potential to reduce model complexity compared with mechanistic modelling, thus improving the efficiency of process optimization and monitoring studies. However, the distillation process is highly nonlinear and has multiple uncertainty perturbation intervals, which makes accurate data-driven modelling of distillation processes challenging. This paper proposes a systematic data-driven modelling framework to solve these problems. First, data-segment variance was introduced into the K-means algorithm to form K-means data interval (KMDI) clustering, which clusters the data into perturbed and steady-state intervals for steady-state data extraction. Second, the maximal information coefficient (MIC) was employed to calculate the nonlinear correlation between variables and remove redundant features. Finally, extreme gradient boosting (XGBoost) was integrated as the base learner into adaptive boosting (AdaBoost), with an error threshold (ET) set to improve the weight-update strategy, to construct a new ensemble learning algorithm, XGBoost-AdaBoost-ET. The superiority of the proposed framework is verified by applying it to a real industrial propylene distillation process.
Abstract: Quantum computing offers unprecedented computational power, enabling simultaneous computations beyond traditional computers. Quantum computers differ significantly from classical computers, necessitating a distinct approach to algorithm design, which involves taming quantum mechanical phenomena. This paper extends the numbering of computable programs to the quantum computing context. Numbering computable programs is a theoretical computer science concept that assigns unique numbers to individual programs or algorithms. Common methods include Gödel numbering, which encodes programs as strings of symbols or characters and is often used in formal systems and mathematical logic. Based on the proposed numbering approach, this paper presents a mechanism to explore the set of possible quantum algorithms. The proposed approach is able to construct useful circuits such as the BB84 Quantum Key Distribution protocol, which enables a sender and receiver to establish a secure cryptographic key via a quantum channel. The proposed approach facilitates the process of exploring and constructing quantum algorithms.
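A Gödel-style numbering of circuits can be sketched as a bijection between natural numbers and gate sequences over a finite alphabet (bijective base-g encoding). The gate set below is an illustrative assumption, not the paper's encoding.

```python
GATES = ["H0", "H1", "X0", "CNOT01", "MEASURE"]  # illustrative gate alphabet

def number_to_circuit(n, gates=GATES):
    """Decode a natural number into a gate sequence via bijective base-g
    positional encoding, so that every n >= 0 maps to a unique sequence
    (0 maps to the empty circuit)."""
    g = len(gates)
    seq = []
    while n > 0:
        n, r = divmod(n - 1, g)  # bijective base-g digit extraction
        seq.append(gates[r])
    return seq

def circuit_to_number(seq, gates=GATES):
    """Inverse mapping: a gate sequence back to its unique number."""
    g = len(gates)
    n = 0
    for gate in reversed(seq):
        n = n * g + (gates.index(gate) + 1)
    return n
```

Enumerating n = 0, 1, 2, ... then walks through every finite circuit over the alphabet exactly once, which is the kind of systematic exploration of the algorithm space the paper describes.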
Funding: Supported by the National Natural Science Foundation of China (No. 12372045) and the National Key Research and Development Program of China (Nos. 2023YFC2205900, 2023YFC2205901).
Abstract: This paper solves the problem of model-free dual-arm space robot maneuvering after non-cooperative target capture under high control-quality requirements. The explicit system model is unavailable, and the maneuvering mission is disturbed by measurement noise and the target's adversarial behavior. To address these problems, a model-free Combined Adaptive-length Data-driven Predictive Controller (CADPC) is proposed. It consists of a separated subsystem identification method and a combined predictive control strategy. The subsystem identification method uses an adaptive data length, thereby reducing sensitivity to undetermined measurement noises and disturbances. Based on the subsystem identification, the combined predictive controller is established, reducing the computational load. The stability of the CADPC is rigorously proven using the Input-to-State Stability (ISS) theorem and the small-gain theorem. Simulations demonstrate that CADPC effectively handles model-free space robot post-capture operation in the presence of significant disturbances, state measurement noise, and control input errors. It achieves improved steady-state accuracy, reduced steady-state control consumption, and minimized control input chattering.
Abstract: The advent of microgrids in modern energy systems heralds a promising era of resilience, sustainability, and efficiency. Within the realm of grid-tied microgrids, the selection of an optimal optimization algorithm is critical for effective energy management, particularly in economic dispatching. This study compares the performance of Particle Swarm Optimization (PSO) and Genetic Algorithms (GA) in microgrid energy management systems, implemented using MATLAB tools. Through a comprehensive review of the literature and simulations conducted in MATLAB, the study analyzes performance metrics, convergence speed, and the overall efficacy of GA and PSO, with a focus on economic dispatching tasks. Notably, a significant distinction emerges between the cost curves generated by the two algorithms for microgrid operation, with the PSO algorithm consistently resulting in lower costs due to its effective economic dispatching capabilities. Specifically, the PSO approach could potentially lead to substantial savings on the power bill, amounting to approximately $15.30 in this evaluation. The findings provide insights into the strengths and limitations of each algorithm within the complex dynamics of grid-tied microgrids, thereby assisting stakeholders and researchers in making informed decisions. This study contributes to the discourse on sustainable energy management by offering actionable guidance for the advancement of grid-tied microgrid technologies through MATLAB-implemented optimization algorithms.
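A minimal inertia-weight PSO of the kind compared in the study can be sketched as follows; the parameters are common textbook defaults, not the study's MATLAB settings, and the cost function stands in for a generator cost curve.

```python
import random

def pso(cost, dim, lo, hi, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal inertia-weight particle swarm optimization for a
    box-constrained cost function (minimization)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]               # personal bests
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost
```

In an economic-dispatch setting, `cost` would sum the quadratic fuel-cost curves of the dispatchable units, with a penalty enforcing the supply-demand balance.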
Funding: Supported by the P.G. Senapathy Center for Computing Resources at IIT Madras; funding provided by the Ministry of Education, Government of India; supported by the National Natural Science Foundation of China (Grant Nos. 12388101, 12472224, and 92252104).
Abstract: This paper presents an Eulerian-Lagrangian algorithm for direct numerical simulation (DNS) of particle-laden flows. The algorithm is applicable to simulations of dilute suspensions of small inertial particles in turbulent carrier flow. The Eulerian framework numerically resolves the turbulent carrier flow using a parallelized, finite-volume DNS solver on a staggered Cartesian grid. Particles are tracked using a point-particle method based on a Lagrangian particle tracking (LPT) algorithm. The proposed Eulerian-Lagrangian algorithm is validated using an inertial particle-laden turbulent channel flow for different Stokes-number cases. The particle concentration profiles and higher-order statistics of the carrier and dispersed phases agree well with benchmark results. We investigated the effect of the fluid velocity interpolation and numerical integration schemes of the particle tracking algorithm on particle dispersion statistics. The suitability of fluid velocity interpolation schemes for predicting particle dispersion statistics is discussed in the framework of the particle tracking algorithm coupled to the finite-volume solver. In addition, we present the parallelization strategies implemented in the algorithm and evaluate their parallel performance.
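The interplay of velocity interpolation and time integration in point-particle tracking can be illustrated in one dimension: linear interpolation of the Eulerian grid velocity at the particle position, followed by an explicit Euler step of the Stokes-drag equation. This is a deliberately simple sketch; the choice of higher-order interpolation and integration schemes on the 3-D staggered grid is precisely what the paper investigates.

```python
def interp_velocity(u_grid, dx, x):
    """Linearly interpolate the fluid velocity at particle position x from a
    uniform 1-D grid with u_grid[i] located at x = i * dx."""
    i = int(x // dx)
    i = max(0, min(i, len(u_grid) - 2))  # clamp to a valid cell
    s = (x - i * dx) / dx                # fractional position inside the cell
    return (1 - s) * u_grid[i] + s * u_grid[i + 1]

def advance_particle(x, v, u_grid, dx, tau_p, dt):
    """One explicit-Euler step of the point-particle equations
        dv/dt = (u_f(x) - v) / tau_p,   dx/dt = v
    with Stokes drag only; tau_p is the particle response time."""
    u_f = interp_velocity(u_grid, dx, x)
    v_new = v + dt * (u_f - v) / tau_p
    x_new = x + dt * v
    return x_new, v_new
```

The ratio of tau_p to the flow time scale is the Stokes number varied in the validation cases: small tau_p makes the particle relax quickly onto the interpolated fluid velocity, while large tau_p lets it decouple from the carrier flow.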