With the rapid development of intelligent navigation technology, efficient and safe path planning for mobile robots has become a core requirement. To address the challenges of complex dynamic environments, this paper proposes an intelligent path planning framework based on grid map modeling. First, an improved Safe and Smooth A* (SSA*) algorithm is employed for global path planning. By incorporating obstacle expansion and corner-point optimization, the proposed SSA* enhances the safety and smoothness of the planned path. Then, a Partitioned Dynamic Window Approach (PDWA) is integrated for local planning, which is triggered when dynamic or sudden static obstacles appear, enabling real-time obstacle avoidance and path adjustment. A unified objective function is constructed that comprehensively considers path length, safety, and smoothness. Multiple simulation experiments are conducted on typical port grid maps. The results demonstrate that the improved SSA* significantly reduces the number of expanded nodes and the computation time in static environments while generating smoother and safer paths. Meanwhile, the PDWA exhibits strong real-time performance and robustness in dynamic scenarios, achieving shorter paths and lower planning times than other graph search algorithms. The proposed method maintains stable performance across maps of different scales and various port scenarios, verifying its practicality and potential for wider application.
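The abstract does not detail the obstacle-expansion step of SSA*. A minimal sketch of the usual grid-map obstacle inflation (adding a safety margin around obstacles before A* search) might look like the following; the grid layout and the one-cell margin are illustrative assumptions, not the paper's exact procedure.

```python
def inflate_obstacles(grid, margin=1):
    """Return a copy of a 0/1 occupancy grid with every obstacle cell
    expanded by `margin` cells in all directions (Chebyshev distance),
    so planned paths keep a safety clearance from obstacle boundaries."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:  # obstacle cell
                for dr in range(-margin, margin + 1):
                    for dc in range(-margin, margin + 1):
                        rr, cc = r + dr, c + dc
                        if 0 <= rr < rows and 0 <= cc < cols:
                            out[rr][cc] = 1
    return out

# A lone obstacle at the center of a 5x5 map grows into a 3x3 block,
# keeping any path found by A* at least one cell away from the original.
grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 1
inflated = inflate_obstacles(grid, margin=1)
```

Running A* on the inflated grid instead of the raw one is what makes the resulting global path "safe" in the sense used above.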
The cemented tailings backfill (CTB) with initial defects is more prone to destabilization damage under the influence of various unfavorable factors during the mining process. To investigate the influence of such defects on the stability of underground mining engineering, this paper simulates the generation of different degrees of initial defects inside the CTB by adding different contents of air-entraining agent (AEA), investigates the acoustic emission RA/AF eigenvalues of CTB with different AEA contents under uniaxial compression, and adopts various denoising algorithms (e.g., moving average smoothing, median filtering, and outlier detection) to improve the accuracy of the data. The variance and autocorrelation coefficients of the RA/AF parameters were analyzed in conjunction with critical slowing down (CSD) theory. The results show that the acoustic emission RA/AF values can be used to characterize the progressive damage evolution of CTB. Denoising the AE signals reduced the effects of extraneous noise and anomalous spikes. Changes in the variance curves provide clear precursor information, while abrupt changes in the autocorrelation coefficient can serve as an auxiliary warning signal for localization. A dramatic increase in the variance and autocorrelation coefficient curves during the compression-tightening stage, influenced by the initial defects, can lead to false warnings. As the initial defects of the CTB increase, its instability precursor time and instability time are prolonged, the peak stress decreases, and the time difference between the precursor and the instability damage becomes smaller. The results provide a new method for real-time monitoring and early warning of CTB instability damage.
Optimizing convolutional neural networks (CNNs) for IoT attack detection remains a critical yet challenging task due to the need to balance multiple performance metrics beyond mere accuracy. This study proposes a unified and flexible optimization framework that leverages metaheuristic algorithms to automatically optimize CNN configurations for IoT attack detection. Unlike conventional single-objective approaches, the proposed method formulates a global multi-objective fitness function that integrates accuracy, precision, recall, and model size (a speed/model-complexity penalty) with adjustable weights. This design enables both single-objective and weighted-sum multi-objective optimization, allowing adaptive selection of optimal CNN configurations for diverse deployment requirements. Two representative metaheuristic algorithms, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are employed to optimize CNN hyperparameters and structure. At each generation/iteration, the best configuration is selected as the most balanced solution across optimization objectives, i.e., the one achieving the maximum value of the global objective function. Experimental validation on two benchmark datasets, Edge-IIoT and CIC-IoT2023, demonstrates that the proposed GA- and PSO-based models significantly enhance detection accuracy (94.8%–98.3%) and generalization compared with manually tuned CNN configurations, while maintaining compact architectures. The results confirm that the multi-objective framework effectively balances predictive performance and computational efficiency. This work establishes a generalizable and adaptive optimization strategy for deep learning-based IoT attack detection and provides a foundation for future hybrid metaheuristic extensions in broader IoT security applications.
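The weighted-sum fitness described above can be sketched as follows. The specific weights, the form of the model-size penalty, and the reference size are illustrative assumptions; the abstract only states that the four terms are combined with adjustable weights.

```python
def global_fitness(accuracy, precision, recall, model_size,
                   weights=(0.4, 0.2, 0.2, 0.2), size_ref=1e6):
    """Weighted-sum objective combining detection quality with a model-size
    penalty. Quality metrics are in [0, 1]; the size term rewards models
    with fewer than `size_ref` parameters. Weights and the penalty form
    are hypothetical, not the paper's exact formulation."""
    w_acc, w_prec, w_rec, w_size = weights
    size_score = max(0.0, 1.0 - model_size / size_ref)  # complexity penalty
    return (w_acc * accuracy + w_prec * precision
            + w_rec * recall + w_size * size_score)

# A compact model outscores a bloated one with identical detection quality,
# which is how the framework steers GA/PSO toward deployable architectures.
small = global_fitness(0.97, 0.96, 0.95, model_size=2e5)
large = global_fitness(0.97, 0.96, 0.95, model_size=9e5)
```

Setting one weight to 1 and the rest to 0 recovers the single-objective mode the abstract mentions.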
In the current era of intelligent technologies, comprehensive and precise regional coverage path planning is critical for tasks such as environmental monitoring, emergency rescue, and agricultural plant protection. Owing to their exceptional flexibility and rapid deployment capabilities, unmanned aerial vehicles (UAVs) have emerged as the ideal platforms for accomplishing these tasks. This study proposes a swarm A*-guided Deep Q-Network (SADQN) algorithm to address the coverage path planning (CPP) problem for UAV swarms in complex environments. First, to overcome the dependency of traditional modeling methods on regular terrain, this study proposes an improved cellular decomposition method for map discretization. Simultaneously, a distributed UAV swarm system architecture is adopted, which, through the integration of multi-scale maps, addresses the issues of redundant operations and flight conflicts in multi-UAV cooperative coverage. Second, the heuristic mechanism of the A* algorithm is combined with full-coverage path planning and incorporated at the initial stage of Deep Q-Network (DQN) training to provide effective guidance in action selection, thereby accelerating convergence. Additionally, a prioritized experience replay mechanism is introduced to further enhance the coverage performance of the algorithm. To evaluate the efficacy of the proposed algorithm, simulation experiments were conducted in several irregular environments and compared with several popular algorithms. Simulation results show that the SADQN algorithm outperforms the other methods, achieving performance comparable to that of the baseline prior algorithm, with an average coverage efficiency exceeding 2.6 and fewer turning maneuvers. In addition, the algorithm demonstrates excellent generalization ability, enabling it to adapt to different environments.
The problem of collision avoidance for non-cooperative targets has received significant attention from researchers in recent years. Non-cooperative targets exhibit uncertain states and unpredictable behaviors, making collision avoidance significantly more challenging than that for space debris. Much existing research focuses on the continuous-thrust model, whereas the impulsive maneuver model is more appropriate for long-duration, long-distance avoidance missions. Additionally, it is important to minimize the impact on the original mission while avoiding non-cooperative targets. Moreover, existing avoidance algorithms are computationally complex and time-consuming, especially given the limited computing capability of the on-board computer, posing challenges for practical engineering applications. To overcome these difficulties, this paper makes the following key contributions: (A) a turn-based (sequential decision-making) limited-area impulsive collision avoidance model considering the time delay of precision orbit determination is established for the first time; (B) a novel Selection Probability Learning Adaptive Search-depth Search Tree (SPL-ASST) algorithm is proposed for non-cooperative target avoidance, which improves decision-making efficiency by introducing an adaptive-search-depth mechanism and a neural network into traditional Monte Carlo Tree Search (MCTS). Numerical simulations confirm the effectiveness and efficiency of the proposed method.
The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for this problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which can lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. Consequently, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP with the objective of minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search by extending, reconstructing, and reinforcing information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability-curve-based acceptance criterion is proposed by combining a cube-root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments verify the effectiveness of the memory mechanism, and the results show that it improves the performance of IGA to a large extent. Furthermore, comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks show that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well-suited for real-world distributed flow shop scheduling.
The liquid cooling system (LCS) of fuel cells is challenged by significant time delays, model uncertainties, pump-fan coupling, and frequent disturbances, leading to overshoot and control oscillations that degrade temperature regulation performance. To address these challenges, we propose a composite control scheme combining fuzzy logic with a variable-gain generalized super-twisting algorithm (VG-GSTA). First, a one-dimensional (1D) fuzzy logic controller (FLC) for the pump ensures stable coolant flow, while a two-dimensional (2D) FLC for the fan regulates the stack temperature near the reference value. The VG-GSTA is then introduced to eliminate steady-state errors, offering resistance to disturbances and minimizing control oscillations. An equilibrium optimizer is used to fine-tune the VG-GSTA parameters. Co-simulation verifies the effectiveness of our method, demonstrating its advantages in disturbance immunity, overshoot suppression, tracking accuracy, and response speed.
Quantum computing offers unprecedented computational power, enabling simultaneous computations beyond traditional computers. Quantum computers differ significantly from classical computers, necessitating a distinct approach to algorithm design, one that involves taming quantum mechanical phenomena. This paper extends the numbering of computable programs to the quantum computing context. Numbering computable programs is a theoretical computer science concept that assigns unique numbers to individual programs or algorithms. A common method is Gödel numbering, which encodes programs as strings of symbols or characters and is often used in formal systems and mathematical logic. Based on the proposed numbering approach, this paper presents a mechanism to explore the set of possible quantum algorithms. The proposed approach is able to construct useful circuits such as the BB84 Quantum Key Distribution protocol, which enables a sender and receiver to establish a secure cryptographic key via a quantum channel. The proposed approach facilitates the process of exploring and constructing quantum algorithms.
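The idea of numbering programs can be illustrated with a toy Gödel-style encoding: treat a circuit as a string over a fixed gate alphabet and map each string to a unique integer via bijective base-k numeration. The gate alphabet below is a hypothetical stand-in for the paper's actual encoding, which is not specified in the abstract.

```python
# Toy Gödel-style numbering: each finite gate sequence maps to a unique
# positive integer, and enumerating n = 1, 2, 3, ... walks through every
# possible sequence exactly once -- the basis for exploring algorithm space.
ALPHABET = ["H", "X", "Z", "CNOT", "MEASURE"]  # illustrative gate set

def encode(program):
    """Map a sequence of gate symbols to a unique positive integer
    (bijective base-k, k = alphabet size)."""
    k, n = len(ALPHABET), 0
    for sym in program:
        n = n * k + ALPHABET.index(sym) + 1  # digits run 1..k, never 0
    return n

def decode(n):
    """Inverse mapping: recover the gate sequence from its number."""
    k, out = len(ALPHABET), []
    while n > 0:
        n, d = divmod(n - 1, k)
        out.append(ALPHABET[d])
    return out[::-1]

prog = ["H", "CNOT", "MEASURE"]
num = encode(prog)
```

Because the mapping is a bijection between positive integers and gate strings, a search over quantum circuits reduces to a search over integers.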
This paper presents an Eulerian-Lagrangian algorithm for direct numerical simulation (DNS) of particle-laden flows. The algorithm is applicable to simulations of dilute suspensions of small inertial particles in turbulent carrier flow. The Eulerian framework numerically resolves the turbulent carrier flow using a parallelized, finite-volume DNS solver on a staggered Cartesian grid. Particles are tracked with a point-particle method using a Lagrangian particle tracking (LPT) algorithm. The proposed Eulerian-Lagrangian algorithm is validated on an inertial particle-laden turbulent channel flow for different Stokes numbers. The particle concentration profiles and higher-order statistics of the carrier and dispersed phases agree well with benchmark results. We investigated the effect of fluid velocity interpolation and numerical integration schemes of the particle tracking algorithm on particle dispersion statistics. The suitability of fluid velocity interpolation schemes for predicting particle dispersion statistics is discussed in the framework of the particle tracking algorithm coupled to the finite-volume solver. In addition, we present the parallelization strategies implemented in the algorithm and evaluate their parallel performance.
This study presents a novel hybrid topology optimization and mold design framework that integrates process fitting, runner system optimization, and structural analysis to significantly enhance the performance of injection-molded parts. At its core, the framework employs a greedy algorithm that generates runner systems based on adjacency and shortest-path principles, leading to improvements in both mechanical strength and material efficiency. The design optimization is validated through a series of rigorous experimental tests, including three-point bending and torsion tests performed on key-socket frames, ensuring that the optimized designs meet practical performance requirements. A critical innovation of the framework is the Adjacent Element Temperature-Driven Prestress Algorithm (AETDPA), which refines the prediction of mechanical failure and strength fitting. This algorithm has been shown to deliver mesh-independent accuracy, thereby enhancing the reliability of simulation results across design iterations. The framework's adaptability is further demonstrated by its ability to adjust optimization methods based on the unique geometry of each part, thus accelerating the overall design process while ensuring structural integrity. Beyond its immediate applications in injection molding, the study explores the potential extension of the framework to metal additive manufacturing, opening new avenues for its use in advanced manufacturing technologies. Numerical simulations, including finite element analysis, support the experimental findings and confirm that the optimized designs provide a balanced combination of strength, durability, and efficiency. Furthermore, the integration challenges with existing injection molding practices are addressed, underscoring the framework's scalability and industrial relevance. Overall, this hybrid topology optimization framework offers a computationally efficient and robust solution for advanced manufacturing applications, promising significant improvements in design efficiency, cost-effectiveness, and product performance. Future work will focus on further enhancing algorithm robustness and exploring additional applications across diverse manufacturing processes.
To address the steering instability and hysteresis of agricultural robots during movement, a fused PID control method combining particle swarm optimization (PSO) and a genetic algorithm (GA) is proposed. The fusion algorithm exploits the fast optimization ability of PSO to improve the population screening step of GA. Simulink simulation results showed that the convergence of the fusion algorithm's fitness function was accelerated, the system response settling time was reduced, and the overshoot was almost zero. The algorithm was then applied to steering tests of an agricultural robot in various scenarios. After modeling the robot's steering system, tests in the unloaded suspended state showed that, compared with manually tuned PID control and GA-based PID control, the fusion-algorithm-based PID control reduced the rise time, settling time, and overshoot, improving the response speed and stability of the system. In actual road steering tests, the fusion-algorithm-based PID control achieved the shortest rise time, about 4.43 s. When the target pulse number was set to 100, the actual mean value in the steady-state regulation stage was about 102.9, the closest to the target among the three control methods, while the overshoot was also reduced. Steering tests across the various scenarios showed that the proposed fusion-algorithm-based PID control has good anti-interference ability, adapts to changes in environment and load, and improves the performance of the control system. It is effective for the steering control of agricultural robots and can serve as a reference for the precise steering control of other robots.
We explored the effects of algorithmic opacity on employees' playing dumb and evasive hiding rather than rationalized hiding, examining the mediating role of job insecurity and the moderating role of employee-AI collaboration. Participants were 421 full-time employees (female = 46.32%, junior employees = 31.83%) from a variety of organizations and industries that interact with AI. Employees provided data on algorithmic opacity, job insecurity, knowledge hiding, employee-AI collaboration, and control variables. Structural equation modeling indicated that algorithmic opacity exacerbated employees' job insecurity, and that job insecurity mediated the relationship between algorithmic opacity and playing dumb and evasive hiding, but not rationalized hiding. The relationship between algorithmic opacity and playing dumb and evasive hiding was more positive when the level of employee-AI collaboration was higher. These findings suggest that employee-AI collaboration reinforces the indirect relationship between algorithmic opacity and playing dumb and evasive hiding. Our study contributes to research on human-AI collaboration by exploring the dark side of employee-AI collaboration.
The application of machine learning to predicting end-point temperature in the basic oxygen furnace steelmaking process was investigated, addressing gaps in the field, particularly limited dataset sizes and the underutilization of boosting algorithms. Utilizing a substantial dataset containing over 20,000 heats, significantly larger than those in previous studies, a comprehensive evaluation of five advanced machine learning models was conducted: four ensemble learning algorithms, namely XGBoost, LightGBM, and CatBoost (three boosting algorithms) along with random forest (a bagging algorithm), as well as a neural network model, the multilayer perceptron. Our comparative analysis reveals that Bayesian-optimized boosting models demonstrate exceptional robustness and accuracy, achieving the highest R-squared values, the lowest root mean square error, the lowest mean absolute error, and the best hit ratio. CatBoost exhibited superior performance, with its test R-squared improving by 4.2% compared with that of the random forest and by 0.8% compared with that of the multilayer perceptron. This highlights the efficacy of boosting algorithms in refining complex industrial processes. Additionally, our investigation into the impact of varying dataset sizes, ranging from 500 to 20,000 heats, on model accuracy underscores the importance of leveraging larger-scale datasets to improve the accuracy and stability of predictive models.
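The four evaluation metrics named above (R-squared, RMSE, MAE, hit ratio) can be computed as follows. The hit ratio here is the fraction of predictions within a tolerance band of the measured value; the ±15 band is an illustrative assumption, since the abstract does not state the tolerance used.

```python
import math

def regression_metrics(y_true, y_pred, hit_tol=15.0):
    """R-squared, RMSE, MAE, and hit ratio (fraction of predictions
    within `hit_tol` of the truth). The tolerance is hypothetical."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot
    rmse = math.sqrt(ss_res / n)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    hit = sum(abs(t - p) <= hit_tol for t, p in zip(y_true, y_pred)) / n
    return r2, rmse, mae, hit

# Example: end-point temperature predictions within a few degrees of truth.
y_true = [1650.0, 1662.0, 1671.0, 1655.0, 1668.0]
y_pred = [1648.0, 1665.0, 1669.0, 1652.0, 1670.0]
r2, rmse, mae, hit = regression_metrics(y_true, y_pred)
```

Comparing models on all four metrics jointly, rather than accuracy alone, is what supports the robustness claims above.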
Satellite Internet (SI) provides broadband access as a critical information infrastructure in 6G. However, with the integration of the terrestrial Internet, the influx of massive terrestrial traffic will bring significant threats to SI, among which DDoS attacks intensify the erosion of limited bandwidth resources. Therefore, this paper proposes a DDoS attack tracking scheme using a multi-round iterative Viterbi algorithm to achieve high-accuracy attack path reconstruction and fast internal source locking, protecting SI at the source. First, to reduce communication overhead, the logarithmic representation of the traffic volume is added to the digests after modeling SI, generating a lightweight deviation degree to construct the observation probability matrix for the Viterbi algorithm. Second, the path node matrix is expanded to multi-index matrices in the Viterbi algorithm to store index information for all probability values, deriving the non-repeating, maximum-probability path. Finally, multiple rounds of iterative Viterbi tracking are performed locally to track the DDoS attack based on trimming the tracking results. Simulation and experimental results show that the scheme achieves 96.8% tracking accuracy for external and internal DDoS attacks within 2.5 seconds, with a communication overhead of 268 KB/s, effectively protecting the limited bandwidth resources of SI.
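The multi-round iterative variant is specific to the paper, but it builds on the classic Viterbi core: the most probable hidden-state path given an observation sequence. A minimal sketch follows; the two-node example and its probabilities are invented for illustration (in the paper, observations would be traffic-deviation degrees, and the multi-index/non-repeatability extensions are omitted here).

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Classic Viterbi decoding: dynamic programming over hidden states,
    keeping for each state the best-probability path reaching it."""
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at time t.
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states)
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(states, key=lambda s: V[-1][s])
    return path[best], V[-1][best]

# Tiny two-node example: which node sequence most likely carried the attack
# flow, given observed deviation levels ('hi'/'lo') at each step.
states = ("A", "B")
start_p = {"A": 0.6, "B": 0.4}
trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
emit_p = {"A": {"hi": 0.8, "lo": 0.2}, "B": {"hi": 0.3, "lo": 0.7}}
best_path, prob = viterbi(["hi", "hi", "lo"], states, start_p, trans_p, emit_p)
```

The scheme above replaces the emission model with the traffic-deviation-based observation matrix and iterates this decoding over multiple rounds with trimming.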
Cooperative task assignment is one of the key research focuses in the field of unmanned aerial vehicles (UAVs). In this paper, an energy learning hyper-heuristic (EL-HH) algorithm is proposed to address the cooperative task assignment problem of heterogeneous UAVs under complex constraints. First, a mathematical model is designed to define the scenario, complex constraints, and objective function of the problem. Then, the scheme encoding, the EL-HH strategy, multiple optimization operators, and the task sequence and time adjustment strategies are designed within the EL-HH algorithm. The scheme encoding has three layers: task sequence, UAV sequence, and waiting time. The EL-HH strategy applies an energy learning method to adaptively adjust the energies of operators, thereby guiding their selection and application. The multiple optimization operators update schemes in different ways, enabling the algorithm to fully explore the solution space. The task sequence and time adjustment strategies then reorder tasks and insert waiting times. Through the iterative optimization process, a satisfactory assignment scheme is ultimately produced. Finally, simulations and experiments verify the effectiveness of the proposed algorithm.
In this paper, we focus on the recovery of piecewise sparse signals containing both fast-decaying and slow-decaying nonzero entries. To improve the performance of the classic Orthogonal Matching Pursuit (OMP) and Generalized Orthogonal Matching Pursuit (GOMP) algorithms on this problem, we propose the Piecewise Generalized Orthogonal Matching Pursuit (PGOMP) algorithm, which treats mixed-decaying sparse signals as piecewise sparse signals with two components whose nonzero entries have different decay factors. The algorithm incorporates piecewise selection and deletion to retain the most significant entries according to the sparsity of each component. We provide a theoretical analysis based on the mutual coherence of the measurement matrix and the decay factors of the nonzero entries, establishing a sufficient condition for the PGOMP algorithm to select at least two correct indices in each iteration. Numerical simulations and an image decomposition experiment demonstrate that the proposed algorithm significantly improves the support recovery probability by effectively matching piecewise sparsity with decay factors.
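For context, the baseline OMP that PGOMP extends works by greedily picking the dictionary column most correlated with the residual and re-fitting all chosen coefficients by least squares. A minimal sketch follows; the piecewise selection/deletion of PGOMP is not reproduced here, and the problem sizes are arbitrary.

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Baseline Orthogonal Matching Pursuit: recover a k-sparse x from
    y = A x by greedy atom selection plus least-squares refitting."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

# Recover a 3-sparse signal from 20 random measurements of length-40 x.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)  # unit-norm columns
x_true = np.zeros(40)
x_true[[3, 17, 31]] = [2.0, -1.5, 1.0]
x_hat = omp(A, A @ x_true, k=3)
```

PGOMP's change is to run this greedy machinery per component, matching the selection budget to each component's sparsity and decay factor.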
This study proposes a biometric access control system utilising an improved Cultural Chicken Swarm Optimization (CCSO) technique. The approach mitigates the limitations of conventional Chicken Swarm Optimization (CSO), especially in higher dimensions, where diversity is lost during solution space exploration. Our experiments involved 600 sample images encompassing facial, iris, and fingerprint data, collected from 200 students at Ladoke Akintola University of Technology (LAUTECH), Ogbomoso. The results demonstrate the remarkable effectiveness of CCSO, yielding accuracy rates of 90.42%, 91.67%, and 91.25% within 54.77, 27.35, and 113.92 s for facial, fingerprint, and iris biometrics, respectively. These outcomes significantly outperform those of the conventional CSO technique, which produced accuracy rates of 82.92%, 86.25%, and 84.58% at 92.57, 63.96, and 163.94 s for the same biometric modalities. The findings reveal that CCSO, through its integration of Cultural Algorithm (CA) operators into CSO, not only enhances algorithm performance, exhibiting computational efficiency and superior accuracy, but also carries broader implications beyond biometric systems. This innovation offers practical benefits in security enhancement, operational efficiency, and adaptability across diverse user populations, shaping more effective and resource-efficient access control systems with real-world applicability.
Evolutionary algorithms have been extensively utilized in practical applications. However, manually designed population updating formulas are inherently prone to the subjective influence of the designer. Genetic programming (GP), characterized by its tree-based solution structure, is a widely adopted technique for optimizing the structure of mathematical models tailored to real-world problems. This paper introduces a GP-based framework (GP-EAs) for the autonomous generation of update formulas, aiming to reduce human intervention. Partial modifications to tree-based GP are made, encompassing adjustments to its initialization process and to fundamental update operations such as crossover and mutation. Suitable function sets and terminal sets are designed for the selected evolutionary algorithm, from which an improved update formula is ultimately derived. The Cat Swarm Optimization (CSO) algorithm is chosen as a case study, and GP-EAs is employed to regenerate the velocity update formulas of CSO. To validate the feasibility of GP-EAs, the comprehensive performance of the enhanced algorithm (GP-CSO) was evaluated on the CEC2017 benchmark suite. Furthermore, GP-CSO is applied to deduce suitable embedding factors, thereby improving the robustness of the digital watermarking process. The experimental results indicate that the update formulas generated through training with GP-EAs possess excellent performance scalability and practical application proficiency.
This work proposes an optimization method for gas storage operation parameters under multi-factor coupled constraints to improve the peak-shaving capacity of gas storage reservoirs while ensuring operational safety. Previous research primarily focused on integrating reservoir, wellbore, and surface facility constraints, often resulting in broad constraint ranges and slow model convergence. To solve this problem, the present study introduces additional constraints on maximum withdrawal rates by combining binomial deliverability equations with material balance equations for closed gas reservoirs, while considering extreme peak-shaving demands. This approach effectively narrows the constraint range. Subsequently, a collaborative optimization model with maximum gas production as the objective function is established, employing a joint solution strategy that combines genetic algorithms with numerical simulation techniques. Finally, the methodology was applied to optimize the operational parameters of Gas Storage T. The results demonstrate that: (1) the model converged after 6 iterations, significantly improving convergence speed; (2) the maximum working gas volume reached 11.605×10^(8) m^(3), an increase of 13.78% over the traditional optimization method; (3) the method greatly improves operational safety and the ultimate peak-shaving capability. The research provides important technical support for intelligent decision-making on injection and production parameters of gas storage and for improving peak-shaving capability.
To extract and display the significant information of combat systems,this paper introduces the methodology of functional cartography into combat networks and proposes an integrated framework named“functional cartogra...To extract and display the significant information of combat systems,this paper introduces the methodology of functional cartography into combat networks and proposes an integrated framework named“functional cartography of heterogeneous combat networks based on the operational chain”(FCBOC).In this framework,a functional module detection algorithm named operational chain-based label propagation algorithm(OCLPA),which considers the cooperation and interactions among combat entities and can thus naturally tackle network heterogeneity,is proposed to identify the functional modules of the network.Then,the nodes and their modules are classified into different roles according to their properties.A case study shows that FCBOC can provide a simplified description of disorderly information of combat networks and enable us to identify their functional and structural network characteristics.The results provide useful information to help commanders make precise and accurate decisions regarding the protection,disintegration or optimization of combat networks.Three algorithms are also compared with OCLPA to show that FCBOC can most effectively find functional modules with practical meaning.展开更多
Abstract: With the rapid development of intelligent navigation technology, efficient and safe path planning for mobile robots has become a core requirement. To address the challenges of complex dynamic environments, this paper proposes an intelligent path planning framework based on grid map modeling. First, an improved Safe and Smooth A* (SSA*) algorithm is employed for global path planning. By incorporating obstacle expansion and corner-point optimization, the proposed SSA* enhances the safety and smoothness of the planned path. Then, a Partitioned Dynamic Window Approach (PDWA) is integrated for local planning, which is triggered when dynamic or sudden static obstacles appear, enabling real-time obstacle avoidance and path adjustment. A unified objective function is constructed, comprehensively considering path length, safety, and smoothness. Multiple simulation experiments are conducted on typical port grid maps. The results demonstrate that the improved SSA* significantly reduces the number of expanded nodes and the computation time in static environments while generating smoother and safer paths. Meanwhile, the PDWA exhibits strong real-time performance and robustness in dynamic scenarios, achieving shorter paths and lower planning times compared with other graph search algorithms. The proposed method maintains stable performance across maps of different scales and various port scenarios, verifying its practicality and potential for wider application.
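The obstacle-expansion idea behind SSA* can be illustrated with a minimal grid A* sketch. This is not the paper's exact formulation: the inflation radius, the 4-connected moves, and the Manhattan heuristic are illustrative assumptions.

```python
import heapq

def inflate(grid, r=1):
    # Mark every cell within Chebyshev distance r of an obstacle as blocked,
    # a stand-in for SSA*'s obstacle expansion (r is an assumed safety margin).
    n, m = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(n):
        for j in range(m):
            if grid[i][j] == 1:
                for a in range(max(0, i - r), min(n, i + r + 1)):
                    for b in range(max(0, j - r), min(m, j + r + 1)):
                        out[a][b] = 1
    return out

def astar(grid, start, goal):
    # Standard 4-connected A* with a Manhattan heuristic on a 0/1 grid.
    n, m = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, [start])]
    closed = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = node[0] + di, node[1] + dj
            if 0 <= a < n and 0 <= b < m and grid[a][b] == 0 and (a, b) not in closed:
                heapq.heappush(frontier, (g + 1 + h((a, b)), g + 1, (a, b), path + [(a, b)]))
    return None  # no path
```

Planning on the inflated grid keeps the path at least `r` cells away from every original obstacle, which is where the safety gain comes from.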
Funding: Projects (52374138, 51764013) supported by the National Natural Science Foundation of China; Project (20204BCJ22005) supported by the Training Plan for Academic and Technical Leaders of Major Disciplines of Jiangxi Province, China; Project (2019M652277) supported by the China Postdoctoral Science Foundation; Project (20192ACBL21014) supported by the Natural Science Youth Foundation Key Projects of Jiangxi Province, China.
Abstract: Cemented tailings backfill (CTB) with initial defects is more prone to destabilization damage under the influence of various unfavorable factors during the mining process. To investigate the influence of such defects on the stability of underground mining engineering, this paper simulates initial defects of different degrees inside the CTB by adding different contents of air-entraining agent (AEA), investigates the acoustic emission RA/AF eigenvalues of CTB with different AEA contents under uniaxial compression, and adopts various denoising algorithms (e.g., moving-average smoothing, median filtering, and outlier detection) to improve the accuracy of the data. The variance and autocorrelation coefficients of the RA/AF parameters were analyzed in conjunction with critical slowing down (CSD) theory. The results show that the acoustic emission RA/AF values can be used to characterize the progressive damage evolution of CTB. The denoising algorithms processed the AE signals to reduce the effects of extraneous noise and anomalous spikes. Changes in the variance curves provide clear precursor information, while abrupt changes in the autocorrelation coefficient can be used as an auxiliary warning signal for localization. The dramatic increase in the variance and autocorrelation coefficient curves during the compression-tightening stage, which is influenced by the initial defects, can lead to false warnings. As the initial defects of the CTB increase, its instability precursor time and instability time are prolonged, the peak stress decreases, and the time difference between the precursor and the instability damage becomes smaller. The results provide a new method for real-time monitoring and early warning of CTB instability damage.
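The CSD indicators used here, sliding-window variance and lag-1 autocorrelation, together with a simple median denoising pass, can be sketched as follows. The window length and the toy series are illustrative assumptions, not the paper's pipeline.

```python
def median3(s):
    # 3-point median filter: removes isolated spikes, one of the denoising
    # options mentioned alongside moving-average smoothing.
    if len(s) < 3:
        return list(s)
    return [s[0]] + [sorted(s[i - 1:i + 2])[1] for i in range(1, len(s) - 1)] + [s[-1]]

def csd_indicators(series, win):
    # Sliding-window variance and lag-1 autocorrelation, the two CSD precursors.
    variances, autocorrs = [], []
    for i in range(len(series) - win + 1):
        w = series[i:i + win]
        mu = sum(w) / win
        dev = [x - mu for x in w]
        sq = sum(d * d for d in dev)
        variances.append(sq / win)
        num = sum(dev[k] * dev[k + 1] for k in range(win - 1))
        autocorrs.append(num / sq if sq else 0.0)
    return variances, autocorrs
```

A rising variance curve (and a jump in autocorrelation) over successive windows is the kind of precursor signal the paper monitors.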
Abstract: Optimizing convolutional neural networks (CNNs) for IoT attack detection remains a critical yet challenging task due to the need to balance multiple performance metrics beyond mere accuracy. This study proposes a unified and flexible optimization framework that leverages metaheuristic algorithms to automatically optimize CNN configurations for IoT attack detection. Unlike conventional single-objective approaches, the proposed method formulates a global multi-objective fitness function that integrates accuracy, precision, recall, and model size (a speed/model-complexity penalty) with adjustable weights. This design enables both single-objective and weighted-sum multi-objective optimization, allowing adaptive selection of optimal CNN configurations for diverse deployment requirements. Two representative metaheuristic algorithms, Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are employed to optimize CNN hyperparameters and structure. At each generation/iteration, the best configuration is selected as the most balanced solution across optimization objectives, i.e., the one achieving the maximum value of the global objective function. Experimental validation on two benchmark datasets, Edge-IIoT and CIC-IoT2023, demonstrates that the proposed GA- and PSO-based models significantly enhance detection accuracy (94.8%–98.3%) and generalization compared with manually tuned CNN configurations, while maintaining compact architectures. The results confirm that the multi-objective framework effectively balances predictive performance and computational efficiency. This work establishes a generalizable and adaptive optimization strategy for deep learning-based IoT attack detection and provides a foundation for future hybrid metaheuristic extensions in broader IoT security applications.
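A weighted-sum fitness of the kind described can be sketched in a few lines. The weight vector and the parameter-count normalization cap are assumptions for illustration; the paper's actual weights and size penalty are not given here.

```python
def fitness(acc, prec, rec, n_params,
            weights=(0.4, 0.2, 0.2, 0.2), max_params=1_000_000):
    # Weighted-sum objective: predictive terms plus a model-size penalty.
    # Weights and the normalization cap are illustrative assumptions.
    size_term = 1.0 - min(n_params / max_params, 1.0)  # smaller model -> higher score
    w_acc, w_prec, w_rec, w_size = weights
    return w_acc * acc + w_prec * prec + w_rec * rec + w_size * size_term

def best_config(candidates):
    # Select the generation's most balanced configuration: the one
    # maximizing the global objective, as the abstract describes.
    return max(candidates,
               key=lambda c: fitness(c["acc"], c["prec"], c["rec"], c["params"]))
```

Setting one weight to 1 and the rest to 0 recovers the single-objective case, which is how the framework supports both modes.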
Abstract: In the current era of intelligent technologies, comprehensive and precise regional coverage path planning is critical for tasks such as environmental monitoring, emergency rescue, and agricultural plant protection. Owing to their exceptional flexibility and rapid deployment capabilities, unmanned aerial vehicles (UAVs) have emerged as ideal platforms for accomplishing these tasks. This study proposes a swarm A*-guided Deep Q-Network (SADQN) algorithm to address the coverage path planning (CPP) problem for UAV swarms in complex environments. First, to overcome the dependency of traditional modeling methods on regular terrain, this study proposes an improved cellular decomposition method for map discretization. Simultaneously, a distributed UAV swarm system architecture is adopted, which, through the integration of multi-scale maps, addresses the issues of redundant operations and flight conflicts in multi-UAV cooperative coverage. Second, the heuristic mechanism of the A* algorithm is combined with full-coverage path planning, and this approach is incorporated at the initial stage of Deep Q-Network (DQN) training to provide effective guidance in action selection, thereby accelerating convergence. Additionally, a prioritized experience replay mechanism is introduced to further enhance the coverage performance of the algorithm. To evaluate the efficacy of the proposed algorithm, simulation experiments were conducted in several irregular environments and compared with several popular algorithms. Simulation results show that the SADQN algorithm outperforms other methods, achieving performance comparable to that of the baseline prior algorithm, with an average coverage efficiency exceeding 2.6 and fewer turning maneuvers. In addition, the algorithm demonstrates excellent generalization ability, enabling it to adapt to different environments.
Funding: Co-supported by the Foundation of Shanghai Astronautics Science and Technology Innovation, China (No. SAST2022-114); the National Natural Science Foundation of China (No. 62303378); the National Natural Science Foundation of China (Nos. 124B2031, 12202281); and the Foundation of China National Key Laboratory of Science and Technology on Test Physics & Numerical Mathematics, China (No. 08-YY-2023-R11).
Abstract: The problem of collision avoidance for non-cooperative targets has received significant attention from researchers in recent years. Non-cooperative targets exhibit uncertain states and unpredictable behaviors, making collision avoidance significantly more challenging than for space debris. Much existing research focuses on the continuous-thrust model, whereas the impulsive maneuver model is more appropriate for long-duration and long-distance avoidance missions. Additionally, it is important to minimize the impact on the original mission while avoiding non-cooperative targets. On the other hand, existing avoidance algorithms are computationally complex and time-consuming, especially given the limited computing capability of on-board computers, posing challenges for practical engineering applications. To overcome these difficulties, this paper makes the following key contributions: (A) a turn-based (sequential decision-making), limited-area, impulsive collision avoidance model considering the time delay of precision orbit determination is established for the first time; (B) a novel Selection Probability Learning Adaptive Search-depth Search Tree (SPL-ASST) algorithm is proposed for non-cooperative target avoidance, which improves decision-making efficiency by introducing an adaptive-search-depth mechanism and a neural network into traditional Monte Carlo Tree Search (MCTS). Numerical simulations confirm the effectiveness and efficiency of the proposed method.
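SPL-ASST builds on MCTS, whose canonical child-selection rule (UCB1) can be sketched as below. The paper's learned selection probabilities and adaptive search depth are not reproduced; this shows only the textbook selection step, with an assumed exploration constant.

```python
import math

def ucb1(mean_reward, visits, parent_visits, c=1.4):
    # UCB1 score used in MCTS selection: exploitation plus exploration bonus.
    if visits == 0:
        return float("inf")  # always try unvisited children first
    return mean_reward + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    # children: list of (mean_reward, visits); returns index of best child.
    return max(range(len(children)),
               key=lambda i: ucb1(children[i][0], children[i][1], parent_visits))
```

In a full MCTS loop this selection step alternates with expansion, rollout (or, as in SPL-ASST, a neural evaluation), and backpropagation of rewards.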
Funding: Supported in part by the National Key Research and Development Program of China under Grant No. 2021YFF0901300, and in part by the National Natural Science Foundation of China under Grant Nos. 62173076 and 72271048.
Abstract: The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which can lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. As a consequence, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP, targeting the minimal makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search, by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability-curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments verify the effectiveness of the memory mechanism, and the results show that this mechanism is capable of improving the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we have discovered that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well suited for real-world distributed flow shop scheduling.
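The baseline iterated greedy loop that MLIGA extends can be sketched for a single-factory permutation flow shop. The destruction size, iteration count, and the simplified accept-if-no-worse rule are assumptions; MLIGA's memory, learning, and acceptance mechanisms are not reproduced.

```python
import random

def makespan(seq, p):
    # p[j][k]: processing time of job j on machine k (machines in series).
    m = len(p[0])
    c = [0] * m  # completion time of the latest scheduled job on each machine
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def iterated_greedy(p, d=2, iters=200, seed=0):
    # Basic destruction-construction IG: remove d random jobs, greedily
    # reinsert each at its best position, keep the result if no worse.
    rng = random.Random(seed)
    best = list(range(len(p)))
    for _ in range(iters):
        s = best[:]
        removed = [s.pop(rng.randrange(len(s))) for _ in range(d)]
        for j in removed:
            pos = min(range(len(s) + 1),
                      key=lambda i: makespan(s[:i] + [j] + s[i:], p))
            s.insert(pos, j)
        if makespan(s, p) <= makespan(best, p):
            best = s
    return best, makespan(best, p)
```

MLIGA's contribution sits on top of this skeleton: choosing the restart solution from memory and tuning `d` and the acceptance rule by reinforcement learning.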
Funding: Supported by the Major Science and Technology Project of Jilin Province (20220301010GX) and the International Scientific and Technological Cooperation (20240402071GH).
Abstract: The liquid cooling system (LCS) of fuel cells is challenged by significant time delays, model uncertainties, pump and fan coupling, and frequent disturbances, leading to overshoot and control oscillations that degrade temperature regulation performance. To address these challenges, we propose a composite control scheme combining fuzzy logic and a variable-gain generalized super-twisting algorithm (VG-GSTA). First, a one-dimensional (1D) fuzzy logic controller (FLC) for the pump ensures stable coolant flow, while a two-dimensional (2D) FLC for the fan regulates the stack temperature near the reference value. The VG-GSTA is then introduced to eliminate steady-state errors, offering resistance to disturbances and minimizing control oscillations. The equilibrium optimizer is used to fine-tune the VG-GSTA parameters. Co-simulation verifies the effectiveness of our method, demonstrating its advantages in terms of disturbance immunity, overshoot suppression, tracking accuracy, and response speed.
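The fixed-gain super-twisting core that VG-GSTA generalizes can be sketched with an Euler discretization on a toy sliding variable. The gains, step size, plant, and constant disturbance are all illustrative assumptions, not the paper's LCS model or its variable-gain law.

```python
import math

def supertwisting_step(s, v, k1, k2, dt):
    # One Euler step of the super-twisting controller:
    #   u = -k1*|s|^(1/2)*sign(s) + v,   dv/dt = -k2*sign(s)
    sgn = (s > 0) - (s < 0)
    u = -k1 * math.sqrt(abs(s)) * sgn + v
    v = v - k2 * sgn * dt
    return u, v

def simulate(s0=1.0, d=0.3, k1=1.5, k2=1.1, dt=0.001, steps=20000):
    # Sliding variable driven by control plus a constant disturbance d:
    #   ds/dt = u + d.  The integral term v learns to cancel d, which is
    # how the algorithm removes steady-state error without large chattering.
    s, v = s0, 0.0
    for _ in range(steps):
        u, v = supertwisting_step(s, v, k1, k2, dt)
        s += (u + d) * dt
    return s, v
```

After convergence, `v` settles near `-d`, so the disturbance is rejected even though the control signal stays continuous; the variable-gain extension in the paper adapts `k1`, `k2` online.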
Abstract: Quantum computing offers unprecedented computational power, enabling simultaneous computations beyond the reach of traditional computers. Quantum computers differ significantly from classical computers, necessitating a distinct approach to algorithm design, one that involves taming quantum mechanical phenomena. This paper extends the numbering of computable programs so that it can be applied in the quantum computing context. Numbering computable programs is a theoretical computer science concept that assigns unique numbers to individual programs or algorithms. Common methods include Gödel numbering, which encodes programs as strings of symbols or characters and is often used in formal systems and mathematical logic. Based on the proposed numbering approach, this paper presents a mechanism to explore the set of possible quantum algorithms. The proposed approach is able to construct useful circuits such as the Quantum Key Distribution BB84 protocol, which enables a sender and receiver to establish a secure cryptographic key via a quantum channel. The proposed approach facilitates the process of exploring and constructing quantum algorithms.
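A Gödel-style numbering of gate sequences over a finite alphabet can be sketched with bijective base-k encoding, so that every natural number names exactly one circuit and vice versa. The four-gate alphabet is an assumed toy example, not the paper's construction.

```python
GATES = ["H", "X", "Z", "CNOT"]  # assumed finite gate alphabet for illustration

def number_to_circuit(n):
    # Bijective base-k decoding: 0 -> [], 1 -> ["H"], 2 -> ["X"], ...
    # Every n >= 0 maps to exactly one gate sequence.
    seq, k = [], len(GATES)
    while n > 0:
        n -= 1
        seq.append(GATES[n % k])
        n //= k
    return seq

def circuit_to_number(seq):
    # Inverse mapping: recover the unique number of a gate sequence.
    n = 0
    for g in reversed(seq):
        n = n * len(GATES) + GATES.index(g) + 1
    return n
```

Enumerating n = 0, 1, 2, ... then walks through the whole space of circuits over the alphabet, which is the kind of systematic exploration the numbering enables.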
Funding: Supported by the P.G. Senapathy Center for Computing Resources at IIT Madras; funding provided by the Ministry of Education, Government of India; supported by the National Natural Science Foundation of China (Grant Nos. 12388101, 12472224, and 92252104).
Abstract: This paper presents an Eulerian-Lagrangian algorithm for direct numerical simulation (DNS) of particle-laden flows. The algorithm is applicable to simulations of dilute suspensions of small inertial particles in turbulent carrier flow. The Eulerian framework numerically resolves the turbulent carrier flow using a parallelized, finite-volume DNS solver on a staggered Cartesian grid. Particles are tracked using a point-particle method with a Lagrangian particle tracking (LPT) algorithm. The proposed Eulerian-Lagrangian algorithm is validated on an inertial particle-laden turbulent channel flow for different Stokes numbers. The particle concentration profiles and higher-order statistics of the carrier and dispersed phases agree well with benchmark results. We investigated the effect of fluid velocity interpolation and numerical integration schemes of the particle tracking algorithm on particle dispersion statistics. The suitability of fluid velocity interpolation schemes for predicting particle dispersion statistics is discussed in the framework of the particle tracking algorithm coupled to the finite-volume solver. In addition, we present the parallelization strategies implemented in the algorithm and evaluate their parallel performance.
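The fluid-velocity interpolation step at the heart of such LPT schemes can be sketched with trilinear interpolation, one of the standard choices compared in this kind of study. The unit grid spacing and node-based field layout are simplifying assumptions (a real staggered-grid solver stores velocity components on face centers).

```python
def trilinear(u, p):
    # Interpolate the node-based field u[i][j][k] at point p = (x, y, z),
    # assuming unit grid spacing and p strictly inside the domain.
    x, y, z = p
    i, j, k = int(x), int(y), int(z)
    fx, fy, fz = x - i, y - j, z - k
    val = 0.0
    for a, wx in ((i, 1 - fx), (i + 1, fx)):
        for b, wy in ((j, 1 - fy), (j + 1, fy)):
            for c, wz in ((k, 1 - fz), (k + 1, fz)):
                val += wx * wy * wz * u[a][b][c]
    return val
```

A particle is then advanced with the interpolated velocity, e.g. an explicit Euler update `x_p += trilinear(u, x_p) * dt`; higher-order integration and interpolation schemes change the dispersion statistics, which is what the paper investigates.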
Abstract: This study presents a novel hybrid topology optimization and mold design framework that integrates process fitting, runner system optimization, and structural analysis to significantly enhance the performance of injection-molded parts. At its core, the framework employs a greedy algorithm that generates runner systems based on adjacency and shortest-path principles, leading to improvements in both mechanical strength and material efficiency. The design optimization is validated through a series of rigorous experimental tests, including three-point bending and torsion tests performed on key-socket frames, ensuring that the optimized designs meet practical performance requirements. A critical innovation of the framework is the development of the Adjacent Element Temperature-Driven Prestress Algorithm (AETDPA), which refines the prediction of mechanical failure and strength fitting. This algorithm has been shown to deliver mesh-independent accuracy, thereby enhancing the reliability of simulation results across design iterations. The framework's adaptability is further demonstrated by its ability to adjust optimization methods based on the unique geometry of each part, thus accelerating the overall design process while ensuring structural integrity. In addition to its immediate applications in injection molding, the study explores the potential extension of this framework to metal additive manufacturing, opening new avenues for its use in advanced manufacturing technologies. Numerical simulations, including finite element analysis, support the experimental findings and confirm that the optimized designs provide a balanced combination of strength, durability, and efficiency. Furthermore, integration challenges with existing injection molding practices are addressed, underscoring the framework's scalability and industrial relevance. Overall, this hybrid topology optimization framework offers a computationally efficient and robust solution for advanced manufacturing applications, promising significant improvements in design efficiency, cost-effectiveness, and product performance. Future work will focus on further enhancing algorithm robustness and exploring additional applications across diverse manufacturing processes.
Abstract: Aiming to solve the steering instability and hysteresis of agricultural robots during movement, a fused PID control method combining particle swarm optimization (PSO) and a genetic algorithm (GA) was proposed. The fusion algorithm took advantage of the fast optimization ability of PSO to improve the population screening step of the GA. Simulink simulation results showed that convergence of the fitness function of the fusion algorithm was accelerated, the system response adjustment time was reduced, and the overshoot was almost zero. The algorithm was then applied to steering tests of an agricultural robot in various scenarios. After modeling the robot's steering system, steering test results in the unloaded, suspended state showed that PID control based on the fusion algorithm reduced the rise time, response adjustment time, and overshoot of the system, and improved the response speed and stability of the system, compared with manually tuned trial-and-error PID control and GA-based PID control. The road steering test results showed that the PID control based on the fusion algorithm had the shortest response rise time, about 4.43 s. When the target pulse number was set to 100, the actual mean value in the steady-state regulation stage was about 102.9, the closest to the target value among the three control methods, and the overshoot was reduced at the same time. The steering test results across scenarios showed that PID control based on the proposed fusion algorithm had good anti-interference ability; it can adapt to changes in environment and load and improve the performance of the control system. It was effective for the steering control of agricultural robots and can provide a reference for the precise steering control of other robots.
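Swarm-based PID tuning of the kind described can be sketched with plain PSO on a toy first-order plant. The plant, cost function, gain bounds, and swarm parameters are all assumptions for illustration; the paper's method additionally fuses PSO into the GA's population screening, which is not reproduced here.

```python
import random

def closed_loop_cost(gains, tau=0.5, dt=0.02, horizon=4.0):
    # Integral-squared-error of a PID loop on a first-order plant
    # dy/dt = (-y + u)/tau tracking a unit step (plant is an assumed stand-in).
    kp, ki, kd = gains
    y = integ = cost = 0.0
    prev_e = 1.0
    for _ in range(int(horizon / dt)):
        e = 1.0 - y
        integ += e * dt
        deriv = (e - prev_e) / dt
        prev_e = e
        u = kp * e + ki * integ + kd * deriv
        y += (-y + u) / tau * dt
        if abs(y) > 1e6:            # unstable gains: heavy penalty
            return 1e12
        cost += e * e * dt
    return cost

def pso_tune(m=12, iters=40, seed=1):
    # Plain PSO over (kp, ki, kd) with personal/global bests.
    rng = random.Random(seed)
    bounds = [(0.0, 5.0), (0.0, 5.0), (0.0, 1.0)]
    X = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(m)]
    V = [[0.0] * 3 for _ in range(m)]
    P = [x[:] for x in X]
    Pc = [closed_loop_cost(x) for x in X]
    gi = min(range(m), key=lambda i: Pc[i])
    G, Gc = P[gi][:], Pc[gi]
    for _ in range(iters):
        for i in range(m):
            for d in range(3):
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (G[d] - X[i][d]))
                lo, hi = bounds[d]
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            c = closed_loop_cost(X[i])
            if c < Pc[i]:
                P[i], Pc[i] = X[i][:], c
                if c < Gc:
                    G, Gc = X[i][:], c
    return G, Gc
```

The tuned gains should beat a naive proportional-only controller on the same cost, which is the basic promise of swarm-tuned PID.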
Funding: Supported by the Social Science Foundation of Liaoning Province (L23BJY022).
Abstract: We explored the effects of algorithmic opacity on employees' playing dumb and evasive hiding, rather than rationalized hiding. We examined the mediating role of job insecurity and the moderating role of employee-AI collaboration. Participants were 421 full-time employees (female = 46.32%, junior employees = 31.83%) from a variety of organizations and industries that interact with AI. Employees provided data on algorithmic opacity, job insecurity, knowledge hiding, employee-AI collaboration, and control variables. The results of structural equation modeling indicated that algorithmic opacity exacerbated employees' job insecurity, and job insecurity mediated between algorithmic opacity and playing dumb and evasive hiding, rather than rationalized hiding. The relationship between algorithmic opacity and playing dumb and evasive hiding was more positive when the level of employee-AI collaboration was higher. These findings suggest that employee-AI collaboration reinforces the indirect relationship between algorithmic opacity and playing dumb and evasive hiding. Our study contributes to research on human-AI collaboration by exploring the dark side of employee-AI collaboration.
Abstract: The application of machine learning was investigated for predicting the end-point temperature in the basic oxygen furnace steelmaking process, addressing gaps in the field, particularly large-scale dataset sizes and the underutilization of boosting algorithms. Utilizing a substantial dataset containing over 20,000 heats, significantly larger than those in previous studies, a comprehensive evaluation of five advanced machine learning models was conducted. These include four ensemble learning algorithms: XGBoost, LightGBM, and CatBoost (three boosting algorithms), along with random forest (a bagging algorithm), as well as a neural network model, namely the multilayer perceptron. Our comparative analysis reveals that Bayesian-optimized boosting models demonstrate exceptional robustness and accuracy, achieving the highest R-squared values, the lowest root mean square error, and the lowest mean absolute error, along with the best hit ratio. CatBoost exhibited superior performance, with its test R-squared improving by 4.2% compared with that of the random forest and by 0.8% compared with that of the multilayer perceptron. This highlights the efficacy of boosting algorithms in refining complex industrial processes. Additionally, our investigation into the impact of varying dataset sizes, ranging from 500 to 20,000 heats, on model accuracy underscores the importance of leveraging larger-scale datasets to improve the accuracy and stability of predictive models.
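The boosting principle the study relies on (sequentially fitting weak learners to residuals) can be shown with a from-scratch sketch using decision stumps and squared loss. This illustrates the mechanism only; XGBoost/LightGBM/CatBoost add regularization, histogram splits, and much more, and the synthetic data below is an assumption, not steelmaking data.

```python
def fit_stump(X, y):
    # Best single-feature threshold split minimizing squared error,
    # with leaf means as predictions.
    best = None
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            left = [y[i] for i in range(len(X)) if X[i][f] <= t]
            right = [y[i] for i in range(len(X)) if X[i][f] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            sse = (sum((v - ml) ** 2 for v in left)
                   + sum((v - mr) ** 2 for v in right))
            if best is None or sse < best[0]:
                best = (sse, f, t, ml, mr)
    return best[1:]

def boost(X, y, rounds=100, lr=0.2):
    # Gradient boosting for squared loss: each stump fits current residuals.
    base = sum(y) / len(y)
    pred = [base] * len(X)
    stumps = []
    for _ in range(rounds):
        resid = [y[i] - pred[i] for i in range(len(y))]
        f, t, ml, mr = fit_stump(X, resid)
        stumps.append((f, t, ml, mr))
        for i in range(len(X)):
            pred[i] += lr * (ml if X[i][f] <= t else mr)
    return base, lr, stumps

def predict(model, x):
    base, lr, stumps = model
    return base + sum(lr * (ml if x[f] <= t else mr)
                      for f, t, ml, mr in stumps)
```

On an additive target, stacking many shrunken stumps drives the training error toward zero, which is the behavior that makes boosting effective on tabular process data.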
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFA1005000) and the National Natural Science Foundation of China (Grant Nos. 62025110 and 62101308).
Abstract: Satellite Internet (SI) provides broadband access as a critical information infrastructure in 6G. However, with the integration of the terrestrial Internet, the influx of massive terrestrial traffic will bring significant threats to SI, among which DDoS attacks will intensify the erosion of limited bandwidth resources. Therefore, this paper proposes a DDoS attack tracking scheme using a multi-round iterative Viterbi algorithm to achieve high-accuracy attack path reconstruction and fast internal source locking, protecting SI at the source. First, to reduce communication overhead, the logarithmic representation of the traffic volume is added to the digests after modeling SI, generating a lightweight deviation degree to construct the observation probability matrix for the Viterbi algorithm. Second, the path node matrix is expanded to multi-index matrices in the Viterbi algorithm to store index information for all probability values, deriving the non-repeating, maximum-probability path. Finally, multiple rounds of iterative Viterbi tracking are performed locally to track DDoS attacks based on trimming the tracking results. Simulation and experimental results show that the scheme can achieve 96.8% tracking accuracy for external and internal DDoS attacks within 2.5 seconds, with a communication overhead of 268 KB/s, effectively protecting the limited bandwidth resources of SI.
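The scheme's core is the Viterbi algorithm, which recovers the maximum-probability hidden state path from observations. Below is the textbook version on a classic toy HMM (the paper's multi-index matrices, deviation-degree observations, and non-repetition constraint are extensions not shown here).

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    # Dynamic program: V[t][s] = (best probability of any state path
    # ending in s at time t, backpointer to the previous state).
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for t in range(1, len(obs)):
        V.append({})
        for s in states:
            prob, prev = max(
                ((V[t - 1][r][0] * trans_p[r][s] * emit_p[s][obs[t]], r)
                 for r in states), key=lambda pr: pr[0])
            V[t][s] = (prob, prev)
    # Backtrack from the most probable final state.
    cur = max(states, key=lambda s: V[-1][s][0])
    path = [cur]
    for t in range(len(obs) - 1, 0, -1):
        cur = V[t][cur][1]
        path.append(cur)
    return path[::-1]
```

In the tracking scheme, states would correspond to candidate path nodes and observations to per-node traffic digests; the reconstructed state path is the attack path.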
Funding: Funded by the National Natural Science Foundation of China (Grant No. 62203217); the Jiangsu Province Basic Research Program Natural Science Foundation (Grant No. BK20220885); the Hong Kong, Macao and Taiwan Science and Technology Cooperation Project of Special Foundation in Jiangsu Science and Technology Plan (Grant No. BZ2023057); the Fundamental Research Funds for the Central Universities (Grant No. NJ2024012); the China Postdoctoral Science Foundation (Grant No. GZC20242230); and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (Grant No. KYCX24_0586).
Abstract: Cooperative task assignment is one of the key research focuses in the field of unmanned aerial vehicles (UAVs). In this paper, an energy learning hyper-heuristic (EL-HH) algorithm is proposed to address the cooperative task assignment problem of heterogeneous UAVs under complex constraints. First, a mathematical model is designed to define the scenario, complex constraints, and objective function of the problem. Then, the scheme encoding, the EL-HH strategy, multiple optimization operators, and the task sequence and time adjustment strategies are designed in the EL-HH algorithm. The scheme encoding has three layers: task sequence, UAV sequence, and waiting time. The EL-HH strategy applies an energy learning method to adaptively adjust the energies of operators, thereby facilitating the selection and application of operators. Multiple optimization operators can update schemes in different ways, enabling the algorithm to fully explore the solution space. Afterward, the task order and time adjustment strategies adjust the task order and insert waiting time. Through the iterative optimization process, a satisfactory assignment scheme is ultimately produced. Finally, simulation and experiment verify the effectiveness of the proposed algorithm.
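The energy-learning idea of selecting among low-level operators can be sketched as roulette-wheel selection over operator energies, with energies updated toward a reward when an operator improves the incumbent solution. The update rule, learning rate, and reward/penalty values are illustrative assumptions, not the paper's exact mechanism.

```python
import random

def select_operator(energies, rng):
    # Roulette-wheel choice: probability proportional to operator energy.
    total = sum(energies.values())
    r = rng.random() * total
    for op, e in energies.items():
        r -= e
        if r <= 0:
            return op
    return op  # numerical fallback for floating-point edge cases

def update_energy(energies, op, improved, alpha=0.2, reward=1.0, penalty=0.1):
    # Exponential moving average toward a reward (on improvement) or penalty.
    target = reward if improved else penalty
    energies[op] = (1 - alpha) * energies[op] + alpha * target
```

Operators that keep improving the assignment accumulate energy and are selected more often, which is the adaptive behavior a hyper-heuristic needs.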
Funding: Supported by the National Key R&D Program of China (Grant No. 2023YFA1009200), the National Natural Science Foundation of China (Grant Nos. 12271079 and 12494552), and the Fundamental Research Funds for the Central Universities of China (Grant No. DUT24LAB127).
Abstract: In this paper, we focus on the recovery of piecewise sparse signals containing both fast-decaying and slow-decaying nonzero entries. To improve the performance of the classic Orthogonal Matching Pursuit (OMP) and Generalized Orthogonal Matching Pursuit (GOMP) algorithms on this problem, we propose the Piecewise Generalized Orthogonal Matching Pursuit (PGOMP) algorithm, which treats mixed-decaying sparse signals as piecewise sparse signals with two components whose nonzero entries have different decay factors. The algorithm incorporates piecewise selection and deletion to retain the most significant entries according to the sparsity of each component. We provide a theoretical analysis based on the mutual coherence of the measurement matrix and the decay factors of the nonzero entries, establishing a sufficient condition for the PGOMP algorithm to select at least two correct indices in each iteration. Numerical simulations and an image decomposition experiment demonstrate that the proposed algorithm significantly improves the support recovery probability by effectively matching piecewise sparsity with decay factors.
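The classic OMP baseline that PGOMP builds on can be sketched as follows: greedily pick the atom most correlated with the residual, then refit all selected coefficients by least squares. The random measurement setup below is an illustrative assumption; PGOMP's piecewise selection and deletion steps are not reproduced.

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    # Orthogonal Matching Pursuit for y = A x with x at most k-sparse.
    support = []
    x = np.zeros(A.shape[1])
    r = y.astype(float)
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # atom most correlated with residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef          # orthogonalized residual
        if np.linalg.norm(r) < tol:
            break
    x[support] = coef
    return x, support
```

With fast-decaying entries, OMP tends to find the large entries first and can miss the small slow-decaying ones, which is exactly the failure mode PGOMP's piecewise treatment targets.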
Funding: Supported by Ladoke Akintola University of Technology, Ogbomoso, Nigeria, and the University of Zululand, South Africa.
Abstract: This study proposes a biometric access control system utilizing an improved Cultural Chicken Swarm Optimization (CCSO) technique. This approach mitigates the limitations of conventional Chicken Swarm Optimization (CSO), especially in dealing with higher dimensions, where diversity is lost during solution space exploration. Our experimentation involved 600 sample images encompassing facial, iris, and fingerprint data, collected from 200 students at Ladoke Akintola University of Technology (LAUTECH), Ogbomoso. The results demonstrate the remarkable effectiveness of CCSO, yielding accuracy rates of 90.42%, 91.67%, and 91.25% within 54.77, 27.35, and 113.92 s for facial, fingerprint, and iris biometrics, respectively. These outcomes significantly outperform those achieved by the conventional CSO technique, which produced accuracy rates of 82.92%, 86.25%, and 84.58% at 92.57, 63.96, and 163.94 s for the same biometric modalities. The study's findings reveal that CCSO, through its integration of Cultural Algorithm (CA) operators into CSO, not only enhances algorithm performance, exhibiting computational efficiency and superior accuracy, but also carries broader implications beyond biometric systems. This innovation offers practical benefits in terms of security enhancement, operational efficiency, and adaptability across diverse user populations, shaping more effective and resource-efficient access control systems with real-world applicability.
Abstract: Evolutionary algorithms have been extensively utilized in practical applications. However, manually designed population updating formulas are inherently prone to the subjective influence of the designer. Genetic programming (GP), characterized by its tree-based solution structure, is a widely adopted technique for optimizing the structure of mathematical models tailored to real-world problems. This paper introduces a GP-based framework (GP-EAs) for the autonomous generation of update formulas, aiming to reduce human intervention. Partial modifications to tree-based GP have been introduced, encompassing adjustments to its initialization process and fundamental update operations such as crossover and mutation within the algorithm. By designing suitable function sets and terminal sets tailored to the selected evolutionary algorithm, an improved update formula is ultimately derived. The Cat Swarm Optimization (CSO) algorithm is chosen as a case study, and GP-EAs is employed to regenerate the speed update formulas of CSO. To validate the feasibility of GP-EAs, the comprehensive performance of the enhanced algorithm (GP-CSO) was evaluated on the CEC2017 benchmark suite. Furthermore, GP-CSO is applied to deduce suitable embedding factors, thereby improving the robustness of the digital watermarking process. The experimental results indicate that the update formulas generated through training with GP-EAs possess excellent performance scalability and practical application proficiency.
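The tree-based GP machinery described (expression trees over function and terminal sets, evolved by subtree mutation) can be sketched minimally. The function set, terminal set, and mutation probabilities below are illustrative assumptions, not the paper's sets for the CSO speed formula.

```python
import random

FUNCS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}
TERMS = ["v", "x", 1.0, 0.5]  # assumed terminal set: velocity, position, constants

def rand_tree(depth, rng):
    # Grow-style random expression tree over the function/terminal sets.
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMS)
    op = rng.choice(list(FUNCS))
    return (op, rand_tree(depth - 1, rng), rand_tree(depth - 1, rng))

def evaluate(tree, env):
    # Interpret a tree: tuples are function nodes, strings look up variables.
    if isinstance(tree, tuple):
        op, left, right = tree
        return FUNCS[op](evaluate(left, env), evaluate(right, env))
    return env[tree] if isinstance(tree, str) else tree

def mutate(tree, rng, depth=2):
    # Subtree mutation: replace a randomly chosen subtree with a fresh one.
    if not isinstance(tree, tuple) or rng.random() < 0.2:
        return rand_tree(depth, rng)
    op, left, right = tree
    if rng.random() < 0.5:
        return (op, mutate(left, rng, depth), right)
    return (op, left, mutate(right, rng, depth))
```

In the framework, each tree is a candidate update formula; its fitness would be the performance of the host algorithm (e.g. CSO) when run with that formula.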
Funding: Supported by the Science and Technology Research Program of Chongqing Municipal Education Commission (KJQN202401501, KJZD-M202401501).
Abstract: This work proposes an optimization method for gas storage operation parameters under multi-factor coupled constraints to improve the peak-shaving capacity of gas storage reservoirs while ensuring operational safety. Previous research primarily focused on integrating reservoir, wellbore, and surface facility constraints, often resulting in broad constraint ranges and slow model convergence. To solve this problem, the present study introduces additional constraints on maximum withdrawal rates by combining binomial deliverability equations with material balance equations for closed gas reservoirs, while considering extreme peak-shaving demands. This approach effectively narrows the constraint range. Subsequently, a collaborative optimization model with maximum gas production as the objective function is established; the model employs a joint solution strategy combining genetic algorithms and numerical simulation techniques. Finally, this methodology was applied to optimize the operational parameters of Gas Storage T. The results demonstrate that: (1) convergence of the model was achieved after 6 iterations, which significantly improved the convergence speed; (2) the maximum working gas volume reached 11.605×10^8 m^3, an increase of 13.78% compared with the traditional optimization method; (3) the method greatly improves operational safety and the ultimate peak load balancing capability. The research provides important technical support for intelligent decision-making on gas storage injection and production parameters and for improving peak load balancing capability.
Abstract: To extract and display the significant information of combat systems, this paper introduces the methodology of functional cartography into combat networks and proposes an integrated framework named "functional cartography of heterogeneous combat networks based on the operational chain" (FCBOC). In this framework, a functional module detection algorithm named the operational chain-based label propagation algorithm (OCLPA), which considers the cooperation and interactions among combat entities and can thus naturally handle network heterogeneity, is proposed to identify the functional modules of the network. Then, the nodes and their modules are classified into different roles according to their properties. A case study shows that FCBOC can provide a simplified description of the disorderly information of combat networks and enables identification of their functional and structural network characteristics. The results provide useful information to help commanders make precise and accurate decisions regarding the protection, disintegration, or optimization of combat networks. Three algorithms are also compared with OCLPA to show that FCBOC most effectively finds functional modules with practical meaning.
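OCLPA extends label propagation with operational-chain information; the plain label-propagation core it builds on can be sketched as follows. The sweep limit, tie-breaking, and the toy two-clique graph are illustrative assumptions, and the heterogeneity-aware weighting of OCLPA is not reproduced.

```python
import random

def label_propagation(adj, max_sweeps=50, seed=3):
    # Plain label propagation: each node repeatedly adopts the most frequent
    # label among its neighbors (ties broken at random) until labels stabilize.
    rng = random.Random(seed)
    labels = {v: v for v in adj}      # start with one unique label per node
    nodes = list(adj)
    for _ in range(max_sweeps):
        rng.shuffle(nodes)
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = {}
            for u in adj[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            top = max(counts.values())
            choice = rng.choice([l for l, c in counts.items() if c == top])
            if choice != labels[v]:
                labels[v] = choice
                changed = True
        if not changed:
            break
    return labels
```

Nodes sharing a final label form one detected module; in FCBOC these modules are then assigned functional roles.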