Aiming to solve the steering instability and hysteresis of agricultural robots during movement, a fused PID control method combining particle swarm optimization (PSO) and the genetic algorithm (GA) was proposed. The fusion algorithm took advantage of the fast optimization ability of PSO to improve the population screening step of the GA. Simulink simulation results showed that convergence of the fusion algorithm's fitness function was accelerated, the system response settling time was reduced, and the overshoot was almost zero. The algorithm was then applied to steering tests of an agricultural robot in various scenes. After modeling the robot's steering system, steering tests in the unloaded, suspended state showed that PID control based on the fusion algorithm reduced the rise time, settling time, and overshoot of the system and improved its response speed and stability compared with manually tuned (trial-and-error) PID control and GA-based PID control. Actual road steering tests showed that the response rise time of the fusion-algorithm-based PID control was the shortest, about 4.43 s. When the target pulse number was set to 100, the actual mean value in the steady-state regulation stage was about 102.9, the closest to the target value among the three control methods, and the overshoot was reduced at the same time. Steering tests under various scene states showed that PID control based on the proposed fusion algorithm had good anti-interference ability; it can adapt to changes in environment and load and improve the performance of the control system. It was effective for steering control of agricultural robots, and the method can provide a reference for the precise steering control of other robots.
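The abstract gives no implementation details, but the general pattern of metaheuristic PID tuning it relies on can be sketched: a plain PSO loop searches for (Kp, Ki, Kd) gains that minimize an ITAE-style cost on a simulated step response. The second-order plant, gain bounds, and PSO constants below are illustrative assumptions, not the paper's steering model, and the GA population-screening stage of the hybrid method is omitted.

```python
import numpy as np

def step_response_cost(gains, t_end=4.0, dt=0.002):
    """ITAE cost of a PID loop around an assumed second-order plant
    x'' + 2x' + x = u; purely illustrative, not the robot's steering model."""
    kp, ki, kd = gains
    x = x_dot = integ = prev_x = 0.0
    itae = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        err = 1.0 - x                       # unit step reference
        integ += err * dt
        deriv = -(x - prev_x) / dt          # derivative on measurement (avoids setpoint kick)
        prev_x = x
        u = kp * err + ki * integ + kd * deriv
        x_ddot = u - 2.0 * x_dot - x        # explicit Euler integration of the plant
        x_dot += x_ddot * dt
        x += x_dot * dt
        if abs(x) > 1e6:                    # penalize unstable gain sets
            return 1e9
        itae += t * abs(err) * dt
    return itae

def pso_tune_pid(n_particles=20, n_iter=60, bounds=(0.0, 20.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, 3))        # particles = (Kp, Ki, Kd)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([step_response_cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    w, c1, c2 = 0.7, 1.5, 1.5                          # assumed PSO constants
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 3))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([step_response_cost(p) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest, pbest_cost.min()

if __name__ == "__main__":
    gains, cost = pso_tune_pid()
    print("Kp, Ki, Kd =", np.round(gains, 3), " ITAE =", round(cost, 4))
```

Replacing `step_response_cost` with a cost computed from the real steering loop (or a co-simulation) is where the domain knowledge enters; the optimizer itself is generic.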
Accurate prediction of flood events is important for flood control and risk management. Machine learning techniques have contributed greatly to advances in flood prediction, and existing studies mainly focused on predicting flood resource variables using single or hybrid machine learning techniques. However, class-based flood predictions have rarely been investigated, although they can aid in quickly diagnosing comprehensive flood characteristics and proposing targeted management strategies. This study proposed a prediction approach for flood regime metrics and event classes coupling machine learning algorithms with clustering-deduced membership degrees. Five algorithms were adopted for this exploration. Results showed that the class membership degrees accurately determined event classes, with class hit rates up to 100%, compared with the four classes clustered from nine regime metrics. The nonlinear algorithms (Multiple Linear Regression, Random Forest, and least squares-Support Vector Machine) outperformed the linear techniques (Multiple Linear Regression and Stepwise Regression) in predicting flood regime metrics. The proposed approach predicted flood event classes well, with average class hit rates of 66.0%-85.4% and 47.2%-76.0% in the calibration and validation periods, respectively, particularly for the slow and late flood events. The predictive capability of the proposed approach for flood regime metrics and classes was considerably stronger than that of the hydrological modeling approach.
This paper proposes an equivalent modeling method for photovoltaic (PV) power stations via a particle swarm optimization (PSO) K-means clustering (KMC) algorithm with passive filter parameter clustering, to address the complexities, simulation time cost, and convergence problems of detailed PV power station models. First, the amplitude-frequency curves of different filter parameters are analyzed. Based on the results, a grouping parameter set for characterizing the external filter characteristics is established, and these parameters are further defined as clustering parameters. A single PV inverter model is then established as a prerequisite foundation. The proposed equivalent method combines the global search capability of PSO with the rapid convergence of KMC, effectively overcoming the tendency of KMC to become trapped in local optima. This approach enhances both clustering accuracy and numerical stability when determining equivalence for PV inverter units. Using the proposed clustering method, both a detailed PV power station model and an equivalent model are developed and compared. Simulation and hardware-in-the-loop (HIL) results based on the equivalent model verify that the equivalent method accurately represents the dynamic characteristics of PV power stations and adapts well to different operating conditions. The proposed equivalent modeling method provides an effective analysis tool for future renewable energy integration research.
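As a rough illustration of the PSO-plus-K-means idea (independent of the filter-parameter extraction and inverter modeling described above), the sketch below lets each PSO particle encode a full set of k candidate cluster centers, scores it by the within-cluster sum of squared errors, and then refines the best particle with ordinary Lloyd (K-means) iterations. The toy data, swarm size, and PSO constants are assumptions.

```python
import numpy as np

def sse(centers, data):
    """Within-cluster sum of squared errors for a set of centers."""
    d = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

def pso_kmeans(data, k=3, n_particles=15, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    n, dim = data.shape
    # each particle = k centers drawn from the data, flattened to one vector
    pos = data[rng.integers(0, n, (n_particles, k))].reshape(n_particles, k * dim)
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_cost = np.array([sse(p.reshape(k, dim), data) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    w, c1, c2 = 0.72, 1.49, 1.49                      # assumed PSO constants
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, k * dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([sse(p.reshape(k, dim), data) for p in pos])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], cost[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    # refine the PSO result with ordinary Lloyd (K-means) iterations
    centers = gbest.reshape(k, dim)
    for _ in range(20):
        labels = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(2).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = data[labels == j].mean(axis=0)
    return centers, labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # toy "filter parameter" vectors around three assumed group centers
    data = np.vstack([rng.normal(m, 0.3, (40, 2)) for m in ((0, 0), (3, 1), (1, 4))])
    centers, labels = pso_kmeans(data, k=3)
    print("cluster centers:\n", np.round(centers, 2))
```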
Existing feature selection methods for intrusion detection systems (IDS) in the Industrial Internet of Things (IIoT) often suffer from local optimality and high computational complexity. These challenges hinder traditional IDS from effectively extracting features while maintaining detection accuracy. This paper proposes an Industrial Internet of Things intrusion detection feature selection algorithm based on an improved whale optimization algorithm (GSLDWOA). The aim is to address the problems that feature selection algorithms are prone to under high-dimensional data, such as local optimality, long detection time, and reduced accuracy. First, the initial population's diversity is increased using a Gaussian Mutation mechanism. Then, a Non-linear Shrinking Factor balances global exploration and local development, avoiding premature convergence. Lastly, a Variable-step Levy Flight operator and a Dynamic Differential Evolution strategy are introduced to improve the algorithm's search efficiency and convergence accuracy in high-dimensional feature space. Experiments on the NSL-KDD and WUSTL-IIoT-2021 datasets demonstrate that the feature subset selected by GSLDWOA significantly improves detection performance. Compared to the traditional WOA algorithm, the detection rate and F1-score increased by 3.68% and 4.12%, respectively. On the WUSTL-IIoT-2021 dataset, accuracy, recall, and F1-score all exceed 99.9%.
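The abstract names the individual improvements rather than specifying them fully; the simplified sketch below isolates two of them, a Gaussian-mutation-diversified initialization and a nonlinear decay of the shrinking factor a, on top of the standard WOA encircling/spiral updates, using a continuous test function as a stand-in fitness. For feature selection, the positions would additionally be binarized (e.g., with a sigmoid threshold) and the fitness replaced by a classifier-based score; the variable-step Levy flight and dynamic differential evolution components are not reproduced here.

```python
import numpy as np

def sphere(x):
    """Stand-in fitness; in an IDS pipeline this would score a feature subset."""
    return float(np.sum(x ** 2))

def woa_nonlinear(dim=10, pop=30, n_iter=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (pop, dim))
    # Gaussian mutation to diversify the initial population
    X = np.clip(X + rng.normal(0.0, 0.1 * (ub - lb), X.shape), lb, ub)
    fit = np.array([sphere(x) for x in X])
    best, best_fit = X[fit.argmin()].copy(), fit.min()
    for t in range(n_iter):
        # nonlinear shrinking factor: decays slowly early (exploration),
        # quickly late (exploitation); 2*(1 - t/T) would be the linear WOA rule
        a = 2.0 * (1.0 - (t / n_iter) ** 2)
        for i in range(pop):
            r1, r2 = rng.random(2)
            A, C = 2 * a * r1 - a, 2 * r2
            if rng.random() < 0.5:
                if abs(A) < 1:                              # encircle the best solution
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                                       # search toward a random whale
                    rand = X[rng.integers(pop)]
                    X[i] = rand - A * np.abs(C * rand - X[i])
            else:                                           # spiral (bubble-net) update
                ell = rng.uniform(-1, 1)
                X[i] = np.abs(best - X[i]) * np.exp(ell) * np.cos(2 * np.pi * ell) + best
            X[i] = np.clip(X[i], lb, ub)
            f = sphere(X[i])
            if f < best_fit:
                best, best_fit = X[i].copy(), f
    return best, best_fit

if __name__ == "__main__":
    _, f = woa_nonlinear()
    print("best fitness:", f)
```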
Impact craters are important for understanding the evolution of lunar geology and surface erosion rates, among other functions. However, the morphological characteristics of small impact craters are not obvious and they are numerous, resulting in low detection accuracy by deep learning models. Therefore, we proposed a new multi-scale fusion crater detection algorithm (MSF-CDA) based on YOLO11 to improve the accuracy of lunar impact crater detection, especially for small craters with a diameter of <1 km. Using images taken by the LROC (Lunar Reconnaissance Orbiter Camera) at the Chang'e-4 (CE-4) landing area, we constructed three separate datasets for craters with diameters of 0-70 m, 70-140 m, and >140 m, and trained three submodels separately on these datasets. Additionally, we designed a slicing-amplifying-slicing strategy to enhance the ability to extract features from small craters. To handle redundant predictions, we proposed a new Non-Maximum Suppression with Area Filtering method to fuse the results for overlapping targets across the multi-scale submodels. Our MSF-CDA method achieved high detection performance, with Precision, Recall, and F1 score values of 0.991, 0.987, and 0.989, respectively, effectively addressing the problems induced by the weak features and sample imbalance of small craters. MSF-CDA can provide strong data support for more in-depth study of the geological evolution of the lunar surface and finer geological age estimations, and the strategy can also be used to detect other small objects with weak features and sample-imbalance problems. We detected approximately 500,000 impact craters in an area of approximately 214 km² around the CE-4 landing area. By statistically analyzing the new data, we updated the distribution function of the number and diameter of impact craters. Finally, we identified the most suitable lighting conditions for detecting impact crater targets by analyzing the effect of different lighting conditions on detection accuracy.
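The fusion step lends itself to a small sketch: candidate boxes from the scale-specific submodels are first filtered by the box-area range each submodel is trusted for, then merged with ordinary IoU-based non-maximum suppression. The box format (x1, y1, x2, y2, score), the thresholds, and the area ranges below are assumptions, not the paper's exact rule.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, format (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    a = (box[2] - box[0]) * (box[3] - box[1])
    b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (a + b - inter + 1e-9)

def nms_with_area_filter(dets_per_scale, area_ranges, iou_thr=0.5):
    """dets_per_scale: list of (N_i, 5) arrays [x1, y1, x2, y2, score], one per submodel.
    area_ranges: matching list of (min_area, max_area) each submodel is trusted for."""
    pool = []
    for dets, (amin, amax) in zip(dets_per_scale, area_ranges):
        if len(dets) == 0:
            continue
        areas = (dets[:, 2] - dets[:, 0]) * (dets[:, 3] - dets[:, 1])
        pool.append(dets[(areas >= amin) & (areas < amax)])        # area filtering
    if not pool:
        return np.empty((0, 5))
    dets = np.vstack(pool)
    dets = dets[dets[:, 4].argsort()[::-1]]                        # sort by score
    keep = []
    while len(dets) > 0:
        best, dets = dets[0], dets[1:]
        keep.append(best)
        if len(dets) > 0:
            dets = dets[iou(best, dets) < iou_thr]                 # suppress overlaps
    return np.array(keep)

if __name__ == "__main__":
    small = np.array([[10, 10, 20, 20, 0.9], [11, 11, 21, 21, 0.8]])
    large = np.array([[10, 10, 60, 60, 0.7]])
    print(nms_with_area_filter([small, large], [(0, 400), (400, 1e9)]))
```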
Essentially, it is important to supply the consumer with reliable and sufficient power, since power quality is measured by the consistency of frequency and power flow between control areas. Thus, automatic generation control (AGC) plays a crucial role in power system operation and control. In this paper, a multi-area (five areas: area 1, area 2, area 3, area 4, and area 5) reheat thermal power system is considered with a proportional-integral-derivative (PID) controller as a supplementary controller. Each area in the investigated power system is equipped with an appropriate governor unit, a turbine with a reheater unit, a generator, and a speed regulator unit. The PID controller parameters are optimized with the nature-inspired firefly algorithm (FFA). The experimental results compare the performance of the proposed system (FFA-PID) with PID controllers optimized by the genetic algorithm (GA-PID) and by particle swarm optimization (PSO-PID) for the same investigated power system. The results confirm the efficiency of employing the integral time absolute error (ITAE) cost function with a one percent step load perturbation (1% SLP) in area 1. The proposed FFA-based system achieved the shortest settling time compared to the GA and PSO algorithms, while attaining good results with respect to peak overshoot/undershoot. In addition, the FFA performance improves with an increased number of iterations, outperforming the controllers based on the other optimization algorithms.
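A compact sketch of the firefly update driving such tuning: each firefly is a candidate (Kp, Ki, Kd) vector, brightness is the inverse of the cost, and dimmer fireflies move toward brighter ones with distance-dependent attractiveness. The cost function here is a quadratic placeholder; in the AGC study it would be the ITAE of the simulated five-area response under the 1% SLP, and the FA constants below are assumptions.

```python
import numpy as np

def itae_placeholder(gains):
    """Placeholder cost; in the AGC study this would simulate the five-area
    system and integrate t*|frequency error| (ITAE)."""
    target = np.array([2.0, 0.5, 1.0])          # assumed "ideal" gains, illustration only
    return float(np.sum((gains - target) ** 2))

def firefly_pid(n=20, n_iter=100, lb=0.0, ub=5.0, seed=0,
                beta0=1.0, gamma=1.0, alpha=0.2):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, 3))             # fireflies = (Kp, Ki, Kd) candidates
    cost = np.array([itae_placeholder(x) for x in X])
    for _ in range(n_iter):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:           # j is brighter -> i moves toward j
                    r2 = np.sum((X[i] - X[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(3) - 0.5)
                    X[i] = np.clip(X[i], lb, ub)
                    cost[i] = itae_placeholder(X[i])
        alpha *= 0.98                           # gradually damp the random walk
    best = cost.argmin()
    return X[best], cost[best]

if __name__ == "__main__":
    gains, c = firefly_pid()
    print("FFA-tuned gains:", np.round(gains, 3), "cost:", round(c, 5))
```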
In this paper, a primal-dual interior-point algorithm with dynamic step size is implemented for linear programming (LP) problems. The algorithms are based on a few kernel functions, including both self-regular functions and non-self-regular ones. The dynamic step size is compared with a fixed step size for the algorithms in the inner iteration of the Newton step. Numerical tests show that the algorithms with dynamic step size are more efficient than those with fixed step size.
Two existing methods for solving a class of fuzzy linear programming (FLP) problems involving symmetric trapezoidal fuzzy numbers, without converting them to crisp linear programming problems, are the fuzzy primal simplex method proposed by Ganesan and Veeramani [1] and the fuzzy dual simplex method proposed by Ebrahimnejad and Nasseri [2]. The former method is not applicable when a primal basic feasible solution is not easily at hand, and the latter method needs an initial dual basic feasible solution. In this paper, we develop a novel approach, namely the primal-dual simplex algorithm, to overcome these shortcomings. A numerical example is given to illustrate the proposed approach.
Neutron computed tomography (NCT) is widely used as a noninvasive measurement technique in nuclear engineering, thermal hydraulics, and cultural heritage. The neutron source intensity of NCT is usually low and the scan time is long, resulting in projection images containing severe noise. To reduce the scanning time and increase the image reconstruction quality, an effective reconstruction algorithm must be selected. In CT image reconstruction, the reconstruction algorithms can be divided into three categories: analytical algorithms, iterative algorithms, and deep learning. Because analytical algorithms require complete projection data, they are not suitable for reconstruction in harsh environments, such as strong radiation, high temperature, and high pressure. Deep learning requires large amounts of data and complex models, which cannot be easily deployed, and it has high computational complexity and poor interpretability. Therefore, this paper proposes the OS-SART-PDTV iterative algorithm for sparse-view NCT three-dimensional reconstruction, which uses the ordered subset simultaneous algebraic reconstruction technique (OS-SART) to reconstruct the image and a first-order primal-dual algorithm to solve the total variation term (PDTV). The novel algorithm was compared with other algorithms (FBP, OS-SART-TV, OS-SART-AwTV, and OS-SART-FGPTV) using simulated experimental data and actual neutron projection experiments. The reconstruction results demonstrate that the proposed algorithm outperforms the FBP, OS-SART-TV, OS-SART-AwTV, and OS-SART-FGPTV algorithms in terms of preserving edge structure, denoising, and suppressing artifacts.
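The structure of such an iteration can be sketched in a toy setting: the OS-SART update is applied subset by subset, and after each pass a few gradient steps on a smoothed total-variation term stand in for the paper's first-order primal-dual TV solve. The random nonnegative matrix below is only a placeholder for a real projection operator, and all step sizes are assumptions.

```python
import numpy as np

def tv_grad(img, eps=1e-6):
    """Gradient of a smoothed isotropic TV term for a 2-D image."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
    div_x = np.diff(gx / mag, axis=1, prepend=(gx / mag)[:, :1])
    div_y = np.diff(gy / mag, axis=0, prepend=(gy / mag)[:1, :])
    return -(div_x + div_y)

def os_sart_tv(A, b, shape, n_subsets=4, n_iter=30, lam=0.5, tv_w=0.002, tv_steps=5):
    m, n = A.shape
    x = np.zeros(n)
    row_sum = A.sum(axis=1) + 1e-12
    subsets = np.array_split(np.arange(m), n_subsets)
    for _ in range(n_iter):
        for s in subsets:                             # OS-SART pass over ordered subsets
            As, bs = A[s], b[s]
            col_sum = As.sum(axis=0) + 1e-12
            resid = (bs - As @ x) / row_sum[s]
            x = np.clip(x + lam * (As.T @ resid) / col_sum, 0, None)
        img = x.reshape(shape)                        # TV regularization (gradient steps)
        for _ in range(tv_steps):
            img = img - tv_w * tv_grad(img)
        x = img.ravel()
    return x.reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    shape = (16, 16)
    truth = np.zeros(shape); truth[4:12, 4:12] = 1.0   # toy phantom
    A = rng.random((300, truth.size))                  # stand-in for a projection operator
    b = A @ truth.ravel() + rng.normal(0, 0.5, 300)    # noisy "projections"
    rec = os_sart_tv(A, b, shape)
    print("reconstruction error:", round(float(np.linalg.norm(rec - truth)), 3))
```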
The distributed nonconvex optimization problem of minimizing a global cost function formed by a sum of n local cost functions by using local information exchange is considered. This problem is an important component of many machine learning techniques with data parallelism, such as deep learning and federated learning. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm, suitable for arbitrarily connected communication networks and any smooth (possibly nonconvex) cost functions. We show that the proposed algorithm achieves the linear speedup convergence rate O(1/√(nT)) for general nonconvex cost functions and the linear speedup convergence rate O(1/(nT)) when the global cost function satisfies the Polyak-Łojasiewicz (P-L) condition, where T is the total number of iterations. We also show that the output of the proposed algorithm with constant parameters linearly converges to a neighborhood of a global optimum. We demonstrate through numerical experiments the efficiency of our algorithm in comparison with the baseline centralized SGD and recently proposed distributed SGD algorithms.
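A toy sketch of the communication pattern assumed here (not the paper's specific primal-dual update or its rate guarantees): each node on a ring holds its own quadratic cost, mixes its iterate with its neighbors through a doubly stochastic gossip matrix, and takes a noisy local gradient step.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring: 1/3 to self and each neighbor."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def decentralized_sgd(n_nodes=8, dim=5, n_iter=300, lr=0.05, noise=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # node i holds the local cost f_i(x) = 0.5*||x - t_i||^2; global optimum = mean of t_i
    targets = rng.normal(0, 1, (n_nodes, dim))
    x = rng.normal(0, 1, (n_nodes, dim))               # one iterate per node
    W = ring_mixing_matrix(n_nodes)
    for _ in range(n_iter):
        x = W @ x                                      # gossip / consensus step
        grad = x - targets                             # exact local gradients
        grad += noise * rng.normal(0, 1, grad.shape)   # stochastic gradient noise
        x = x - lr * grad                              # local SGD step
    x_opt = targets.mean(axis=0)
    return float(np.linalg.norm(x - x_opt, axis=1).mean())

if __name__ == "__main__":
    print("mean distance to global optimum:", round(decentralized_sgd(), 4))
```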
We present an all-digital demodulation system for interferometric fiber optic sensors based on an improved arctangent-differential-self-multiplying (arctan-DSM) algorithm. The total harmonic distortion (THD) and the light intensity disturbance (LID) are suppressed, as in the traditional arctan-DSM algorithm. Moreover, the lowest sampling frequency is reduced by introducing an anti-aliasing filter, so the occupation of system memory is reduced. Simulations show that the improved algorithm can correctly demodulate cosine signals and chirp signals at a lower sampling frequency.
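The arctangent stage common to such schemes is easy to sketch: given in-phase and quadrature interference signals, the wrapped phase is recovered with arctan2 and then unwrapped. The signal model below is an assumption, and the anti-aliasing filter, DSM stage, and THD/LID suppression of the improved algorithm are not reproduced.

```python
import numpy as np

fs = 50_000                                   # assumed sampling frequency (Hz)
t = np.arange(0, 0.02, 1 / fs)
phi = 2.5 * np.sin(2 * np.pi * 200 * t)       # assumed phase signal to recover (rad)

# ideal quadrature interferometer outputs (unit visibility, no LID, for simplicity)
I_sig = np.cos(phi)
Q_sig = np.sin(phi)

phi_hat = np.unwrap(np.arctan2(Q_sig, I_sig))  # arctangent demodulation + unwrapping
err = np.max(np.abs(phi_hat - phi))
print(f"max demodulation error: {err:.2e} rad")
```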
Software testing has been attracting a lot of attention for effective software development. In the model-driven approach, the Unified Modelling Language (UML) is a conceptual modelling approach for obligations and other features of the system; specialized tools interpret these models into other software artifacts such as code, test data, and documentation. The generation of test cases permits the appropriate test data to be determined that have the aptitude to ascertain the requirements. This paper focuses on optimizing the test data obtained from UML activity and state chart diagrams by using a Basic Genetic Algorithm (BGA). For generating the test cases, both diagrams were converted into their corresponding intermediate graphical forms, namely the Activity Diagram Graph (ADG) and the State Chart Diagram Graph (SCDG). The two graphs were then combined to form a single graph called the Activity State Chart Diagram Graph (ASCDG). Next, the ASCDG was optimized using the BGA to generate the test data. A case study involving a withdrawal from the automated teller machine (ATM) of a bank was employed to demonstrate the approach. The approach successfully identified defects in various ATM functions such as messaging and operation.
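A minimal Basic GA in the spirit of this approach is sketched below: candidate test suites for a toy ATM-withdrawal routine are evolved to maximize branch coverage. The ATM logic, encoding, and fitness are simplified placeholders, and the ADG/SCDG graph construction is not modeled.

```python
import random

def atm_withdraw(balance, amount):
    """Toy ATM withdrawal with several branches to cover (placeholder logic)."""
    if amount <= 0:
        return {"non_positive"}
    if amount % 10 != 0:
        return {"not_multiple_of_10"}
    if amount > balance:
        return {"insufficient_funds"}
    if amount > 5000:
        return {"over_limit"}
    return {"dispense"}

ALL_BRANCHES = {"non_positive", "not_multiple_of_10",
                "insufficient_funds", "over_limit", "dispense"}

def fitness(suite):
    """Number of distinct branches covered by a suite of (balance, amount) pairs."""
    covered = set()
    for balance, amount in suite:
        covered |= atm_withdraw(balance, amount)
    return len(covered)

def random_suite(size=4):
    return [(random.randint(0, 10000), random.randint(-100, 10000)) for _ in range(size)]

def basic_ga(pop_size=30, n_gen=50, p_mut=0.2):
    pop = [random_suite() for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(ALL_BRANCHES):
            break
        parents = pop[: pop_size // 2]                     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, len(a) - 1)            # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:                    # mutate one test case
                child[random.randrange(len(child))] = random_suite(1)[0]
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    suite, cov = basic_ga()
    print(f"covered {cov}/{len(ALL_BRANCHES)} branches with suite: {suite}")
```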
In this paper, a primal-dual path-following interior-point algorithm for linearly constrained convex optimization (LCCO) is presented. The algorithm is based on a new technique for finding a class of search directions and on the strategy of the central path. At each iteration, only full-Newton steps are used. Finally, the favorable polynomial complexity bound for the algorithm with the small-update method is derived, namely O(√n log(n/ε)).
In this paper, a new algorithm which integrates the powerful firefly algorithm (FA) and ant colony optimization (ACO) has been used in tracking control of ship steering to optimize the gains of a fractional-order proportional-integral-derivative (FOPID) controller. The particle swarm optimization (PSO) algorithm is also used to optimize FOPID controllers, and their performances are compared. It is found that the FA-optimized FOPID controller gives better performance than the others. A sensitivity analysis has been carried out to examine the robustness of the optimum FOPID gains obtained at nominal conditions to wide changes in system parameters, and the optimum FOPID gains need not be reset for wide changes in system parameters.
The low-density imaging performance of a zone-plate-based nano-resolution hard x-ray computed tomography (CT) system can be significantly improved by incorporating a grating-based Lau interferometer. Due to diffraction, however, the acquired nano-resolution phase signal may suffer from a signal-splitting problem, which impedes the direct reconstruction of phase contrast CT (nPCT) images. To overcome this, a new model-driven nPCT image reconstruction algorithm is developed in this study. In it, the diffraction procedure is mathematically modeled by a matrix B, from which projections without signal splitting can be generated by inversion. Furthermore, a penalized weighted least-squares model with total variation (PWLS-TV) is employed to denoise these projections, from which nPCT images of high accuracy are directly reconstructed. Numerical experiments demonstrate that this new algorithm is able to work with phase projections having any splitting distance. Moreover, the results also reveal that nPCT images of higher signal-to-noise ratio (SNR) can be reconstructed from projections having larger splitting distances. In summary, a novel model-driven nPCT image reconstruction algorithm with high accuracy and robustness is verified for Lau interferometer-based hard x-ray nano-resolution phase contrast imaging.
A primal-dual infeasible interior-point algorithm for multiple objective linear programming (MOLP) problems is presented. In contrast to current MOLP algorithms, the proposed algorithm moves through the interior of the polytope without confining the iterates to the feasible region, resulting in a solution approach that is quite different and less sensitive to problem size, and thus offering the potential to dramatically improve practical computational effectiveness.
We introduce the potential-decomposition strategy (PDS), which can be used in Markov chain Monte Carlo sampling algorithms. PDS can be designed to make particles move in a modified potential that favors diffusion in phase space; then, by rejecting some trial samples, the target distributions can be sampled in an unbiased manner. Furthermore, if the accepted trial samples are insufficient, they can be recycled as initial states to form more unbiased samples. This strategy can greatly improve efficiency when the original potential has multiple metastable states separated by large barriers. We apply PDS to the 2D Ising model and a double-well potential model with a large barrier, demonstrating in these two representative examples that convergence is accelerated by orders of magnitude.
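The two-stage accept/reject mechanics can be illustrated with a minimal one-dimensional sketch (without the authors' sample-recycling step): a trial move is generated by a Metropolis step under a flattened potential U_mod that diffuses easily, and is then accepted against the true potential U with probability min{1, exp(-[(U - U_mod)(y) - (U - U_mod)(x)]/T)}, which keeps the chain unbiased for exp(-U/T) because the first stage is reversible with respect to exp(-U_mod/T). The double-well U, the choice of U_mod, and the temperature below are illustrative.

```python
import numpy as np

def U(x):                        # true potential: a double well with a large barrier
    return 8.0 * (x ** 2 - 1.0) ** 2

def U_mod(x):                    # modified (flattened) potential that diffuses easily
    return 0.5 * x ** 2

def pds_like_sampler(n_steps=100_000, T=1.0, step=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = -1.0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        # stage 1: propose with one Metropolis step under the modified potential
        y = x + step * rng.normal()
        if np.log(rng.random()) < -(U_mod(y) - U_mod(x)) / T:
            # stage 2: correct against the true potential; unbiased because stage 1
            # is reversible with respect to exp(-U_mod/T)
            dW = (U(y) - U_mod(y)) - (U(x) - U_mod(x))
            if np.log(rng.random()) < -dW / T:
                x = y
        samples[k] = x
    return samples

if __name__ == "__main__":
    s = pds_like_sampler()
    print("fraction of samples in the right-hand well:", round(float((s > 0).mean()), 3))
```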
Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. First, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman neural network model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested using the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
Complex network models are frequently employed for simulating and studying diverse real-world complex systems. Among these models, scale-free networks typically exhibit greater fragility to malicious attacks. Consequently, enhancing the robustness of scale-free networks has become a pressing issue. To address this problem, this paper proposes a Multi-Granularity Integration Algorithm (MGIA), which aims to improve the robustness of scale-free networks while keeping the initial degree of each node unchanged, ensuring network connectivity, and avoiding the generation of multiple edges. The algorithm generates a multi-granularity structure from the initial network to be optimized, then uses different optimization strategies to optimize the networks at the various granular layers in this structure, and finally realizes information exchange between the different granular layers, thereby further enhancing the optimization effect. We propose new network refresh, crossover, and mutation operators to ensure that the optimized network satisfies the given constraints. Meanwhile, we propose new network similarity and network dissimilarity evaluation metrics to improve the effectiveness of the optimization operators in the algorithm. In the experiments, the MGIA enhances the robustness of the scale-free network by 67.6%. This improvement is approximately 17.2% higher than the optimization effects achieved by eight currently existing complex network robustness optimization algorithms.
Accurate short-term wind power forecast techniques play a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid short-term wind power forecast approach named STL-IAOA-iTransformer, which is based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL is used to decompose the original data into components with less redundant information. The extracted components, as well as the weather data, are then input into iTransformer for short-term wind power forecasting. The final predicted short-term wind power curve is obtained by combining the predicted components. To improve the model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated using real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate the superiority of the proposed approach under different wind characteristics, real power generation data from southwest China are utilized for experiments. The comparative results with the other six state-of-the-art prediction models show that the proposed model fits the true value of the generation series well and achieves high prediction accuracy.
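The decomposition stage can be sketched with the STL implementation in statsmodels; forecasting each component with an IAOA-tuned iTransformer is beyond a short example, so a naive trend/seasonal extrapolation stands in for it below, and the synthetic "wind power" series is an assumption.

```python
import numpy as np
from statsmodels.tsa.seasonal import STL

# synthetic hourly "wind power" series: daily cycle + slow trend + noise (assumed)
rng = np.random.default_rng(0)
n, period = 24 * 30, 24
t = np.arange(n)
power = 50 + 0.02 * t + 10 * np.sin(2 * np.pi * t / period) + rng.normal(0, 2, n)

# STL decomposition into trend, seasonal, and residual components
res = STL(power, period=period, robust=True).fit()
trend = np.asarray(res.trend)
seasonal = np.asarray(res.seasonal)

# naive component-wise forecast for the next day (stand-in for the iTransformer models)
h = period
trend_fc = trend[-1] + (trend[-1] - trend[-2]) * np.arange(1, h + 1)  # linear trend extrapolation
seasonal_fc = seasonal[-period:]                                      # repeat last seasonal cycle
forecast = trend_fc + seasonal_fc                                     # recombine components

print("last observed value:", round(float(power[-1]), 2))
print("first 5 forecast steps:", np.round(forecast[:5], 2))
```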