In Rayleigh wave exploration, the inversion of dispersion curves is a crucial step for obtaining subsurface stratigraphic information, characterized by its multi-parameter and multi-extremum nature. Local optimization algorithms used in dispersion curve inversion are highly dependent on the initial model and are prone to being trapped in local optima, while classical global optimization algorithms often suffer from slow convergence and low solution accuracy. To address these issues, this study introduces the Osprey Optimization Algorithm (OOA), known for its strong global search and local exploitation capabilities, into the inversion of dispersion curves to enhance inversion performance. In noiseless theoretical models, the OOA demonstrates excellent inversion accuracy and stability, accurately recovering model parameters. Even in noisy models, the OOA maintains robust performance, achieving high inversion precision under high-noise conditions. In multimode dispersion curve tests, the OOA effectively handles higher modes due to its efficient global and local search capabilities, and the inversion results show high consistency with theoretical values. Field data from the Wyoming region in the United States and a landfill site in Italy further verify the practical applicability of the OOA. Comprehensive test results indicate that the OOA outperforms the Particle Swarm Optimization (PSO) algorithm, providing a highly accurate and reliable inversion strategy for dispersion curve inversion.
Funding: China Geological Survey Project (DD20243193 and DD20230206508).
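As an illustration of the optimizer described above, the following is a minimal Python sketch of the two-phase OOA update (a global "fish-hunting" step toward a randomly chosen better solution and a local "fish-carrying" step whose range shrinks with the iteration counter), following the commonly cited description of the algorithm. The forward model and misfit function are hypothetical placeholders; a real dispersion-curve inversion would call a forward dispersion solver instead.

```python
import numpy as np

def ooa_minimize(misfit, lb, ub, n_pop=30, n_iter=200, seed=0):
    """Minimal two-phase Osprey Optimization Algorithm sketch."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    pop = lb + rng.random((n_pop, dim)) * (ub - lb)
    fit = np.array([misfit(x) for x in pop])
    for t in range(1, n_iter + 1):
        for i in range(n_pop):
            # Phase 1 (exploration): move toward a randomly chosen "fish",
            # i.e. a member with a better objective value (or the best one).
            better = np.where(fit < fit[i])[0]
            sf = pop[rng.choice(better)] if better.size else pop[np.argmin(fit)]
            cand = np.clip(pop[i] + rng.random(dim) * (sf - rng.integers(1, 3) * pop[i]), lb, ub)
            f = misfit(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
            # Phase 2 (exploitation): small random step that shrinks as t grows.
            cand = np.clip(pop[i] + (lb + rng.random(dim) * (ub - lb)) / t, lb, ub)
            f = misfit(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f
    j = np.argmin(fit)
    return pop[j], fit[j]

# Placeholder misfit: RMS error between observed and modeled phase velocities
# (a real inversion would run a forward dispersion solver here).
v_obs = np.array([180.0, 210.0, 260.0, 300.0])
forward = lambda m: m[0] + m[1] * np.arange(4)          # hypothetical stand-in
misfit = lambda m: np.sqrt(np.mean((forward(m) - v_obs) ** 2))
best, err = ooa_minimize(misfit, lb=np.array([100.0, 0.0]), ub=np.array([300.0, 60.0]))
```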
Precisely estimating the state of health (SOH) of lithium-ion batteries is essential for battery management systems (BMS), as it plays a key role in ensuring the safe and reliable operation of battery systems. However, current SOH estimation methods often overlook the valuable temperature information that can effectively characterize battery aging during capacity degradation. Additionally, the Elman neural network, which is commonly employed for SOH estimation, exhibits several drawbacks, including slow training speed, a tendency to become trapped in local minima, and the initialization of weights and thresholds with pseudo-random numbers, leading to unstable model performance. To address these issues, this study proposes a method for estimating the SOH of lithium-ion batteries based on differential thermal voltammetry (DTV) and an SSA-Elman neural network. First, two health features (HFs) considering temperature factors and battery voltage are extracted from the differential thermal voltammetry curves and incremental capacity curves. Next, the Sparrow Search Algorithm (SSA) is employed to optimize the initial weights and thresholds of the Elman neural network, forming the SSA-Elman neural network model. To validate the performance, various neural networks, including the proposed SSA-Elman network, are tested on the Oxford battery aging dataset. The experimental results demonstrate that the method developed in this study achieves superior accuracy and robustness, with a mean absolute error (MAE) of less than 0.9% and a root mean square error (RMSE) below 1.4%.
Funding: National Natural Science Foundation of China (NSFC) Grant No. 51677058.
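A simplified sketch of the SSA wrapper follows; the producer/scrounger update rules are condensed from the usual SSA description, and `objective` is a hypothetical placeholder standing in for the validation error of an Elman network initialized with a given weight vector (a real run would briefly train the network for each candidate).

```python
import numpy as np

def ssa_minimize(f, lb, ub, n=30, iters=100, p_ratio=0.2, st=0.8, seed=0):
    """Simplified Sparrow Search Algorithm sketch (producers and scroungers)."""
    rng = np.random.default_rng(seed)
    dim = len(lb)
    x = lb + rng.random((n, dim)) * (ub - lb)
    fit = np.apply_along_axis(f, 1, x)
    n_prod = max(1, int(p_ratio * n))
    for t in range(1, iters + 1):
        order = np.argsort(fit)
        x, fit = x[order], fit[order]
        best, worst = x[0].copy(), x[-1].copy()
        r2 = rng.random()                      # alarm value
        for i in range(n_prod):                # producers explore
            if r2 < st:
                x[i] = x[i] * np.exp(-i / (rng.random() * iters + 1e-12))
            else:
                x[i] = x[i] + rng.normal(size=dim)
        for i in range(n_prod, n):             # scroungers follow producers
            if i > n / 2:
                x[i] = rng.normal(size=dim) * np.exp((worst - x[i]) / i ** 2)
            else:
                x[i] = best + np.abs(x[i] - best) * rng.choice([-1, 1], size=dim)
        x = np.clip(x, lb, ub)
        fit = np.apply_along_axis(f, 1, x)
    j = np.argmin(fit)
    return x[j], fit[j]

# Placeholder objective standing in for "validation error of an Elman network
# initialized with weight vector w".
objective = lambda w: np.sum((w - 0.3) ** 2)
w0, err = ssa_minimize(objective, lb=-np.ones(8), ub=np.ones(8))
```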
Due to the heterogeneity of rock masses and the variability of in situ stress, the traditional linear inversion method is insufficiently accurate to achieve a high-accuracy in situ stress field. To address this challenge, nonlinear stress boundaries for a numerical model are determined through regression analysis of a series of nonlinear coefficient matrices, which are derived from the bubbling method. Considering the randomness and flexibility of the bubbling method, a parametric study is conducted to determine recommended ranges for its parameters, including the standard deviation (σ_b) of bubble radii, the number (λ) of non-uniform coefficient matrices for nonlinear stress boundaries, and the number (m) and positions of in situ stress measurement points. A model case study provides a reference for the selection of these parameters. Additionally, when the nonlinear in situ stress inversion method is employed, stress distortion inevitably occurs near model boundaries, in line with Saint-Venant's principle. Two strategies are proposed accordingly: systematically reducing the nonlinear coefficients to achieve high inversion accuracy while minimizing significant stress distortion, and excluding regions with severe stress distortion near the model edges while using the central part of the model for subsequent simulations. These two strategies have been successfully implemented in the nonlinear in situ stress inversion of the Xincheng Gold Mine and achieve higher inversion accuracy than the linear method. Specifically, the linear and nonlinear inversion methods yield root mean square errors (RMSE) of 4.15 and 3.2, and inversion relative errors (δ_Ave) of 22.08% and 17.55%, respectively. Therefore, the nonlinear inversion method outperforms the traditional multiple linear regression method, even in the presence of a systematic reduction in the nonlinear stress boundaries.
Funding: National Key R&D Program of China (Grant No. 2022YFC2903904); National Natural Science Foundation of China (Grant Nos. 51904057 and U1906208).
To improve the efficiency and accuracy of path planning for fan inspection tasks in thermal power plants, this paper proposes an intelligent inspection robot path planning scheme based on an improved A* algorithm. The inspection robot uses multiple sensors to monitor key parameters of the fans, such as vibration, noise, and bearing temperature, and uploads the data to the monitoring center. The robot's inspection path employs the improved A* algorithm, which incorporates obstacle penalty terms, path reconstruction, and smoothing optimization techniques, thereby achieving optimal path planning for the inspection robot in complex environments. Simulation results demonstrate that the improved A* algorithm significantly outperforms the traditional A* algorithm in terms of total path distance, smoothness, and detour rate, effectively improving the execution efficiency of inspection tasks.
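To make the penalty idea concrete, here is a minimal grid-based A* sketch in which cells adjacent to obstacles incur an extra traversal cost. The penalty weight `w_pen` and the 8-neighborhood rule are illustrative assumptions, and the paper's path reconstruction and smoothing steps are not reproduced.

```python
import heapq
from itertools import count

def astar_with_penalty(grid, start, goal, w_pen=2.0):
    """A* on a 0/1 occupancy grid with an extra cost on cells near obstacles."""
    rows, cols = len(grid), len(grid[0])
    tie = count()  # tiebreaker so the heap never compares node tuples

    def near_obstacle(r, c):
        return any(0 <= r + dr < rows and 0 <= c + dc < cols and grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))

    def h(p):  # Manhattan distance heuristic (admissible: penalty only adds cost)
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), next(tie), 0.0, start, None)]
    came, g_best = {}, {start: 0.0}
    while open_set:
        _, _, g, cur, parent = heapq.heappop(open_set)
        if cur in came:
            continue
        came[cur] = parent
        if cur == goal:  # walk parents back to the start
            path = [cur]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                continue
            ng = g + 1.0 + (w_pen if near_obstacle(*nxt) else 0.0)
            if ng < g_best.get(nxt, float("inf")):
                g_best[nxt] = ng
                heapq.heappush(open_set, (ng + h(nxt), next(tie), ng, nxt, cur))
    return None

grid = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
print(astar_with_penalty(grid, (0, 0), (2, 3)))
```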
The distributed permutation flow shop scheduling problem (DPFSP) has received increasing attention in recent years. The iterated greedy algorithm (IGA) serves as a powerful optimizer for addressing such a problem because of its straightforward, single-solution evolution framework. However, a potential drawback of IGA is its lack of utilization of historical information, which can lead to an imbalance between exploration and exploitation, especially in large-scale DPFSPs. Consequently, this paper develops an IGA with memory and learning mechanisms (MLIGA) to efficiently solve the DPFSP with the objective of minimizing makespan. In MLIGA, we incorporate a memory mechanism to make a more informed selection of the initial solution at each stage of the search by extending, reconstructing, and reinforcing the information from previous solutions. In addition, we design a two-layer cooperative reinforcement learning approach to intelligently determine the key parameters of IGA and the operations of the memory mechanism. Meanwhile, to ensure that the experience generated by each perturbation operator is fully learned and to reduce the prior parameters of MLIGA, a probability curve-based acceptance criterion is proposed by combining a cube root function with custom rules. Finally, a discrete adaptive learning rate is employed to enhance the stability of the memory and learning mechanisms. Complete ablation experiments verify the effectiveness of the memory mechanism, and the results show that this mechanism is capable of improving the performance of IGA to a large extent. Furthermore, through comparative experiments involving MLIGA and five state-of-the-art algorithms on 720 benchmarks, we find that MLIGA demonstrates significant potential for solving large-scale DPFSPs. This indicates that MLIGA is well suited for real-world distributed flow shop scheduling.
Funding: National Key Research and Development Program of China (Grant No. 2021YFF0901300); National Natural Science Foundation of China (Grant Nos. 62173076 and 72271048).
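For orientation, the sketch below shows the plain destruction/reconstruction core that MLIGA builds on, applied to a single-factory permutation flow shop with a makespan objective; the memory mechanism, reinforcement-learning parameter control, and probability-curve acceptance criterion described above are not modeled here.

```python
import random

def makespan(perm, p):
    """Completion time of the last job on the last machine (permutation flow shop)."""
    m = len(p[0])
    c = [0.0] * m
    for j in perm:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def iterated_greedy(p, d=3, iters=500, seed=0):
    """Basic IGA loop: remove d jobs, greedily reinsert each at its best position."""
    rng = random.Random(seed)
    cur = list(range(len(p)))
    best = cur[:]
    for _ in range(iters):
        partial = cur[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for j in removed:  # NEH-style best-position reinsertion
            partial = min((partial[:i] + [j] + partial[i:] for i in range(len(partial) + 1)),
                          key=lambda s: makespan(s, p))
        if makespan(partial, p) <= makespan(cur, p):
            cur = partial
        if makespan(cur, p) < makespan(best, p):
            best = cur[:]
    return best, makespan(best, p)

# 6 jobs x 3 machines with random processing times
gen = random.Random(1)
p = [[gen.randint(1, 9) for _ in range(3)] for _ in range(6)]
print(iterated_greedy(p))
```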
Smallholder farming in West Africa faces various challenges, such as limited access to seeds, fertilizers, modern mechanization, and agricultural climate services. Crop productivity obtained under these conditions varies significantly from one farmer to another, making it challenging to accurately estimate crop production through crop models. This limitation has implications for the reliability of using crop models as agricultural decision-making support tools. To support decision making in agriculture, an approach combining a genetic algorithm (GA) with the crop model AquaCrop is proposed for a location-specific calibration of maize cropping. In this approach, AquaCrop simulates maize crop yield while the GA derives an optimal parameter set at grid-cell resolution from various combinations of cultivar parameters and crop management options. Statistics on pairwise simulated and observed yields indicate that the coefficient of determination varies from 0.20 to 0.65, with a yield deviation ranging from 8% to 36% across Burkina Faso (BF). An analysis of the optimal parameter sets shows that, regardless of the climatic zone, a base temperature of 10˚C and an upper temperature of 32˚C are observed in at least 50% of grid cells. The growing season length and the harvest index vary significantly across BF, with the highest values found in the Soudanian zone and the lowest values in the Sahelian zone. Regarding management strategies, the mean fertility rate is approximately 35%, 39%, and 49% for the Sahelian, Soudano-sahelian, and Soudanian zones, respectively. The mean weed cover is around 36%, with the Sahelian and Soudano-sahelian zones showing the highest variability. The proposed approach can be an alternative to the conventional one-size-fits-all approach commonly used for regional crop modeling. Moreover, it has the potential to explore the performance of cropping strategies to adapt to changing climate conditions.
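A minimal sketch of the per-grid-cell calibration loop follows. Here `simulate_yield` is a hypothetical stand-in for an AquaCrop run (the real model is far richer), and the parameter bounds are illustrative assumptions only.

```python
import random

# Hypothetical stand-in for an AquaCrop run: maps a parameter set
# (base temperature, upper temperature, season length, fertility %) to a yield.
def simulate_yield(params, cell):
    t_base, t_up, season_len, fert = params
    return max(0.0, fert * 0.04 + season_len * 0.01 - abs(t_base - 10) * 0.1
               - abs(t_up - 32) * 0.05 + cell["rain_index"])

def calibrate_cell(cell, observed, bounds, pop=40, gens=60, seed=0):
    """Per-grid-cell GA: minimize |simulated - observed| yield."""
    rng = random.Random(seed)
    def rand_ind():
        return [rng.uniform(lo, hi) for lo, hi in bounds]
    def fitness(ind):
        return -abs(simulate_yield(ind, cell) - observed)
    population = [rand_ind() for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            k = rng.randrange(len(child))                 # single-gene mutation
            lo, hi = bounds[k]
            child[k] = min(hi, max(lo, child[k] + rng.gauss(0, (hi - lo) * 0.05)))
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

bounds = [(8, 14), (28, 36), (70, 140), (10, 80)]  # Tbase, Tupper, season, fertility%
best = calibrate_cell({"rain_index": 1.2}, observed=2.4, bounds=bounds)
```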
Machine learning (ML) techniques have emerged as powerful tools for improving the predictive capabilities of Reynolds-averaged Navier-Stokes (RANS) turbulence models in separated flows. This improvement is achieved by leveraging complex ML models, such as those developed using field inversion and machine learning (FIML), to dynamically adjust the constants within the baseline RANS model. However, the ML models often overlook the fundamental calibrations of the RANS turbulence model. Consequently, the basic calibration of the baseline RANS model is disrupted, leading to a degradation in accuracy, particularly in basic wall-attached flows outside of the training set. To address this issue, a modified version of the Spalart-Allmaras (SA) turbulence model, known as Rubber-band SA (RBSA), has been proposed recently. This modification involves identifying and embedding constraints related to basic wall-attached flows directly into the model. It has been shown that no matter how the parameters of the RBSA model are adjusted as constants throughout the flow field, its accuracy in wall-attached flows remains unaffected. In this paper, we propose a new constraint for the RBSA model that better safeguards the law of the wall in extreme conditions where the model parameter is adjusted dramatically. The resultant model is called the RBSA-poly model. We then show that, when combined with FIML augmentation, the RBSA-poly model effectively preserves the accuracy of simple wall-attached flows, even when the adjusted parameters become functions of local flow variables rather than constants. A comparative analysis with the FIML-augmented original SA model reveals that the augmented RBSA-poly model reduces error in basic wall-attached flows by 50% while maintaining comparable accuracy in trained separated flows. These findings confirm the effectiveness of utilizing FIML in conjunction with the RBSA model, offering superior accuracy retention in cardinal flows.
Funding: National Natural Science Foundation of China (Grant Nos. 12388101, 12372288, U23A2069, and 92152301).
Algorithms are the primary component of Artificial Intelligence (AI): an algorithm is the process in AI that imitates the human mind to solve problems. Currently, the performance of AI is evaluated by scoring AI algorithms with metrics on data sets. However, evaluating algorithms in AI is challenging because the same type of algorithm can be assessed on many data sets and with many evaluation metrics. Different algorithms may have individual strengths and weaknesses in metric scores on separate data sets, which undermines the credibility and validity of the evaluation. Moreover, evaluating algorithms requires repeated experiments on different data sets, diverting researchers' attention from the algorithms themselves. Crucially, this approach of comparing metric scores does not take into account an algorithm's ability to solve problems. Classical algorithm evaluation by time and space complexity is also unsuitable for AI algorithms, because a classical algorithm's input is unbounded numbers, whereas an AI algorithm's input is a data set, which is limited and multifarious. Given that current AI algorithm evaluation does not reflect problem-solving capability, this paper summarizes the features of AI algorithm evaluation and proposes an AI evaluation method that incorporates the problem-solving capabilities of algorithms.
Funding: General Program of the National Natural Science Foundation of China (Grant No. 62277022).
The chirp sub-bottom profiler, for its high resolution, easy accessibility, and cost-effectiveness, has been widely used in acoustic detection. In this paper, the acoustic impedance and grain size compositions were obtained from chirp sub-bottom profiler data collected in the Chukchi Plateau area during the 11th Arctic Expedition of China. The time-domain adaptive search matching algorithm was used and validated on our established theoretical model; the misfit between the inversion result and the theoretical model is less than 0.067%. The grain size was calculated according to the empirical relationship between the acoustic impedance and the grain size of the sediment. The average acoustic impedance of the sub-seafloor strata is 2.5026×10^(6) kg·(s·m^(2))^(-1) and the average grain size (θ value) of the seafloor surface sediment is 7.1498, indicating the predominant occurrence of very fine silt sediment in the study area. Comparison of the inversion results and the laboratory measurements of nearby borehole samples shows that they are in general agreement.
Funding: National Key R&D Program of China (No. 2021YFC2801202); National Natural Science Foundation of China (No. 42076224); Fundamental Research Funds for the Central Universities (No. 202262012).
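As a toy version of the inversion step, the sketch below grid-searches for the sub-bottom impedance whose normal-incidence reflection coefficient, R = (Z2 − Z1)/(Z2 + Z1), best matches an observed reflection amplitude. The water impedance, search range, and observed value are assumptions, and no impedance-grain-size regression coefficients are given because they are dataset-specific.

```python
import numpy as np

RHO_C_WATER = 1.5e6  # approximate acoustic impedance of seawater, kg/(s*m^2)

def reflection_coeff(z1, z2):
    """Normal-incidence reflection coefficient at an interface."""
    return (z2 - z1) / (z2 + z1)

def invert_impedance(r_obs, z1=RHO_C_WATER, z_grid=None):
    """Grid-search stand-in for the time-domain adaptive search matching step:
    find the sub-bottom impedance whose predicted reflection best fits r_obs."""
    if z_grid is None:
        z_grid = np.linspace(1.6e6, 4.0e6, 2000)
    misfit = np.abs(reflection_coeff(z1, z_grid) - r_obs)
    return z_grid[np.argmin(misfit)]

z_est = invert_impedance(r_obs=0.25)
print(f"estimated impedance: {z_est:.4e} kg/(s*m^2)")
# A grain-size (theta) estimate would then follow from an empirical
# impedance-grain-size regression fitted to core measurements.
```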
With the intensification of climate change, frequent short-duration heavy rainfall events exert significant impacts on human society and the natural environment. Traditional rainfall recognition methods show limitations, including poor timeliness, inadequate handling of imbalanced data, and low accuracy when dealing with these events. This paper proposes a method based on the CD-Pix2Pix model for inverting short-duration heavy rainfall events, aiming to improve the accuracy of inversion. The method integrates the attention mechanism network CSM-Net and the DropBlock module with a Bayesian-optimized loss function to improve imbalanced data processing and enhance overall performance. The study utilizes multi-source heterogeneous data, including radar composite reflectivity, FY-4B satellite data, and rainfall observations from ground automatic stations, with China Meteorological Administration Land Data Assimilation System (CLDAS) data as the target labels for the inversion task. Experimental results show that the enhanced method outperforms conventional rainfall inversion methods across multiple evaluation metrics, demonstrating particularly strong performance in Threat Score (TS, 0.495), Probability of Detection (POD, 0.857), and False Alarm Ratio (FAR, 0.143).
Funding: Key Project of the NSFC Joint Fund (U20B2061); Innovation Development Special Project (CXFZ2024J001, CXFZ2023J013); Key Open Fund of the Laboratory of Hydrometeorology, China Meteorological Administration (23SWQXZ001); Open Research Fund of Anyang National Climate Observatory (AYNCOF202401); Postgraduate Research & Practice Innovation Program of Jiangsu Province (SJCX24_0478); Zhejiang Provincial Natural Science Foundation Project (LZJMD25D050002).
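The three verification scores reported above are standard contingency-table quantities; a small reference implementation (with an assumed rain/no-rain threshold) is:

```python
import numpy as np

def verification_scores(pred, obs, threshold=1.0):
    """Contingency-table scores for rainfall exceeding `threshold` (mm/h)."""
    p, o = pred >= threshold, obs >= threshold
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    ts = hits / (hits + misses + false_alarms)      # Threat Score (CSI)
    pod = hits / (hits + misses)                    # Probability of Detection
    far = false_alarms / (hits + false_alarms)      # False Alarm Ratio
    return ts, pod, far

pred = np.array([2.1, 0.0, 3.5, 0.2, 1.8])
obs = np.array([1.5, 0.0, 0.4, 0.1, 2.2])
print(verification_scores(pred, obs))
```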
Metaheuristic algorithms are pivotal in cloud task scheduling. However, the complexity and uncertainty of the scheduling problem severely limit these algorithms, and numerous improved algorithms have been proposed to circumvent this. The Hiking Optimization Algorithm (HOA) has been used in multiple fields, but it suffers from local optima, slow convergence, and low search efficiency in late iterations when solving cloud task scheduling problems. Thus, this paper proposes an improved HOA, called CMOHOA, that combines multiple strategies. Specifically, Chebyshev chaos is introduced to increase population diversity; a hybrid speed update strategy is designed to enhance convergence speed; and an adversarial learning strategy is introduced to enhance the search capability in late iterations. Different scheduling scenarios are used to test CMOHOA's performance. First, CMOHOA was used to solve basic cloud computing task scheduling problems, and the results show that it reduces the average total cost by 10% or more. Second, CMOHOA was applied to edge-fog-cloud scheduling problems, and the results show that it reduces the average total scheduling cost by 2% or more. Finally, CMOHOA reduced the average total cost by 7% or more in scheduling problems for information transmission.
Funding: National Natural Science Foundation of China (52275480); Guizhou Provincial Science and Technology Program of Qiankehe Zhongdi Guiding ([2023]02); Guizhou Provincial Science and Technology Program of Qiankehe Platform Talent Project (GCC[2023]001); Guizhou Provincial Science and Technology Project of Qiankehe Platform Project (KXJZ[2024]002).
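The Chebyshev chaotic map used for population initialization can be sketched directly; the map order and bounds below are illustrative assumptions:

```python
import numpy as np

def chebyshev_chaos_init(n_pop, dim, lb, ub, order=4, seed=0):
    """Population initialization with the Chebyshev chaotic map
    x_{k+1} = cos(order * arccos(x_k)), mapped from [-1, 1] to [lb, ub]."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=dim)     # random chaotic seed per dimension
    pop = np.empty((n_pop, dim))
    for i in range(n_pop):
        x = np.cos(order * np.arccos(x))     # one chaotic iteration per individual
        pop[i] = lb + (x + 1.0) / 2.0 * (ub - lb)
    return pop

pop = chebyshev_chaos_init(5, 3, lb=np.zeros(3), ub=np.ones(3))
```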
The original intention of the algorithmic recommender system is to grapple with the negative impacts caused by information overload, but the system can also be used as a "hypernudge", a new form of online manipulation, to intentionally exploit people's cognitive and decision-making gaps to influence their decisions in practice, which is particularly detrimental to the sustainable development of the digital market. Limiting harmful algorithmic online manipulation in digital markets has become a challenging task. Globally, both the EU and China have responded to this issue, and the differences between them are so evident that their governance measures can serve as typical cases. The EU focuses on improving citizens' digital literacy and their ability to integrate into digital social life to independently address this issue, and expects to address harmful manipulation behavior through binding and applicable hard law, which is part of its digital strategy. By comparison, although certain legal norms have made relevant stipulations on manipulation issues, China continues to issue specific departmental regulations to regulate algorithmic recommender services, and pays more attention to addressing the collective harm caused by algorithmic online manipulation through a multiple co-governance approach, led by the government or industry associations, to implement supervision.
This paper examines the impact of algorithmic recommendations and data-driven marketing on consumer engagement and business performance. By leveraging large volumes of user data, businesses can deliver personalized content that enhances user experiences and increases conversion rates. However, the growing reliance on these technologies introduces significant risks, including privacy violations, algorithmic bias, and ethical concerns. This paper explores these challenges and provides recommendations for businesses to mitigate the associated risks while optimizing marketing strategies. It highlights the importance of transparency, fairness, and user control in ensuring responsible and effective data-driven marketing.
Industrial linear accelerators often contain many bunches when their pulse widths are extended to microseconds. As they typically operate at low electron energies and high currents, the interactions among bunches cannot be neglected. In this study, an algorithm is introduced for calculating the space charge force of a train with infinitely many bunches. By utilizing the ring charge model and the particle-in-cell (PIC) method, and combining analytical and numerical methods, the proposed algorithm efficiently calculates the space charge force of infinite bunches, enabling the accurate design of accelerator parameters and a comprehensive understanding of the space charge force. This is a significant improvement on existing simulation software such as ASTRA and PARMELA, which can only handle a single bunch or a small number of bunches. The PIC algorithm is validated in long drift space transport by comparing it with existing models, such as the infinite-bunch, ASTRA single-bunch, and PARMELA several-bunch algorithms. The space charge force calculation results for the external acceleration field are also verified. The reliability of the proposed algorithm provides a foundation for the design and optimization of industrial accelerators.
Funding: National Key Research and Development Program (No. 2022YFC2402300).
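For intuition about the ring-charge building block, the sketch below sums the exact on-axis field of a uniformly charged ring, E_z = qz/(4πε₀(z²+R²)^(3/2)), over a long (here truncated, rather than analytically infinite) periodic bunch train. Bunch charge, radius, and spacing are illustrative, and the paper's PIC treatment with many rings per bunch is not reproduced.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def ring_Ez(q, R, z):
    """On-axis longitudinal field of a uniformly charged ring (charge q, radius R)."""
    return q * z / (4 * np.pi * EPS0 * (z**2 + R**2) ** 1.5)

def train_Ez(q, R, z, spacing, n_side=2000):
    """Field at axial position z from a truncated periodic train of single-ring
    bunches; the paper's algorithm instead handles the infinite train
    analytically and represents each bunch with many rings (PIC)."""
    ks = np.arange(-n_side, n_side + 1)
    return np.sum(ring_Ez(q, R, z - ks * spacing))

# 1 pC rings of 5 mm radius spaced 10 cm apart, observed 1 cm from a bunch center
print(train_Ez(q=1e-12, R=5e-3, z=0.01, spacing=0.1))
```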
Thinning of antenna arrays has been a popular topic for the last several decades. With increasing computational power, this optimization task has acquired a new hue. This paper suggests a genetic algorithm as an instrument for antenna array thinning. The algorithm, with a deliberately chosen fitness function, allows synthesizing thinned linear antenna arrays with a low peak sidelobe level (SLL) while maintaining the half-power beamwidth (HPBW) of a full linear antenna array. Based on results from existing papers in the field and known approaches to antenna array thinning, a classification of thinning types is introduced. The optimal thinning type for a linear thinned antenna array is determined on the basis of the maximum attainable SLL. The effect of the thinning coefficient on the main directional pattern characteristics, such as peak SLL and HPBW, is discussed for a number of amplitude distributions.
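A minimal sketch of the quantities such a GA would evaluate — the array factor of a 0/1-thinned linear array and its peak SLL outside an assumed main-beam region — is given below; element count, spacing, and the main-lobe half-width are illustrative assumptions.

```python
import numpy as np

def array_factor_db(elements, d_over_lambda=0.5, n_angles=4001):
    """Normalized array factor (dB) of a linear array; `elements` holds 0/1
    thinning flags or amplitude weights."""
    n = np.arange(len(elements))
    u = np.linspace(-1.0, 1.0, n_angles)  # u = sin(theta)
    af = np.abs(np.exp(2j * np.pi * d_over_lambda * np.outer(u, n)) @ elements)
    af /= af.max()
    return u, 20 * np.log10(np.maximum(af, 1e-12))

def peak_sll_db(u, af_db, main_lobe_halfwidth=0.1):
    """Peak sidelobe level: highest lobe outside the main-beam region around u=0.
    (A GA fitness would combine this with an HPBW constraint.)"""
    return af_db[np.abs(u) > main_lobe_halfwidth].max()

rng = np.random.default_rng(0)
thinning = (rng.random(32) < 0.7).astype(float)  # roughly 70% of elements kept
u, af_db = array_factor_db(thinning)
print(f"peak SLL: {peak_sll_db(u, af_db):.1f} dB")
```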
Project construction and development are an important part of future army designs. In today's world, intelligent warfare and joint operations have become the dominant developments in warfare, so the construction and development of the army need top-down, top-level design and comprehensive planning. The traditional project development model is no longer sufficient to meet the army's complex capability requirements. Projects in various fields need to be developed and coordinated to form a joint force and improve the army's combat effectiveness. At the same time, when a program consists of large-scale project data, the effectiveness of the traditional, precise mathematical planning method is greatly reduced because it is time-consuming, costly, and impractical. To solve the above problems, this paper proposes a multi-stage program optimization model based on a heterogeneous network and a hybrid genetic algorithm, and verifies the effectiveness and feasibility of the model and algorithm through an example. The results show that the hybrid algorithm proposed in this paper outperforms existing meta-heuristic algorithms.
Funding: National Natural Science Foundation of China (724701189072431011).
Cluster-based models have numerous application scenarios in vehicular ad-hoc networks (VANETs) and can greatly help improve the communication performance of VANETs. However, the frequent movement of vehicles can often lead to changes in the network topology, thereby reducing cluster stability in urban scenarios. To address this issue, we propose a clustering model based on the density peak clustering (DPC) method and the sparrow search algorithm (SSA), named SDPC. First, the model constructs a fitness function based on the parameters obtained from the DPC method and deploys the SSA for iterative optimization to select cluster heads (CHs). Then, the vehicles that have not been selected as CHs are assigned to appropriate clusters by comprehensively considering a distance parameter and a link-reliability parameter. Finally, cluster maintenance strategies are considered to handle changes in the clusters' organizational structure. To verify the performance of the model, we conducted a simulation on a real-world scenario for multiple metrics related to cluster stability. The results show that, compared with APROVE and GAPC, SDPC shows clear performance advantages, indicating that SDPC can effectively ensure VANETs' cluster stability in urban scenarios.
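The DPC quantities that feed the fitness function can be sketched compactly: the local density ρ_i (cut-off kernel) and δ_i, the distance to the nearest point of higher density, following Rodriguez and Laio's formulation. How SDPC combines them with the SSA is not reproduced here; the ρ·δ ranking below is only a common heuristic for cluster-head candidates.

```python
import numpy as np

def dpc_scores(X, d_c):
    """Density-peak clustering quantities: local density rho_i (number of
    neighbors within cut-off d_c) and delta_i (distance to the nearest point
    of higher density; for the global density peak, the maximum distance)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    rho = np.sum(d < d_c, axis=1) - 1           # exclude self
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i, higher].min() if higher.size else d[i].max()
    return rho, delta

X = np.random.default_rng(0).random((50, 2))    # toy vehicle positions
rho, delta = dpc_scores(X, d_c=0.15)
ch_candidates = np.argsort(rho * delta)[-5:]    # top-5 cluster-head candidates
```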
To solve the Poisson equation, one usually discretizes it into a corresponding linear system Ax=b. Variational quantum algorithms (VQAs) for the discretized Poisson equation have been studied before. We present a VQA based on banded Toeplitz systems for solving the Poisson equation, exploiting the structural features of the matrix A. In detail, we decompose the matrices A and A^(2) into a linear combination of the corresponding banded Toeplitz matrix and sparse matrices with only a few non-zero elements. For the one-dimensional Poisson equation with different boundary conditions and the d-dimensional Poisson equation with Dirichlet boundary conditions, the number of decomposition terms is less than that reported in [Phys. Rev. A 108, 032418 (2023)]. Based on the decomposition of the matrix, we design quantum circuits that efficiently evaluate the cost function. Additionally, numerical simulation verifies the feasibility of the proposed algorithm. Finally, VQAs for linear systems of equations and matrix-vector multiplications with the K-banded Toeplitz matrix T_(n)^(K) are given, where T_(n)^(K) ∈ R^(n×n) and K ∈ O(polylog n).
Funding: Shandong Provincial Natural Science Foundation for Quantum Science (Grant No. ZR2021LLZ002); Fundamental Research Funds for the Central Universities (Grant No. 22CX03005A).
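The structure being exploited can be illustrated for the 1D Dirichlet case: the discretized operator A = tridiag(−1, 2, −1) is itself a banded Toeplitz matrix, and A^(2) splits into a pentadiagonal Toeplitz matrix plus a two-element sparse correction. (The paper's decompositions for other boundary conditions and dimensions differ; this sketch only checks the 1D Dirichlet identity numerically.)

```python
import numpy as np
from scipy.linalg import toeplitz

n = 8
# 1D Poisson with Dirichlet boundaries: A is already a banded Toeplitz matrix.
col = np.zeros(n); col[:2] = [2.0, -1.0]
A = toeplitz(col)

# A^2 is a pentadiagonal Toeplitz matrix plus a sparse two-element correction
# at the (0,0) and (n-1,n-1) corners.
col2 = np.zeros(n); col2[:3] = [6.0, -4.0, 1.0]
T2 = toeplitz(col2)
S = np.zeros((n, n)); S[0, 0] = S[-1, -1] = -1.0
assert np.allclose(A @ A, T2 + S)
```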
Among the four candidate algorithms in the fourth round of NIST standardization, the BIKE (Bit Flipping Key Encapsulation) scheme has a small key size and high efficiency, showing good prospects for application. However, the BIKE scheme, based on QC-MDPC (Quasi-Cyclic Medium-Density Parity-Check) codes, still faces challenges such as the GJS attack and weak-key attacks targeting the decoding failure rate (DFR). This paper analyzes the BGF decoding algorithm of the BIKE scheme, revealing two underlying factors that lead to decoding failures, and proposes a weak-key optimization attack method for the BGF decoding algorithm based on these two factors. The proposed method constructs a new weak-key set, and experimental results indicate that, for BIKE's parameter set targeting 128-bit security, the average decryption failure rate is lower-bounded by. This result not only highlights a significant vulnerability in the BIKE scheme but also provides valuable insights for future improvements in its design. By addressing these weaknesses, the robustness of QC-MDPC code-based cryptographic systems can be enhanced, paving the way for more secure post-quantum cryptographic solutions.
Funding: Beijing Institute of Electronic Science and Technology Postgraduate Excellence Demonstration Course Project (20230002Z0452).
The complex plate collision process led the South Yellow Sea Basin (SYSB) to undergo an intense tectonic inversion during the Early Cenozoic, leading to the development of a regional unconformity surface. As a petroliferous basin, the SYSB experienced intense denudation and deposition processes, making its source-to-sink (S2S) system hard to characterize, and this study provides a new way to reveal it quantitatively. According to the seismic interpretation, two types of tectonic inversion led to the strata shortening process, classified by their difference in planar movement: dip-slip faults and strike-slip faults. For the dip-slip faults, the inversion structure was primarily formed by dip-slip movement, and many fault-related folds developed, mainly in the North Depression Zone of the SYSB. The strike-slip faults, accompanied by some negative flower structures, dominate the South Depression Zone of the SYSB. To reveal the S2S system in this tectonic inversion basin, we rebuilt the provenance area with detrital zircon U-Pb data and heavy mineral assemblages. The results show that, during the Eocene (tectonic inversion stage), proximal slump or fan deltas from the Central Uplift Zone developed prominently in the North Depression Zone, while the South Depression Zone was filled by sediments from the proximal area (the Central Uplift Zone in the SYSB and the Wunansha Uplift) and by a prograding delta whose long axis parallels the boundary faults. Then, calculations were conducted on the coarse sediment content, fault displacements, catchment relief, and sediment migration distance, and the impact factors of the S2S system developed under various strata shortening patterns were discussed with a statistical method. It was found that, within the zone dominated by dip-slip faults, the volume of the sediment routing system and the ratio of coarse-grained sediments are related only to the amount of sediment supply and the average fault break displacement. Compared with the zone dominated by strike-slip faults, the source-to-sink system shows a lower level of sandy sediment influx, and its coarse-grained content is mainly determined by the average fault break displacement.
Funding: National Natural Science Foundation of China Youth Science Fund (No. 42402150); Major State Science and Technology Research Program (No. 2016ZX05024002-002); China Scholarship Council (CSC).