The excavation of deep tunnels crossing faults is highly prone to triggering rockburst disasters, which has become a significant engineering issue. In this study, taking the fault-slip rockbursts from a deep tunnel in southwestern China as the engineering prototype, large-scale three-dimensional (3D) physical model tests were conducted on a 3D-printed complex geological model containing two faults. Based on the self-developed 3D loading system and excavation device, the macroscopic failure of fault-slip rockbursts was simulated indoors. The stress, strain, and fracturing characteristics of the surrounding rock near the two faults were systematically evaluated during excavation and multistage loading. The test results effectively revealed the evolution and triggering mechanism of fault-slip rockbursts. After the excavation of a high-stress tunnel, stress readjustment occurred. Owing to the presence of the two faults, stress continued to accumulate in the rock mass between them, leading to the accumulation of fractures. When the shear stress on a fault surface exceeded its shear strength, sudden fault slip and dislocation occurred, thus triggering rockbursts. Rockbursts occurred twice in the vault between the two faults, showing obvious intermittent characteristics. The rockburst pit was controlled by the two faults. When the faults remained stable, tensile failure predominated in the surrounding rock. However, when fault slip was triggered, shear failure in the surrounding rock increased. These findings provide valuable insights for enhancing the comprehension of fault-slip rockbursts.
This study investigates optimization methods for constellation launch deployment strategies under mission interval time constraints at the launch site. First, a dynamic model of the constellation deployment process was established, and the relationships between the deployment window, the phase difference of the orbit insertion point, and the cost of phase adjustment after orbit insertion were derived. Then, the combination of the constellation deployment position sequence, together with the sequence of satellite deployment intervals, was treated as the set of optimization variables, simplifying a high-dimensional search problem over a wide range of dates to a finite-dimensional integer programming problem. An improved genetic algorithm with local search on deployment dates was introduced to optimize the launch deployment strategy. With the new description of the optimization variables, the total number of elements in the solution space was reduced by N orders of magnitude. Numerical simulation confirms that the proposed optimization method accelerates convergence from hours to minutes.
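As a rough sketch of the kind of integer-programming genetic algorithm with date local search that the abstract describes, the following toy implementation evolves an integer sequence of launch intervals. Everything problem-specific here is an assumption: the quadratic stand-in cost function, the six-satellite size, and the interval bounds do not come from the paper's orbital dynamics model.

```python
import random

def deployment_cost(intervals):
    """Toy stand-in for the phase-adjustment cost model: penalize deviation
    from a nominal 7-day spacing. The real cost would come from the paper's
    constellation dynamics model, which is not reproduced here."""
    return sum((d - 7) ** 2 for d in intervals)

def local_search(ind, lo, hi):
    """Shift each deployment interval by +/- 1 day, keeping improvements."""
    best = list(ind)
    for i in range(len(best)):
        for step in (-1, 1):
            cand = list(best)
            cand[i] = min(hi, max(lo, cand[i] + step))
            if deployment_cost(cand) < deployment_cost(best):
                best = cand
    return best

def optimize(n_sats=6, lo=1, hi=30, pop_size=40, gens=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(lo, hi) for _ in range(n_sats)] for _ in range(pop_size)]
    best = min(pop, key=deployment_cost)
    for _ in range(gens):
        nxt = [best]  # elitism: keep the incumbent
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=deployment_cost)  # tournament selection
            b = min(rng.sample(pop, 3), key=deployment_cost)
            cut = rng.randint(1, n_sats - 1)
            child = a[:cut] + b[cut:]                          # one-point crossover
            if rng.random() < 0.3:                             # integer mutation
                child[rng.randrange(n_sats)] = rng.randint(lo, hi)
            nxt.append(local_search(child, lo, hi))            # date local search
        pop = nxt
        best = min(pop, key=deployment_cost)
    return best

best = optimize()
```

Encoding interval sequences as bounded integers is what collapses the wide date-range search into a finite-dimensional integer program, as the abstract notes.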
Parameter extraction of photovoltaic (PV) models is crucial for the planning, optimization, and control of PV systems. Although some methods using meta-heuristic algorithms have been proposed to determine these parameters, the robustness of the solutions they obtain faces great challenges as the complexity of the PV model increases. Unstable results will affect the reliable operation and maintenance strategies of PV systems. In response to this challenge, an improved rime optimization algorithm with enhanced exploration and exploitation, termed TERIME, is proposed for robust and accurate parameter identification of various PV models. Specifically, the differential evolution mutation operator is integrated into the exploration phase to enhance population diversity. Meanwhile, a new exploitation strategy incorporating randomization and neighborhood strategies simultaneously is developed to balance exploitation width and depth. The TERIME algorithm is applied to estimate the optimal parameters of the single diode model, double diode model, and triple diode model combined with the Lambert-W function for three PV cell and module types: RTC France, Photo Watt-PWP 201, and S75. According to statistical analysis over 100 runs, the proposed algorithm achieves more accurate and robust parameter estimates than other techniques for various PV models under varying environmental conditions. All source code is publicly available at https://github.com/dirge1/TERIME.
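The single diode model mentioned above is implicit in the current, I = Iph − I0·(exp((V + I·Rs)/a) − 1) − (V + I·Rs)/Rsh, but the Lambert-W function gives a closed-form solution for I, which is why the abstract pairs the diode models with it. The sketch below evaluates that closed form with a small Newton-based W implementation; the numeric parameters are only illustrative values in the neighborhood of published RTC France fits, not the paper's TERIME estimates.

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch of the Lambert W function for x >= 0 (Newton's method)."""
    w = math.log(1.0 + x)  # reasonable initial guess on [0, inf)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - x) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def single_diode_current(V, Iph, I0, Rs, Rsh, a):
    """Terminal current of the single diode model, solved explicitly for I
    via the Lambert W function (a = n*Vt is the modified ideality factor)."""
    theta = (Rs * I0 * Rsh) / (a * (Rs + Rsh)) * math.exp(
        Rsh * (Rs * (Iph + I0) + V) / (a * (Rs + Rsh)))
    return (Rsh * (Iph + I0) - V) / (Rs + Rsh) - (a / Rs) * lambert_w(theta)

# Illustrative (assumed) parameters roughly matching literature RTC France fits:
# photocurrent, saturation current, series/shunt resistance, a = n*Vt.
params = dict(Iph=0.7608, I0=3.23e-7, Rs=0.0364, Rsh=53.76, a=1.4817 * 0.02585)
i_sc = single_diode_current(0.0, **params)   # roughly the short-circuit current
```

A parameter-extraction algorithm such as TERIME would wrap this evaluation in an objective (e.g., RMSE against measured I-V pairs) and search the five-dimensional parameter space.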
The estimation of the probability of informed trading (PIN) model and its extensions poses significant challenges owing to various computational problems. To address these issues, we propose a novel estimation method, the expectation-conditional-maximization (ECM) algorithm, which can serve as an alternative to existing methods for estimating PIN models. Our method provides optimal estimates for the original PIN model as well as two of its extensions: the multilayer PIN model and the adjusted PIN model, along with its restricted versions. Our results indicate that estimation using the ECM algorithm is generally faster, more accurate, and more memory-efficient than the standard methods used in the literature, making it a robust alternative. More importantly, the ECM algorithm is not limited to the models discussed and can easily be adapted to estimate future extensions of the PIN model.
When dealing with expensive multiobjective optimization problems, the majority of existing surrogate-assisted evolutionary algorithms (SAEAs) generate solutions in the decision space and screen candidate solutions mostly by using designed surrogate models. The generated solutions exhibit excessive randomness, which tends to reduce the likelihood of generating good-quality solutions and leads to a long evolution toward the optima. To improve SAEAs, this work proposes an evolutionary algorithm based on surrogate and inverse surrogate models that 1) employs a surrogate model in lieu of expensive (true) function evaluations; and 2) proposes and uses an inverse surrogate model to generate new solutions. By using the same training data but with inputs and outputs reversed, the latter is simple to train. It is then used to generate new vectors in the objective space, which are mapped into the decision space to obtain their corresponding solutions. Using a particular example, this work shows its advantages over existing SAEAs. Comparisons with state-of-the-art algorithms on expensive optimization problems show that it is highly competitive in both solution performance and efficiency.
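The reversed-training-data idea above can be sketched in one dimension: fit a forward surrogate (decision → objective) and an inverse surrogate (objective → decision) on the same evaluated points, then aim at an objective value better than any seen and map it back to a candidate design. The expensive function, the linear surrogate form, and the target-setting rule below are all illustrative assumptions, not the paper's actual components.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return lambda v: a + b * v

def expensive(x):          # stand-in for the true expensive objective
    return 2.0 * x + 1.0   # assumed; its minimum on [0, 5] is at x = 0

xs = [1.0, 2.0, 3.0, 4.0]          # designs evaluated so far
ys = [expensive(x) for x in xs]    # their (expensive) objective values

forward = fit_line(xs, ys)   # surrogate: decision space -> objective space
inverse = fit_line(ys, xs)   # same data with inputs and outputs reversed

# Aim below the best objective seen, then map back into decision space.
target_y = min(ys) - 0.5
candidate_x = inverse(target_y)
```

Because the candidate is generated from a target objective vector rather than by random variation in decision space, it is biased toward the promising region, which is the mechanism the abstract credits for reducing randomness.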
Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks across accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing the development of scalable, explainable, and operationally viable LLM-based DGA detection systems.
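To make concrete the "statistical heuristics" baseline that the survey contrasts with LLM detectors, here is a classic character-entropy feature: algorithmically generated names tend to have higher Shannon entropy than human-chosen ones. The threshold value is an illustrative assumption; real pipelines combine many features, and dictionary-based DGAs defeat this heuristic entirely, which is part of the motivation for LLMs.

```python
import math
from collections import Counter

def char_entropy(domain):
    """Shannon entropy (bits per character) of a domain label — a classic
    statistical-heuristic DGA feature."""
    counts = Counter(domain)
    n = len(domain)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_generated(domain, threshold=3.0):
    """Illustrative threshold only; production systems would combine n-gram
    scores, length, TLD reputation, and lexical dictionary checks."""
    return char_entropy(domain) > threshold
```

For example, `char_entropy("google")` is well under 2 bits/char, while a random-looking label of ten distinct characters scores log2(10) ≈ 3.32 bits/char.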
In this paper, a mathematical model consisting of forward and backward models is built on parallel genetic algorithms (PGAs) for fault diagnosis in a transmission power system. A new method to reduce the scale of fault sections is developed in the forward model, and the message passing interface (MPI) approach is chosen to parallelize the genetic algorithms using the global single-population master-slave method (GPGAs). The proposed approach is applied to a sample system consisting of 28 sections, 84 protective relays, and 40 circuit breakers. Simulation results show that the new model based on GPGAs can achieve very fast computation in online applications for large-scale power systems.
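In the global single-population master-slave scheme described above, the master runs the GA loop while slaves evaluate fitness in parallel. The sketch below substitutes a thread pool for MPI so it stays self-contained, and replaces the paper's 28-section relay/breaker model with a trivial toy rule ("section i faulted → alarm i raised"); both substitutions are assumptions for illustration only.

```python
import random
from concurrent.futures import ThreadPoolExecutor

# Toy diagnosis setting (assumed, far smaller than the paper's 28-section system):
# a candidate is a bit vector of faulted sections, and the observed alarm pattern
# follows the trivial rule "section i faulted -> alarm i raised".
TRUE_FAULTS = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 0]
OBSERVED = list(TRUE_FAULTS)

def fitness(candidate):
    """Negative Hamming distance between expected and observed alarms."""
    return -sum(c != o for c, o in zip(candidate, OBSERVED))

def gpga(pop_size=40, gens=60, seed=7):
    rng = random.Random(seed)
    n = len(OBSERVED)
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=4) as pool:   # stand-in for MPI slaves
        for _ in range(gens):
            scores = list(pool.map(fitness, pop))     # parallel fitness evaluation
            ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
            nxt = ranked[:2]                          # elitism
            while len(nxt) < pop_size:
                a, b = rng.sample(ranked[: pop_size // 2], 2)
                cut = rng.randint(1, n - 1)
                child = a[:cut] + b[cut:]             # one-point crossover
                if rng.random() < 0.5:
                    i = rng.randrange(n)
                    child[i] ^= 1                     # bit-flip mutation
                nxt.append(child)
            pop = nxt
    return max(pop, key=fitness)

best = gpga()
```

With MPI (e.g., scatter/gather of sub-populations for scoring) the structure is the same: only the fitness evaluation is distributed, so the single global population keeps the convergence behavior of a serial GA.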
Blasting is well known as an effective method for fragmenting or moving rock in open-pit mines. To evaluate the quality of blasting, the size distribution of the fragmented rock is used as a critical criterion in blasting operations. A high percentage of oversized rocks generated by blasting operations can lead to economic and environmental damage. Therefore, this study proposed four novel intelligent models to predict the size of rock distribution in mine blasting in order to optimize blasting parameters as well as the efficiency of blasting operations in open-pit mines. Accordingly, a nature-inspired algorithm (the firefly algorithm, FFA) was combined with different machine learning algorithms (gradient boosting machine (GBM), support vector machine (SVM), Gaussian process (GP), and artificial neural network (ANN)), abbreviated as FFA-GBM, FFA-SVM, FFA-GP, and FFA-ANN, respectively. Subsequently, the predictions of these models were compared with each other using three statistical indicators (mean absolute error, root-mean-squared error, and correlation coefficient) and the color intensity method. For developing and simulating the size of rock in blasting operations, 136 blasting events with their images were collected and analyzed with the Split-Desktop software. Of these, 111 events were randomly selected for the development and optimization of the models, and the remaining 25 blasting events were used to confirm the accuracy of the proposed models. Blast design parameters were regarded as input variables to predict the size of rock in blasting operations. The results revealed that the FFA is a robust optimization algorithm for estimating rock fragmentation in bench blasting. Among the models developed in this study, FFA-GBM provided the highest accuracy in predicting the size of fragmented rocks, while the other techniques (FFA-SVM, FFA-GP, and FFA-ANN) yielded lower computational stability and efficiency. Hence, the FFA-GBM model can be used as a powerful and precise soft computing tool applicable to practical engineering cases aiming to improve the quality of blasting and rock fragmentation.
Many biodynamic models have been derived using trial-and-error curve-fitting techniques, such that the error between the computed and measured biodynamic response functions is minimized. This study developed a biomechanical model of the human body in a sitting posture without a backrest for evaluating vibration transmissibility and the dynamic response to vertical vibration. To describe the human body motion, three biomechanical models are discussed (two 4-DOF models and one 7-DOF model). Optimization software based on stochastic search techniques, genetic algorithms (GAs), is employed to determine the human model parameters while imposing limit constraints on them. In addition, an objective function is formulated comprising the sum of errors between the computed values and the actual (experimental) data. The studied functions are the driving-point mechanical impedance, apparent mass, and seat-to-head transmissibility functions. The optimization process increased the average goodness of fit, and the results for the studied functions became much closer to the target (experimental) values. From the optimized model, the resonant frequencies of the driver parts computed on the basis of the biodynamic response functions are found to be within close bounds of those expected for the human body.
Direct soil temperature (ST) measurement is time-consuming and costly; thus, the use of simple and cost-effective machine learning (ML) tools is helpful. In this study, ML approaches, including KStar, instance-based K-nearest learning (IBK), and locally weighted learning (LWL), coupled with the resampling algorithms bagging (BA) and dagging (DA) (BA-IBK, BA-KStar, BA-LWL, DA-IBK, DA-KStar, and DA-LWL), were developed and tested for multi-step-ahead (3, 6, and 9 d ahead) ST forecasting. In addition, a linear regression (LR) model was used as a benchmark to evaluate the results. A dataset was established with daily ST time-series at 5 and 50 cm soil depths in a farmland as the models' output and meteorological data as the models' input, including mean (T_(mean)), minimum (T_(min)), and maximum (T_(max)) air temperatures, evaporation (Eva), sunshine hours (SSH), and solar radiation (SR), collected at Isfahan Synoptic Station (Iran) over 13 years (1992-2005). Six input combination scenarios were selected based on Pearson's correlation coefficients between inputs and outputs and fed into the models. We used 70% of the data to train the models, with the remaining 30% used for model evaluation via multiple visual and quantitative metrics. Our findings showed that T_(mean) was the most effective input variable for ST forecasting in most of the developed models, while in some cases combinations of variables, including T_(mean) and T_(max), and T_(mean), T_(max), T_(min), Eva, and SSH, proved to be the best input combinations. Among the evaluated models, BA-KStar showed greater compatibility, while in most cases BA-IBK and BA-LWL provided more accurate results, depending on soil depth. For the 5 cm soil depth, BA-KStar had superior performance (Nash-Sutcliffe efficiency (NSE) = 0.90, 0.87, and 0.85 for 3, 6, and 9 d ahead forecasting, respectively); for the 50 cm soil depth, DA-KStar outperformed the other models (NSE = 0.88, 0.89, and 0.89 for 3, 6, and 9 d ahead forecasting, respectively). The results confirmed that all hybrid models had higher prediction capabilities than the LR model.
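The BA-IBK style of hybrid above wraps bagging around an instance-based learner: train the base learner on bootstrap resamples and average the predictions. A minimal sketch with a 1-nearest-neighbor regressor as the instance-based base learner follows; the training pairs and bag count are toy assumptions, not the study's Isfahan data.

```python
import random

def nn1_predict(train, x):
    """Instance-based (IBK-style, k=1) prediction: value of the nearest
    training point by input distance."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def bagging_predict(train, x, n_bags=25, seed=42):
    """Bagging (BA): average the base learner over bootstrap resamples."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_bags):
        boot = [rng.choice(train) for _ in train]  # bootstrap resample, same size
        preds.append(nn1_predict(boot, x))
    return sum(preds) / n_bags

# Toy (assumed) input/output pairs, e.g. air temperature -> soil temperature.
train = [(float(t), float(t)) for t in range(10)]
pred = bagging_predict(train, 4.5)
```

Averaging over resamples smooths the piecewise-constant 1-NN response, which is the variance-reduction effect that makes the bagged hybrids outperform their base learners in the study.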
Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale of such problems implies a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, because of sparsity, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify the non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This enhancement dramatically improves the efficacy of frequent pattern mining: selecting candidate sets is no longer based on the number of non-zero variables they contain but on a higher proportion of non-zero variables within specific dimensions. Additionally, we unveil a novel approach to association rule mining that delves into the relationships between non-zero variables. This methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function values. We extensively tested our algorithm on eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
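A minimal version of the frequent-pattern idea above is to mine, from the best solutions found so far, the decision variables that are frequently non-zero, and treat those indices as a candidate set for concentrating the search. The population, fitness values, and support threshold below are contrived for illustration and do not reproduce the paper's mining objectives.

```python
def mine_nonzero_pattern(population, fitnesses, top_k=4, min_support=0.75):
    """Return indices of decision variables that are non-zero in at least
    `min_support` of the top_k best (lowest-fitness) solutions — a candidate
    set for concentrating search effort in a sparse problem."""
    ranked = [p for _, p in sorted(zip(fitnesses, population))]  # minimization
    elite = ranked[:top_k]
    n_vars = len(elite[0])
    support = [sum(1 for sol in elite if sol[i] != 0) / top_k
               for i in range(n_vars)]
    return {i for i, s in enumerate(support) if s >= min_support}

# Contrived sparse population: the good (low-fitness) solutions share
# non-zero entries at indices 0 and 3.
population = [
    [1.2, 0.0, 0.0, 0.8, 0.0, 0.0],
    [0.9, 0.0, 0.0, 1.1, 0.0, 0.0],
    [1.0, 0.2, 0.0, 0.7, 0.0, 0.0],
    [1.3, 0.0, 0.0, 0.9, 0.0, 0.0],
    [0.0, 0.5, 0.4, 0.0, 0.6, 0.0],
    [0.0, 0.0, 0.7, 0.0, 0.2, 0.9],
]
fitnesses = [0.1, 0.2, 0.3, 0.15, 5.0, 7.0]
candidate_set = mine_nonzero_pattern(population, fitnesses)
```

Using a support proportion within specific dimensions, rather than the raw count of non-zero entries per solution, mirrors the selection criterion the abstract describes.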
This study delineates the development of an optimization framework for the preliminary design phase of Floating Offshore Wind Turbines (FOWTs); the central challenge addressed is the optimization of the FOWT platform dimensional parameters with respect to motion responses. Although the three-dimensional potential flow (TDPF) panel method is recognized for its precision in calculating FOWT motion responses, its computational intensity necessitates an alternative approach for efficiency. Herein, a novel application of varying-fidelity frequency-domain computational strategies is introduced, which synthesizes strip theory with the TDPF panel method to strike a balance between computational speed and accuracy. The Co-Kriging algorithm is employed to forge a surrogate model that amalgamates these computational strategies. The optimization objectives are centered on the platform's motion response in the heave and pitch directions under general sea conditions. The steel usage, the range of design variables, and geometric considerations are the optimization constraints. The angle of the pontoons, the number of columns, the radius of the central column, and the parameters of the mooring lines are optimization constants. This informed the structuring of a multi-objective optimization model utilizing the Non-dominated Sorting Genetic Algorithm II (NSGA-II). For the case of the IEA UMaine VolturnUS-S Reference Platform, Pareto fronts are discerned based on the above framework and delineate the relationship between competing motion response objectives. The efficacy of the final designs is substantiated through a time-domain calculation model, which ensures that the motion responses in extreme sea conditions are superior to those of the initial design.
This paper describes a novel algorithm for fragile watermarking of 3D models. Fragile watermarking requires the detection of even minute intentional changes to the 3D model, along with the location of each change. This poses a challenge, since inserting a random amount of watermark into all the vertices of the model would generally introduce perceptible distortion. The proposed algorithm overcomes this challenge by using a genetic algorithm to modify every vertex location in the model so that there is no perceptible distortion. Various experimental results are used to justify the choice of the genetic algorithm design parameters. Experimental results also indicate that the proposed algorithm can accurately detect the location of any mesh modification.
Analyzing rock mass seepage using the discrete fracture network (DFN) flow model poses challenges when dealing with complex fracture networks. This paper presents a novel DFN flow model that incorporates the actual connections of large-scale fractures. Notably, this model efficiently manages over 20,000 fractures without necessitating adjustments to the DFN geometry. All geometric analyses, such as identifying connected fractures, dividing the two-dimensional domain into closed loops, triangulating arbitrary loops, and refining triangular elements, are fully automated. The analysis processes are comprehensively introduced, and the core algorithms, along with their pseudo-codes, are outlined and explained to assist readers in their own programming endeavors. The accuracy of the geometric analyses is validated through topological graphs representing the connection relationships between fractures. In a practical application, the proposed model is employed to assess the water-sealing effectiveness of an underground storage cavern project. The analysis results indicate that the existing design scheme can effectively prevent the stored oil from leaking in the presence of both dense and sparse fractures. Furthermore, following extensive modification and optimization, the scale and precision of the model computation suggest that the proposed model and developed codes can meet the requirements of engineering applications.
A solution to compute the optimal path based on a single-line, single-directional (SLSD) road network model is proposed. Unlike the traditional road network model, in the SLSD conceptual model a road, being single-directional and single-line in style, is no longer a linkage of road nodes but is abstracted as a network node. Similarly, a road node is abstracted as the linkage of two ordered single-directional roads. This model can describe turn restrictions, circular roads, and other real scenarios usually described using a super-graph. A computing framework for optimal path finding (OPF) is then presented. It is proved that the classical Dijkstra and A* algorithms can be directly used for OPF computation on any real-world road network by transforming a super-graph into an SLSD network. Finally, using Singapore road network data, the proposed conceptual model and its corresponding optimal path finding algorithms are validated using a two-step optimal path finding algorithm with a pre-computing strategy based on the SLSD road network.
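The road-as-node abstraction above can be sketched directly: each directed road becomes a graph vertex, an edge from road A to road B exists when A ends where B starts and the turn A→B is not restricted, and plain Dijkstra then handles turn restrictions with no super-graph machinery. The four-road network below is a toy assumption for illustration.

```python
import heapq

# Directed roads: name -> (from_node, to_node, length). In the SLSD model each
# road is a graph vertex; an edge road_a -> road_b exists when road_a ends
# where road_b starts and the turn is not restricted. (Toy network, assumed.)
ROADS = {
    "a": (1, 2, 1.0),
    "b": (2, 3, 1.0),
    "c": (2, 4, 1.0),
    "d": (4, 3, 3.0),
}

def shortest_path(origin, target, banned_turns=frozenset()):
    """Dijkstra over the road-as-node (SLSD) graph; `banned_turns` holds
    (road, road) pairs representing turn restrictions."""
    # Start from every road leaving the origin node, paying that road's length.
    heap = [(ROADS[r][2], r) for r in ROADS if ROADS[r][0] == origin]
    heapq.heapify(heap)
    done = set()
    while heap:
        d, road = heapq.heappop(heap)
        if road in done:
            continue
        done.add(road)
        if ROADS[road][1] == target:
            return d
        for nxt, (frm, _, length) in ROADS.items():
            if frm == ROADS[road][1] and (road, nxt) not in banned_turns:
                heapq.heappush(heap, (d + length, nxt))
    return float("inf")

free_cost = shortest_path(1, 3)                                   # via a -> b
restricted_cost = shortest_path(1, 3, banned_turns={("a", "b")})  # forced a -> c -> d
```

Banning the turn a→b forces the longer route a→c→d, something a node-based graph cannot express without duplicating nodes.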
A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over an acoustic model based on the Gaussian Mixture Model (GMM). However, models based on this hybrid method require a force-aligned Hidden Markov Model (HMM) state sequence obtained from the GMM-based acoustic model, and therefore a long computation time for training both the GMM-based acoustic model and a deep learning-based acoustic model. To solve this problem, an acoustic model using the Connectionist Temporal Classification (CTC) algorithm is proposed. The CTC algorithm does not require the GMM-based acoustic model because it does not use a force-aligned HMM state sequence. However, previous works on LSTM RNN-based acoustic models using CTC used small-scale training corpora. In this paper, an LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves a Word Error Rate (WER) of 6.18% for clean speech and 15.01% for noisy speech, which is similar to the performance of the acoustic model based on the hybrid method.
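CTC sidesteps forced alignment by adding a blank label and defining the output as the frame-label sequence with repeats collapsed and blanks removed. The decoding half of that rule is simple enough to sketch (the three-symbol vocabulary is an assumption for illustration; a real model emits a softmax over thousands of labels per frame):

```python
def ctc_greedy_decode(frame_labels, blank=0, vocab=None):
    """Greedy CTC decoding: collapse consecutive repeats, then drop blanks.
    `frame_labels` is the per-frame argmax of the network's softmax output."""
    collapsed = []
    prev = None
    for lab in frame_labels:
        if lab != prev:                  # collapse runs of the same label
            collapsed.append(lab)
        prev = lab
    labels = [l for l in collapsed if l != blank]  # remove blank symbols
    if vocab is None:
        return labels
    return "".join(vocab[l] for l in labels)

VOCAB = {1: "a", 2: "b", 3: "c"}
```

Note the ordering matters: a blank between two identical labels (e.g. frames `[1, 0, 1]`) survives collapsing and yields the doubled output "aa", which is how CTC represents repeated characters without any HMM state alignment.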
The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model, driven by 15 years (1979-1993) of ECMWF reanalysis data as the initial and lateral boundary conditions, and an off-line large-scale routing model (LRM). The LRM uses physical catchment and river channel information and allows streamflow to be predicted for large continental rivers at a 1°×1° spatial resolution. The results show that the PRECIS model can reproduce the general southeast-to-northwest gradient distribution of precipitation over the Yellow River basin. The PRECIS-LRM model combination has the capability to simulate the seasonal and annual streamflow over the Yellow River basin. The simulated streamflow is generally coincident with the naturalized streamflow in both timing and magnitude.
A multiple-model tracking algorithm based on a neural network and multiple-process-noise soft switching for maneuvering targets is presented. In this algorithm, the "current" statistical model and the neural network run in parallel. The neural network is used to modify the adaptive noise filtering algorithm based on the mean value and variance of the "current" statistical model for maneuvering targets, and the multiple-model tracking algorithm with multiple processing switches is then used to improve the precision of tracking maneuvering targets. The modified algorithm is proved effective by simulation.
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions that address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same reduction in trainable parameters, reducing the trainable parameters in a larger layer is more effective at preserving fine-tuning accuracy than in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
Methods of improving seismic event locations were investigated as part of a research study aimed at reducing ground control safety hazards. Seismic event waveforms collected with a 23-station three-dimensional sensor array during longwall coal mining provide the data set used in the analyses. A spatially variable seismic velocity model is constructed from seismic event sources using a passive tomographic method. The resulting three-dimensional velocity model is used to relocate the seismic event positions. An evolutionary optimization algorithm is implemented and used both in the velocity model development and in seeking improved event location solutions. Results obtained using the different velocity models are compared. The combination of tomographic velocity model development and the evolutionary search algorithm improves the event locations.
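The relocation step above amounts to an evolutionary search that minimizes travel-time residuals between observed and predicted arrivals. The sketch below solves a deliberately simplified 2-D version with a uniform velocity (the paper instead builds a spatially variable tomographic model); the station geometry, velocity, and true source are all assumed values.

```python
import math
import random

# Toy 2-D relocation problem (assumed geometry, constant velocity):
# four stations record arrival times from a true source; an evolutionary
# search recovers the source position by minimizing travel-time residuals.
STATIONS = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
VELOCITY = 3.0  # km/s, uniform here for simplicity
TRUE_SOURCE = (2.0, 3.0)

def travel_time(src, sta):
    return math.dist(src, sta) / VELOCITY

OBSERVED = [0.8 + travel_time(TRUE_SOURCE, s) for s in STATIONS]  # origin time 0.8 s

def misfit(src):
    """Sum of squared arrival-time residuals, with the (unknown) origin
    time solved analytically as the mean residual."""
    t0 = sum(o - travel_time(src, s)
             for o, s in zip(OBSERVED, STATIONS)) / len(STATIONS)
    return sum((o - t0 - travel_time(src, s)) ** 2
               for o, s in zip(OBSERVED, STATIONS))

def evolve(gens=200, children=20, sigma=1.5, decay=0.98, seed=3):
    """(1+lambda)-style evolutionary search with a shrinking mutation step."""
    rng = random.Random(seed)
    best = (0.5, 0.5)  # poor initial location guess
    for _ in range(gens):
        for _ in range(children):
            cand = (best[0] + rng.gauss(0, sigma),
                    best[1] + rng.gauss(0, sigma))
            if misfit(cand) < misfit(best):
                best = cand
        sigma *= decay
    return best

located = evolve()
```

Solving the origin time analytically inside the misfit keeps the search space two-dimensional; in the study, the same evolutionary machinery also tunes the velocity model itself.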
Funding for the fault-slip rockburst study: supported by the National Natural Science Foundation of China (Grant Nos. 42177136 and 52309126).
Abstract: This research on optimization methods for constellation launch deployment strategies focused on mission interval time constraints at the launch site. First, a dynamic model of the constellation deployment process was established, and the relationship between the deployment window and the phase difference of the orbit insertion point, as well as the cost of phase adjustment after orbit insertion, was derived. Then, the combination of the constellation deployment position sequence was treated as a parameter and, together with the sequence of satellite deployment intervals, taken as the optimization variables, simplifying a high-dimensional search problem over a wide range of dates to a finite-dimensional integer programming problem. An improved genetic algorithm with local search on deployment dates was introduced to optimize the launch deployment strategy. With the new description of the optimization variables, the total number of elements in the solution space was reduced by N orders of magnitude. Numerical simulation confirms that the proposed optimization method accelerates convergence from hours to minutes.
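The abstract above describes encoding the deployment order as integer optimization variables and searching them with a genetic algorithm. A minimal sketch of that idea is below; the cost function is a hypothetical stand-in (the paper derives its cost from insertion-point phase differences and post-insertion drift, which is not reproduced here), and all names and parameters are illustrative assumptions.

```python
import random

def toy_cost(order):
    # Hypothetical stand-in for total phase-adjustment cost: deploying
    # orbital plane p in launch slot s is charged |s - p|.
    return sum(abs(s - p) for s, p in enumerate(order))

def order_crossover(a, b, rng):
    # OX crossover: keep a slice of parent a, fill the rest from parent b
    # in order, so the child is still a valid deployment sequence.
    i, j = sorted(rng.sample(range(len(a)), 2))
    child = [None] * len(a)
    child[i:j] = a[i:j]
    fill = iter(g for g in b if g not in child)
    return [g if g is not None else next(fill) for g in child]

def genetic_search(n=8, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=toy_cost)
        survivors = pop[: pop_size // 2]          # elitist truncation
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            c = order_crossover(a, b, rng)
            if rng.random() < 0.2:                # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            children.append(c)
        pop = survivors + children
    return min(pop, key=toy_cost)

best = genetic_search()   # a low-cost deployment order over 8 planes
```

Treating the sequence as a permutation (rather than free-ranging dates) is what shrinks the solution space, mirroring the abstract's reduction to a finite-dimensional integer program.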
Funding: Supported by the National Natural Science Foundation of China [grant number 51775020], the Science Challenge Project [grant number TZ2018007], the National Natural Science Foundation of China [grant number 62073009], the Postdoctoral Fellowship Program of CPSF [grant number GZC20233365], and the Fundamental Research Funds for Central Universities [grant number JKF-20240559].
Abstract: Parameter extraction of photovoltaic (PV) models is crucial for the planning, optimization, and control of PV systems. Although some methods using meta-heuristic algorithms have been proposed to determine these parameters, the robustness of solutions obtained by these methods faces great challenges when the complexity of the PV model increases. The unstable results will affect the reliable operation and maintenance strategies of PV systems. In response to this challenge, an improved rime optimization algorithm with enhanced exploration and exploitation, termed TERIME, is proposed for robust and accurate parameter identification for various PV models. Specifically, the differential evolution mutation operator is integrated in the exploration phase to enhance population diversity. Meanwhile, a new exploitation strategy incorporating randomization and neighborhood strategies simultaneously is developed to maintain the balance of exploitation width and depth. The TERIME algorithm is applied to estimate the optimal parameters of the single diode model, double diode model, and triple diode model combined with the Lambert-W function for three PV cell and module types, including RTC France, Photo Watt-PWP 201, and S75. According to statistical analysis over 100 runs, the proposed algorithm achieves more accurate and robust parameter estimations than other techniques for various PV models under varying environmental conditions. All of our source codes are publicly available at https://github.com/dirge1/TERIME.
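The exploration enhancement named above is the standard differential evolution mutation operator. A self-contained sketch of DE/rand/1 is given below; how TERIME wires it into the rime algorithm may differ, and the population here is an illustrative set of normalized PV-parameter vectors, not data from the paper.

```python
import random

def de_mutation(population, f=0.5, lb=0.0, ub=1.0, seed=0):
    # DE/rand/1 mutation: for each individual i, pick three distinct
    # others and form v = x_r1 + f * (x_r2 - x_r3), clipped to bounds.
    # Mixing three random members injects population diversity.
    rng = random.Random(seed)
    n = len(population)
    mutants = []
    for i in range(n):
        r1, r2, r3 = rng.sample([j for j in range(n) if j != i], 3)
        v = [min(ub, max(lb, a + f * (b - c)))
             for a, b, c in zip(population[r1], population[r2], population[r3])]
        mutants.append(v)
    return mutants

rng = random.Random(42)
pop = [[rng.random() for _ in range(5)] for _ in range(6)]  # 6 candidate parameter sets
mut = de_mutation(pop)
```

Each mutant stays inside the search bounds, which matters for PV parameters such as diode ideality factors that have physical ranges.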
Funding: Supported by the Scientific and Technological Research Council of Turkey (TUBITAK) [grant number 122K637].
Abstract: The estimation of the probability of informed trading (PIN) model and its extensions poses significant challenges owing to various computational problems. To address these issues, we propose a novel estimation method called the expectation-conditional-maximization (ECM) algorithm, which can serve as an alternative to the existing methods for estimating PIN models. Our method provides optimal estimates for the original PIN model as well as two of its extensions: the multilayer PIN model and the adjusted PIN model, along with its restricted versions. Our results indicate that estimations using the ECM algorithm are generally faster, more accurate, and more memory-efficient than the standard methods used in the literature, making it a robust alternative. More importantly, the ECM algorithm is not limited to the models discussed and can be easily adapted to estimate future extensions of the PIN model.
Funding: Supported in part by the National Natural Science Foundation of China (51775385), the Natural Science Foundation of Shanghai (23ZR1466000), the Shanghai Industrial Collaborative Science and Technology Innovation Project (2021-cyxt2-kj10), the Innovation Program of Shanghai Municipal Education Commission (202101070007E00098), and Fundo para o Desenvolvimento das Ciencias e da Tecnologia (FDCT) (0147/2024/AFJ).
Abstract: When dealing with expensive multiobjective optimization problems, the majority of existing surrogate-assisted evolutionary algorithms (SAEAs) generate solutions in decision space and screen candidate solutions mostly by using designed surrogate models. The generated solutions exhibit excessive randomness, which tends to reduce the likelihood of generating good-quality solutions and causes a long evolution to the optima. To greatly improve SAEAs, this work proposes an evolutionary algorithm based on surrogate and inverse surrogate models by 1) employing a surrogate model in lieu of expensive (true) function evaluations; and 2) proposing and using an inverse surrogate model to generate new solutions. By using the same training data but with its inputs and outputs reversed, the latter is simple to train. It is then used to generate new vectors in objective space, which are mapped into decision space to obtain their corresponding solutions. Using a particular example, this work shows its advantages over existing SAEAs. The results of comparing it with state-of-the-art algorithms on expensive optimization problems show that it is highly competitive in both solution performance and efficiency.
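The forward/inverse pairing above can be illustrated with any regression model trained twice on the same data with inputs and outputs swapped. The sketch below uses a deliberately simple 1-nearest-neighbor predictor as a stand-in for the Kriging/RBF surrogates typically used in SAEAs; the data and names are illustrative assumptions.

```python
def nearest_neighbor_model(inputs, outputs):
    # Minimal 1-NN "surrogate": return the stored output of the closest
    # training input (stand-in for a trained regression surrogate).
    def predict(query):
        dist2 = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        k = min(range(len(inputs)), key=lambda i: dist2(inputs[i], query))
        return outputs[k]
    return predict

# The same training data builds both models; only the direction flips.
X = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # decision vectors
Y = [[0.0], [1.0], [1.0], [2.0]]                      # objective values (y = x1 + x2)
forward = nearest_neighbor_model(X, Y)  # decision space -> objective space
inverse = nearest_neighbor_model(Y, X)  # objective space -> decision space

target = [2.0]               # a desirable objective vector
candidate = inverse(target)  # mapped back into decision space: [1.0, 1.0]
```

Generating the target in objective space and mapping it back is what replaces the purely random decision-space sampling the abstract criticizes.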
Funding: The Deanship of Scientific Research at King Khalid University funded this work through a large group project under grant number GRP.2/663/46.
Abstract: Domain Generation Algorithms (DGAs) continue to pose a significant threat in modern malware infrastructures by enabling resilient and evasive communication with Command and Control (C&C) servers. Traditional detection methods, rooted in statistical heuristics, feature engineering, and shallow machine learning, struggle to adapt to the increasing sophistication, linguistic mimicry, and adversarial variability of DGA variants. The emergence of Large Language Models (LLMs) marks a transformative shift in this landscape. Leveraging deep contextual understanding, semantic generalization, and few-shot learning capabilities, LLMs such as BERT, GPT, and T5 have shown promising results in detecting both character-based and dictionary-based DGAs, including previously unseen (zero-day) variants. This paper provides a comprehensive and critical review of LLM-driven DGA detection, introducing a structured taxonomy of LLM architectures, evaluating the linguistic and behavioral properties of benchmark datasets, and comparing recent detection frameworks across accuracy, latency, robustness, and multilingual performance. We also highlight key limitations, including challenges in adversarial resilience, model interpretability, deployment scalability, and privacy risks. To address these gaps, we present a forward-looking research roadmap encompassing adversarial training, model compression, cross-lingual benchmarking, and real-time integration with SIEM/SOAR platforms. This survey aims to serve as a foundational resource for advancing the development of scalable, explainable, and operationally viable LLM-based DGA detection systems.
Funding: The National Natural Science Foundation of China (No. 50677062), the New Century Excellent Talents in University of China (No. NCET-07-0745), and the Natural Science Foundation of Zhejiang Province, China (No. R107062).
Abstract: In this paper, a mathematical model consisting of forward and backward models is built on parallel genetic algorithms (PGAs) for fault diagnosis in a transmission power system. A new method to reduce the scale of fault sections is developed in the forward model, and the message passing interface (MPI) approach is chosen to parallelize the genetic algorithms by the global single-population master-slave method (GPGAs). The proposed approach is applied to a sample system consisting of 28 sections, 84 protective relays, and 40 circuit breakers. Simulation results show that the new model based on GPGAs can achieve very fast computation in online applications to large-scale power systems.
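The global single-population master-slave scheme keeps one population on the master and distributes only fitness evaluations to workers. The sketch below substitutes a thread pool for MPI so it stays self-contained; the fitness function is a hypothetical stand-in for scoring a fault-section hypothesis against relay and breaker status, which is the expensive step in the paper.

```python
from concurrent.futures import ThreadPoolExecutor

def fitness(chromosome):
    # Hypothetical stand-in: in the paper this would score a candidate
    # set of faulted sections against observed relay/breaker actions.
    return sum(chromosome)

def master_slave_evaluate(population, workers=4):
    # Master keeps the entire population and farms out only the fitness
    # evaluations, collecting results in submission order (as an MPI
    # scatter/gather of fitness values would).
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fitness, population))

population = [[1, 0, 1], [0, 0, 0], [1, 1, 1]]  # candidate fault-section vectors
scores = master_slave_evaluate(population)       # [2, 0, 3]
```

Because selection, crossover, and mutation still happen on the single master population, the parallel variant explores exactly the same search trajectory as the serial GA, only faster.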
Funding: Supported by the Center for Mining, Electro-Mechanical Research of Hanoi University of Mining and Geology (HUMG), Hanoi, Vietnam; financially supported by the Hunan Provincial Department of Education General Project (19C1744), the Hunan Province Science Foundation for Youth Scholars of China (2018JJ3510), and the Innovation-Driven Project of Central South University (2020CX040).
Abstract: Blasting is well known as an effective method for fragmenting or moving rock in open-pit mines. To evaluate the quality of blasting, the size of the rock distribution is used as a critical criterion in blasting operations. A high percentage of oversized rocks generated by blasting operations can lead to economic and environmental damage. Therefore, this study proposed four novel intelligent models to predict the size of rock distribution in mine blasting in order to optimize blasting parameters, as well as the efficiency of blasting operations in open mines. Accordingly, a nature-inspired algorithm (i.e., the firefly algorithm, FFA) and different machine learning algorithms (i.e., gradient boosting machine (GBM), support vector machine (SVM), Gaussian process (GP), and artificial neural network (ANN)) were combined for this aim, abbreviated as FFA-GBM, FFA-SVM, FFA-GP, and FFA-ANN, respectively. Subsequently, the predicted results from the abovementioned models were compared with each other using three statistical indicators (i.e., mean absolute error, root-mean-squared error, and correlation coefficient) and the color intensity method. For developing and simulating the size of rock in blasting operations, 136 blasting events with their images were collected and analyzed by the Split-Desktop software. Of these, 111 events were randomly selected for the development and optimization of the models. Subsequently, the remaining 25 blasting events were applied to confirm the accuracy of the proposed models. Herein, blast design parameters were regarded as input variables to predict the size of rock in blasting operations. Finally, the obtained results revealed that the FFA is a robust optimization algorithm for estimating rock fragmentation in bench blasting. Among the models developed in this study, FFA-GBM provided the highest accuracy in predicting the size of fragmented rocks. The other techniques (i.e., FFA-SVM, FFA-GP, and FFA-ANN) yielded lower computational stability and efficiency. Hence, the FFA-GBM model can be used as a powerful and precise soft computing tool that can be applied to practical engineering cases aiming to improve the quality of blasting and rock fragmentation.
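The FFA component above tunes the hyperparameters of each learner by treating them as positions of "fireflies". A minimal sketch of the basic firefly algorithm on a test function is below; all parameter values are illustrative assumptions, and the paper's coupling to GBM/SVM/GP/ANN hyperparameters is not reproduced.

```python
import math
import random

def firefly_minimize(f, dim=2, n=15, iters=80, lb=-5.0, ub=5.0,
                     beta0=1.0, gamma=0.01, alpha=0.3, seed=7):
    # Basic firefly algorithm: a dimmer firefly i moves toward every
    # brighter firefly j with attractiveness beta0*exp(-gamma*r^2),
    # plus a shrinking random step; lower objective value = brighter.
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n)]
    bright = [f(x) for x in X]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if bright[j] < bright[i]:
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [min(ub, max(lb, xi + beta * (xj - xi)
                                        + alpha * (rng.random() - 0.5)))
                            for xi, xj in zip(X[i], X[j])]
                    bright[i] = f(X[i])
        alpha *= 0.95  # anneal the random step for convergence
    return min(X, key=f)

best = firefly_minimize(lambda x: sum(v * v for v in x))  # sphere function
```

In the hybrid models, `f` would be a cross-validated prediction error of the learner under the candidate hyperparameters rather than the sphere function used here.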
Abstract: Many biodynamic models have been derived using a trial-and-error curve-fitting technique, such that the error between the computed and measured biodynamic response functions is minimized. This study developed a biomechanical model of the human body in a sitting posture without a backrest for evaluating vibration transmissibility and the dynamic response to vertical vibration. In describing the human body motion, three biomechanical models are discussed (two 4-DOF models and one 7-DOF model). Optimization software based on stochastic search techniques, genetic algorithms (GAs), is employed to determine the human model parameters, imposing limit constraints on the model parameters. In addition, an objective function is formulated comprising the sum of errors between the computed and actual values (experimental data). The studied functions are the driving-point mechanical impedance, apparent mass, and seat-to-head transmissibility functions. The optimization process increased the average goodness of fit, and the results for the studied functions became much closer to the target values (experimental data). From the optimized model, the resonant frequencies of the driver parts computed on the basis of the biodynamic response functions are found to be within close bounds of those expected for the human body.
Abstract: Direct soil temperature (ST) measurement is time-consuming and costly; thus, the use of simple and cost-effective machine learning (ML) tools is helpful. In this study, ML approaches, including KStar, instance-based K-nearest learning (IBK), and locally weighted learning (LWL), coupled with the resampling algorithms of bagging (BA) and dagging (DA) (BA-IBK, BA-KStar, BA-LWL, DA-IBK, DA-KStar, and DA-LWL), were developed and tested for multi-step-ahead (3, 6, and 9 d ahead) ST forecasting. In addition, a linear regression (LR) model was used as a benchmark to evaluate the results. A dataset was established, with daily ST time series at 5 and 50 cm soil depths in farmland as the models' output and meteorological data as the models' input, including mean (T_mean), minimum (T_min), and maximum (T_max) air temperatures, evaporation (Eva), sunshine hours (SSH), and solar radiation (SR), collected at Isfahan Synoptic Station (Iran) for 13 years (1992-2005). Six different input combination scenarios were selected based on Pearson's correlation coefficients between inputs and outputs and fed into the models. We used 70% of the data to train the models, with the remaining 30% used for model evaluation via multiple visual and quantitative metrics. Our findings showed that T_mean was the most effective input variable for ST forecasting in most of the developed models, while in some cases combinations of variables, including T_mean and T_max, and T_mean, T_max, T_min, Eva, and SSH, proved to be the best input combinations. Among the evaluated models, BA-KStar showed greater compatibility, while in most cases BA-IBK and BA-LWL provided more accurate results, depending on soil depth. For the 5 cm soil depth, BA-KStar had superior performance (i.e., Nash-Sutcliffe efficiency (NSE) = 0.90, 0.87, and 0.85 for 3, 6, and 9 d ahead forecasting, respectively); for the 50 cm soil depth, DA-KStar outperformed the other models (i.e., NSE = 0.88, 0.89, and 0.89 for 3, 6, and 9 d ahead forecasting, respectively). The results confirmed that all hybrid models had higher prediction capabilities than the LR model.
Funding: Supported by the Open Project of Xiangjiang Laboratory (22XJ02003), the University Fundamental Research Fund (23-ZZCX-JDZ-28, ZK21-07), the National Science Fund for Outstanding Young Scholars (62122093), the National Natural Science Foundation of China (72071205), the Hunan Graduate Research Innovation Project (CX20230074), the Hunan Natural Science Foundation Regional Joint Project (2023JJ50490), the Science and Technology Project for Young and Middle-aged Talents of Hunan (2023TJZ03), and the Science and Technology Innovation Program of Hunan Province (2023RC1002).
Abstract: Sparse large-scale multi-objective optimization problems (SLMOPs) are common in science and engineering. However, the large scale of these problems implies a high-dimensional decision space, requiring algorithms to traverse a vast expanse with limited computational resources. Furthermore, owing to sparsity, most variables in Pareto optimal solutions are zero, making it difficult for algorithms to identify non-zero variables efficiently. This paper is dedicated to addressing the challenges posed by SLMOPs. To start, we introduce innovative objective functions customized to mine maximum and minimum candidate sets. This substantial enhancement dramatically improves the efficacy of frequent pattern mining. In this way, selecting candidate sets is no longer based on the quantity of non-zero variables they contain but on a higher proportion of non-zero variables within specific dimensions. Additionally, we unveil a novel approach to association rule mining, which delves into the intricate relationships between non-zero variables. This methodology aids in identifying sparse distributions that can potentially expedite reductions in the objective function value. We extensively tested our algorithm across eight benchmark problems and four real-world SLMOPs. The results demonstrate that our approach achieves competitive solutions across various challenges.
Funding: Financially supported by the National Natural Science Foundation of China (Grant No. 52371261) and the Science and Technology Projects of Liaoning Province (Grant No. 2023011352-JH1/110).
Abstract: This study delineates the development of an optimization framework for the preliminary design phase of Floating Offshore Wind Turbines (FOWTs); the central challenge addressed is the optimization of the FOWT platform's dimensional parameters in relation to motion responses. Although the three-dimensional potential flow (TDPF) panel method is recognized for its precision in calculating FOWT motion responses, its computational intensity necessitates an alternative approach for efficiency. Herein, a novel application of varying-fidelity frequency-domain computational strategies is introduced, which synthesizes strip theory with the TDPF panel method to strike a balance between computational speed and accuracy. The Co-Kriging algorithm is employed to forge a surrogate model that amalgamates these computational strategies. Optimization objectives are centered on the platform's motion response in the heave and pitch directions under general sea conditions. The steel usage, the range of design variables, and geometric considerations are optimization constraints. The angle of the pontoons, the number of columns, the radius of the central column, and the parameters of the mooring lines are optimization constants. This informed the structuring of a multi-objective optimization model utilizing the Non-dominated Sorting Genetic Algorithm II (NSGA-II). For the case of the IEA UMaine VolturnUS-S Reference Platform, Pareto fronts are discerned based on the above framework and delineate the relationship between competing motion response objectives. The efficacy of the final designs is substantiated through a time-domain calculation model, which ensures that the motion responses in extreme sea conditions are superior to those of the initial design.
Abstract: This paper describes a novel algorithm for fragile watermarking of 3D models. Fragile watermarking requires detection of even minute intentional changes to the 3D model, along with the location of the change. This poses a challenge, since inserting a random amount of watermark in all the vertices of the model would generally introduce perceptible distortion. The proposed algorithm overcomes this challenge by using a genetic algorithm to modify every vertex location in the model so that there is no perceptible distortion. Various experimental results are used to justify the choice of the genetic algorithm design parameters. Experimental results also indicate that the proposed algorithm can accurately detect the location of any mesh modification.
Funding: Sponsored by the General Program of the National Natural Science Foundation of China (Grant Nos. 52079129 and 52209148) and the Hubei Provincial General Fund, China (Grant No. 2023AFB567).
Abstract: Analyzing rock mass seepage using the discrete fracture network (DFN) flow model poses challenges when dealing with complex fracture networks. This paper presents a novel DFN flow model that incorporates the actual connections of large-scale fractures. Notably, this model efficiently manages over 20,000 fractures without necessitating adjustments to the DFN geometry. All geometric analyses, such as identifying connected fractures, dividing the two-dimensional domain into closed loops, triangulating arbitrary loops, and refining triangular elements, are fully automated. The analysis processes are comprehensively introduced, and core algorithms, along with their pseudo-codes, are outlined and explained to assist readers in their programming endeavors. The accuracy of the geometric analyses is validated through topological graphs representing the connection relationships between fractures. In practical application, the proposed model is employed to assess the water-sealing effectiveness of an underground storage cavern project. The analysis results indicate that the existing design scheme can effectively prevent the stored oil from leaking in the presence of both dense and sparse fractures. Furthermore, following extensive modification and optimization, the scale and precision of the model computation suggest that the proposed model and developed codes can meet the requirements of engineering applications.
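The first automated step the abstract lists, identifying connected fractures, can be sketched as a union-find over pairwise segment intersections in the 2D domain. This is an illustrative simplification (proper crossings only, toy coordinates), not the paper's algorithm.

```python
def _cross(a, b, c):
    # Orientation of point c relative to the directed segment a -> b.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_intersect(p1, p2, p3, p4):
    # Proper crossing test for segments p1-p2 and p3-p4 (touching
    # endpoints and collinear overlaps are ignored in this sketch).
    d1, d2 = _cross(p3, p4, p1), _cross(p3, p4, p2)
    d3, d4 = _cross(p1, p2, p3), _cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def count_clusters(fractures):
    # Union-find over pairwise intersections groups fractures into
    # hydraulically connected clusters, the first geometric step of a
    # DFN flow analysis.
    parent = list(range(len(fractures)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(len(fractures)):
        for j in range(i + 1, len(fractures)):
            if segments_intersect(*fractures[i], *fractures[j]):
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(fractures))})

# Two crossing fractures plus one isolated fracture -> 2 clusters.
fracs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((5, 5), (6, 5))]
```

Only fractures in a cluster that connects the flow boundaries contribute to seepage, which is why connectivity is computed before any meshing or flow solving.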
Funding: The National Key Technology R&D Program of China during the 11th Five-Year Plan Period (No. 2008BAJ11B01).
Abstract: A solution to compute the optimal path based on a single-line-single-directional (SLSD) road network model is proposed. Unlike the traditional road network model, in the SLSD conceptual model, being single-directional and single-line in style, a road is no longer a linkage of road nodes but is abstracted as a network node. Similarly, a road node is abstracted as the linkage of two ordered single-directional roads. This model can describe turn restrictions, circular roads, and other real scenarios usually described using a super-graph. Then a computing framework for optimal path finding (OPF) is presented. It is proved that the classical Dijkstra and A* algorithms can be directly used for OPF computing of any real-world road network by transferring a super-graph to an SLSD network. Finally, using Singapore road network data, the proposed conceptual model and its corresponding optimal path finding algorithms are validated using a two-step optimal path finding algorithm with a pre-computing strategy based on the SLSD road network.
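In the road-as-node transformation described above, plain Dijkstra works unchanged once each single-directional road becomes a graph node and each permitted turn becomes an edge. The sketch below uses a tiny hypothetical adjacency list (road names and weights are invented for illustration); banning a turn is simply omitting its edge.

```python
import heapq

def dijkstra(adj, src, dst):
    # Plain Dijkstra over the road-as-node graph; adj[u] = [(v, w), ...]
    # where w is the traversal cost of entering road v from road u.
    dist, pq = {src: 0.0}, [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

# Hypothetical SLSD graph: each key is one single-directional road; an
# edge (v, w) means the turn from road u onto road v is permitted.
adj = {
    "A->B": [("B->C", 2.0)],
    "B->C": [("C->D", 1.5)],
    "A->E": [("E->D", 5.0)],  # an alternative route with no link to C->D
}
shortest = dijkstra(adj, "A->B", "C->D")  # 2.0 + 1.5 = 3.5
```

Note that a turn restriction needs no special-case logic here, unlike in the node-based model where it would require a super-graph.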
Funding: Supported by the Ministry of Trade, Industry & Energy (MOTIE, Korea) under the Industrial Technology Innovation Program (No. 10063424, "Development of distant speech recognition and multi-task dialog processing technologies for in-door conversational robots").
Abstract: A Long Short-Term Memory (LSTM) Recurrent Neural Network (RNN) has driven tremendous improvements over an acoustic model based on the Gaussian Mixture Model (GMM). However, models based on this hybrid method require a forced-aligned Hidden Markov Model (HMM) state sequence obtained from the GMM-based acoustic model. Therefore, a long computation time is required for training both the GMM-based acoustic model and the deep learning-based acoustic model. In order to solve this problem, an acoustic model using the CTC algorithm is proposed. The CTC algorithm does not require the GMM-based acoustic model because it does not use the forced-aligned HMM state sequence. However, previous work on LSTM RNN-based acoustic models using CTC used a small-scale training corpus. In this paper, the LSTM RNN-based acoustic model using CTC is trained on a large-scale training corpus and its performance is evaluated. The implemented acoustic model achieves a Word Error Rate (WER) of 6.18% for clean speech and 15.01% for noisy speech, which is similar to the performance of the acoustic model based on the hybrid method.
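The reason CTC needs no forced alignment is its many-to-one collapse mapping: any per-frame label sequence that merges to the same transcript is a valid alignment. A minimal sketch of that rule (as used in greedy best-path decoding) is below; the full CTC loss additionally sums over all such alignments with a forward-backward recursion, which is not shown.

```python
BLANK = "_"  # CTC's extra "no label" symbol

def ctc_collapse(frames):
    # CTC mapping: merge consecutive repeated labels, then drop blanks.
    # Greedy (best-path) decoding applies this to the per-frame argmax
    # outputs of the LSTM RNN.
    out, prev = [], None
    for label in frames:
        if label != prev and label != BLANK:
            out.append(label)
        prev = label
    return "".join(out)

decoded = ctc_collapse(list("__hh_ee_ll_ll_oo_"))  # "hello"
```

The blank between the two "l" runs is what lets CTC emit a genuinely doubled letter; without it, "llll" would merge into a single "l".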
Abstract: The streamflow over the Yellow River basin is simulated using the PRECIS (Providing REgional Climates for Impacts Studies) regional climate model, driven by 15 years (1979-1993) of ECMWF reanalysis data as the initial and lateral boundary conditions, and an off-line large-scale routing model (LRM). The LRM uses physical catchment and river channel information and allows streamflow to be predicted for large continental rivers at a 1°×1° spatial resolution. The results show that the PRECIS model can reproduce the general southeast-to-northwest gradient distribution of precipitation over the Yellow River basin. The PRECIS-LRM model combination has the capability to simulate the seasonal and annual streamflow over the Yellow River basin. The simulated streamflow generally coincides with the naturalized streamflow in both timing and magnitude.
Abstract: A multiple-model tracking algorithm based on a neural network and multiple-process-noise soft switching for maneuvering targets is presented. In this algorithm, the "current" statistical model and the neural network run in parallel. The neural network is used to modify the adaptive noise filtering algorithm based on the mean value and variance of the "current" statistical model for maneuvering targets, and then the multiple-model tracking algorithm with multiple processing switches is used to improve the precision of tracking maneuvering targets. The modified algorithm is proved effective by simulation.
Funding: Supported by the National Key R&D Program of China (No. 2021YFB0301200) and the National Natural Science Foundation of China (No. 62025208).
Abstract: Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning paradigm. While this approach allows models to specialize in specific tasks with reduced training costs, the substantial memory requirements during fine-tuning present a barrier to broader deployment. Parameter-Efficient Fine-Tuning (PEFT) techniques, such as Low-Rank Adaptation (LoRA), and parameter quantization methods have emerged as solutions to address these challenges by optimizing memory usage and computational efficiency. Among these, QLoRA, which combines PEFT and quantization, has demonstrated notable success in reducing memory footprints during fine-tuning, prompting the development of various QLoRA variants. Despite these advancements, the quantitative impact of key variables on the fine-tuning performance of quantized LLMs remains underexplored. This study presents a comprehensive analysis of these key variables, focusing on their influence across different layer types and depths within LLM architectures. Our investigation uncovers several critical findings: (1) larger layers, such as MLP layers, can maintain performance despite reductions in adapter rank, while smaller layers, like self-attention layers, are more sensitive to such changes; (2) the effectiveness of balancing factors depends more on specific values than on layer type or depth; (3) in quantization-aware fine-tuning, larger layers can effectively utilize smaller adapters, whereas smaller layers struggle to do so. These insights suggest that layer type is a more significant determinant of fine-tuning success than layer depth when optimizing quantized LLMs. Moreover, for the same discount in trainable parameters, reducing the trainable parameters of a larger layer is more effective in preserving fine-tuning accuracy than doing so in a smaller one. This study provides valuable guidance for more efficient fine-tuning strategies and opens avenues for further research into optimizing LLM fine-tuning in resource-constrained environments.
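The rank-versus-layer-size trade-off discussed above follows from simple parameter counting: a rank-r LoRA adapter trains r*(d_in + d_out) parameters regardless of how large the frozen d_in*d_out weight is. The sketch below uses hypothetical LLaMA-7B-style dimensions (hidden 4096, MLP intermediate 11008) purely for illustration.

```python
def full_params(d_in, d_out):
    # Trainable parameters if the whole weight matrix were fine-tuned.
    return d_in * d_out

def lora_params(d_in, d_out, rank):
    # A LoRA adapter freezes the weight and instead trains two low-rank
    # factors: A of shape (d_in, rank) and B of shape (rank, d_out).
    return rank * (d_in + d_out)

# Hypothetical LLaMA-7B-style sizes: hidden 4096, MLP intermediate 11008.
mlp = lora_params(4096, 11008, rank=8)     # adapter on a large MLP projection
attn = lora_params(4096, 4096, rank=8)     # adapter on a self-attention projection
frac_mlp = mlp / full_params(4096, 11008)  # trainable share of the MLP layer
frac_attn = attn / full_params(4096, 4096)
```

At equal rank, the same adapter represents a relatively smaller update to the large MLP projection than to the attention projection (frac_mlp < frac_attn), which is consistent with the finding that larger layers tolerate rank reductions while smaller layers are more sensitive.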