Selective Laser Melting (SLM), an advanced metal additive manufacturing technology, offers the advantages of high precision and personalized customization. However, selecting reasonable SLM parameters is challenging due to the complex relationships involved. This study proposes a method for identifying the optimal process window by combining a simulation model with an optimization algorithm. The JAYA algorithm is guided by the principle of moving towards the best solutions and away from the worst ones, but it is prone to premature convergence, leading to insufficient global search. To overcome these limitations, this research proposes a Differential Evolution-framed JAYA algorithm (DEJAYA). DEJAYA incorporates four key enhancements to improve the flexibility of the original algorithm: a DE framework design, a horizontal crossover operator, a longitudinal crossover operator, and a global greedy strategy. The effectiveness of DEJAYA is rigorously evaluated on a suite of 23 distinct benchmark functions. Furthermore, numerical simulation establishes AlSi10Mg single-track formation models, and DEJAYA successfully identifies the optimal process window for this problem. Experimental results validate that DEJAYA effectively guides SLM parameter selection for AlSi10Mg.
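The base JAYA update rule this abstract builds on can be sketched as follows. This is a minimal illustration on a standard sphere benchmark; the population size, bounds, and iteration count are illustrative assumptions, and the DEJAYA enhancements (DE framing, crossover operators, global greedy strategy) are not reproduced here.

```python
import random

def sphere(x):
    # Benchmark objective: sum of squares, global minimum 0 at the origin.
    return sum(v * v for v in x)

def jaya(obj, dim=5, pop_size=20, iters=200, lb=-5.0, ub=5.0, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(iters):
        best = pop[fit.index(min(fit))]
        worst = pop[fit.index(max(fit))]
        for i, x in enumerate(pop):
            # Canonical JAYA move: towards the best solution, away from the worst.
            cand = []
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = x[j] + r1 * (best[j] - abs(x[j])) - r2 * (worst[j] - abs(x[j]))
                cand.append(min(ub, max(lb, v)))
            f = obj(cand)
            if f < fit[i]:  # greedy acceptance: keep the candidate only if it improves
                pop[i], fit[i] = cand, f
    return min(fit)

print(jaya(sphere))
```

The greedy acceptance step is the behavior DEJAYA generalizes with its global greedy strategy.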
Data serves as the foundation for training and testing machine learning and artificial intelligence models. The most fundamental part of data is its attributes, or features. The feature set size changes from one dataset to another, and only the relevant features contribute meaningfully to classification accuracy; the presence of irrelevant features reduces the system's effectiveness. Classification performance often deteriorates on high-dimensional datasets due to the large search space. Thus, one of the significant obstacles affecting the performance of the learning process in the majority of machine learning and data mining techniques is the dimensionality of the datasets. Feature selection (FS) is an effective preprocessing step in classification tasks. The aim of applying FS is to exclude redundant and unrelated features while retaining the most informative ones, optimizing classification capability and reducing computational complexity. In this paper, a novel hybrid binary metaheuristic algorithm, termed hSC-FPA, is proposed by hybridizing the Flower Pollination Algorithm (FPA) and the Sine Cosine Algorithm (SCA). Hybridization combines the exploration capacity of SCA with the exploitation behavior of FPA to maintain a balanced search process: SCA guides the global search in the early iterations, while FPA's local pollination refines promising solutions in later stages. A binary conversion mechanism using a threshold function is implemented to handle the discrete nature of the feature selection problem. The performance of the proposed hSC-FPA is validated on fourteen standard datasets from the UCI repository using the K-Nearest Neighbors (K-NN) classifier. Experimental results are benchmarked against the standalone SCA and FPA algorithms. The hSC-FPA consistently achieves higher classification accuracy, selects a more compact feature subset, and demonstrates superior convergence behavior. These findings support the stability and outperformance of the presented hybrid feature selection method.
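The binary conversion mechanism mentioned above can be sketched with a sigmoid transfer function and a fixed threshold. The abstract does not specify the exact transfer function, so the sigmoid and the 0.5 threshold here are common assumptions, not the authors' implementation.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def to_binary(position, threshold=0.5):
    # Map a continuous search-space position to a 0/1 feature mask:
    # a feature is kept when the transfer value exceeds the threshold.
    return [1 if sigmoid(v) > threshold else 0 for v in position]

print(to_binary([2.0, -1.5, 0.3, -0.1]))  # → [1, 0, 1, 0]
```

Each 1 in the mask marks a feature passed to the K-NN wrapper evaluation.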
Existing feature selection methods for intrusion detection systems (IDS) in the Industrial Internet of Things often suffer from local optimality and high computational complexity. These challenges hinder traditional IDS from effectively extracting features while maintaining detection accuracy. This paper proposes an Industrial Internet of Things intrusion detection feature selection algorithm based on an improved whale optimization algorithm (GSLDWOA). The aim is to address the problems that feature selection algorithms are prone to under high-dimensional data, such as local optimality, long detection time, and reduced accuracy. First, the initial population's diversity is increased using a Gaussian mutation mechanism. Then, a non-linear shrinking factor balances global exploration and local exploitation, avoiding premature convergence. Lastly, a variable-step Levy flight operator and a dynamic differential evolution strategy are introduced to improve the algorithm's search efficiency and convergence accuracy in high-dimensional feature space. Experiments on the NSL-KDD and WUSTL-IIoT-2021 datasets demonstrate that the feature subset selected by GSLDWOA significantly improves detection performance. Compared to the traditional WOA algorithm, the detection rate and F1-score increased by 3.68% and 4.12%, respectively. On the WUSTL-IIoT-2021 dataset, accuracy, recall, and F1-score all exceed 99.9%.
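The Levy flight operator referred to above is typically implemented with Mantegna's algorithm. The sketch below shows one step-length draw under that common formulation; the stability index beta=1.5 is an illustrative default, and the variable-step scaling used in GSLDWOA is not reproduced.

```python
import math
import random

def levy_step(rng, beta=1.5):
    # One Levy-flight step length via Mantegna's algorithm:
    # step = u / |v|^(1/beta), with u ~ N(0, sigma^2), v ~ N(0, 1).
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

rng = random.Random(1)
steps = [levy_step(rng) for _ in range(1000)]
# Heavy-tailed: most steps are small, occasional steps are very large,
# which is what lets the search escape local optima.
print(max(abs(s) for s in steps))
```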
In recent years, feature selection (FS) optimization of high-dimensional gene expression data has become one of the most promising approaches for cancer prediction and classification. This work reviews FS and classification methods that utilize evolutionary algorithms (EAs) for gene expression profiles in cancer or medical applications, organized by research motivations, challenges, and recommendations. Relevant studies were retrieved from four major academic databases (IEEE, Scopus, Springer, and ScienceDirect) using the keywords 'cancer classification', 'optimization', 'FS', and 'gene expression profile'. A total of 67 papers were finally selected, with key advancements identified as follows: (1) The majority of papers (44.8%) focused on developing algorithms and models for FS and classification. (2) The second category encompassed studies on biomarker identification by EAs, including 20 papers (30%). (3) The third category comprised works that applied FS to cancer data for decision support system purposes, addressing high-dimensional data and the formulation of chromosome length; these studies accounted for 12% of the total. (4) The remaining three papers (4.5%) were reviews and surveys focusing on models and developments in prediction and classification optimization for cancer classification under current technical conditions. This review highlights the importance of optimizing FS in EAs to manage high-dimensional data effectively. Despite recent advancements, significant limitations remain: the dynamic formulation of chromosome length is still an underexplored area. Thus, further research is needed on dynamic-length chromosome techniques for more sophisticated biomarker gene selection. The findings suggest that further advancements in dynamic chromosome length formulations and adaptive algorithms could enhance cancer classification accuracy and efficiency.
Software defect prediction (SDP) aims to find a reliable method to predict defects in specific software projects and help software engineers allocate limited resources to release high-quality software products. Software defect prediction can be performed effectively using traditional features, but some of these features are redundant or irrelevant (their presence or absence has little effect on the prediction results). These problems can be addressed using feature selection. However, existing feature selection methods have shortcomings such as an insignificant dimensionality-reduction effect and low classification accuracy of the selected optimal feature subset. To reduce the impact of these shortcomings, this paper proposes a new feature selection method, the Cubic TraverseMa Beluga whale optimization algorithm (CTMBWO), based on the improved Beluga whale optimization algorithm (BWO). The goal of this study is to determine how well the CTMBWO can extract the features that are most important for correctly predicting software defects, improve the accuracy of fault prediction, reduce the number of selected features, and mitigate the risk of overfitting, thereby achieving more efficient resource utilization and better distribution of the testing workload. The CTMBWO comprises three main stages: preprocessing the dataset, selecting relevant features, and evaluating the classification performance of the model. The novel feature selection method can effectively improve the performance of SDP. This study performs experiments on two software defect datasets (PROMISE, NASA) and reports the method's classification performance using five evaluation metrics: Accuracy, F1-score, MCC, AUC, and Recall. The results indicate that the approach presented in this paper achieves outstanding classification performance on both datasets and improves significantly over the baseline models.
Frequency selective surface (FSS) is a two-dimensional periodic structure which has prominent bandpass or bandstop characteristics when interacting with electromagnetic waves. In this paper, the thickness, the dielectric constant, the element graph, and the arrangement periodicity of an FSS medium are investigated by a Genetic Algorithm (GA) when an electromagnetic wave is incident on the FSS at a wide angle, and an optimized FSS structure and its transmission characteristics are obtained. The results show that the optimized structure has better stability with respect to the incident angle of the electromagnetic wave and preserves the stability of the centre frequency even at an incident angle as large as 80°, thereby laying the foundation for the application of FSS to curved surfaces at wide angles.
Multi-objective Evolutionary Algorithms (MOEAs) are becoming a hot research area, and quite a few aspects of MOEAs have been studied and discussed. However, there is still little literature discussing the roles of search and selection operators in MOEAs. This paper studies their roles by solving a case of a discrete Multi-objective Optimization Problem (MOP), the multi-objective TSP, with a new MOEA. In the new MOEA, we adopt an efficient search operator, which has the properties of both crossover and mutation, to generate new individuals, and choose two selection operators, Family Competition and Population Competition, applied with probabilities to realize selection. The simulation experiments showed that this new MOEA obtains well-distributed solutions representing the Pareto front and outperformed SPEA in almost every simulation run on this problem. Furthermore, we analyzed its convergence property using a finite Markov chain and proved that it converges to the Pareto front with probability 1. We also find that the convergence property of MOEAs is closely related to the search and selection operators.
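The Pareto-front notion underlying the selection operators above can be made concrete with a dominance check and a non-dominated filter. This is a generic minimization sketch, not the paper's operators; the sample points are illustrative.

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective,
    # strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    # Keep the points that no other point dominates: the Pareto front.
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(non_dominated(pts))  # → [(1, 5), (2, 3), (4, 1)]
```

Selection operators such as Family and Population Competition decide which of these non-dominated (and near-front) individuals survive to the next generation.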
In this paper, negative selection and genetic algorithms are combined, and an improved bi-objective optimization scheme is presented to obtain optimized negative selection algorithm detectors. The main aim of the optimal detector generation technique is maximal nonself space coverage with a reduced number of diversified detectors. Conventionally, researchers have opted for clonal selection-based optimization methods to achieve maximal nonself coverage; however, the detector cloning process results in the generation of redundant, similar detectors and an inefficient detector distribution in nonself space. In the approach proposed in this paper, maximal nonself space coverage is associated with bi-objective optimization criteria: minimization of detector overlap and maximization of the diversity factor of the detectors. In the proposed methodology, a novel diversity factor-based approach is presented to obtain a diversified detector distribution in the nonself space. The concept of diversified detector distribution is studied for detector coverage with 2-dimensional pentagram and spiral self-patterns. Furthermore, the feasibility of the developed fault detection methodology is tested on fault detection of induction motor inner-race and outer-race bearings.
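The core negative selection step, generating candidate detectors and rejecting any that fall inside the self region, can be sketched in 2-D as follows. The unit-square domain, the self radius, and the point-detector representation are illustrative assumptions; the paper's GA-optimized overlap and diversity objectives are not reproduced.

```python
import math
import random

def generate_detectors(self_points, n_detectors, self_radius=0.1, seed=0):
    # Negative selection: keep only random candidates whose centre lies
    # outside the self region (farther than self_radius from every self point).
    rng = random.Random(seed)
    detectors = []
    while len(detectors) < n_detectors:
        c = (rng.random(), rng.random())
        if all(math.dist(c, s) > self_radius for s in self_points):
            detectors.append(c)
    return detectors

self_pts = [(0.5, 0.5), (0.52, 0.48)]
dets = generate_detectors(self_pts, 20)
print(len(dets))  # → 20
```

A sample that falls within a detector's activation range is then flagged as nonself (e.g., a bearing fault signature).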
The Cross-domain Heuristic Search Challenge (CHeSC) is a competition focused on creating efficient search algorithms adaptable to diverse problem domains. Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process. Numerous selection hyper-heuristics have different implementation strategies. However, comparisons between them are lacking in the literature, and previous works have not highlighted the beneficial and detrimental implementation methods of different components. The question is how to effectively employ them to produce an efficient search heuristic. Furthermore, the algorithms that competed in the inaugural CHeSC have not been collectively reviewed. This work conducts a review analysis of the top twenty competitors from this competition to identify effective and ineffective strategies influencing algorithmic performance. A summary of the main characteristics and a classification of the algorithms are presented. The analysis underlines efficient and inefficient methods in eight key components: search points, search phases, heuristic selection, move acceptance, feedback, Tabu mechanism, restart mechanism, and low-level heuristic parameter control. This review analyzes the components with reference to the competition's final leaderboard and discusses future research directions for these components. The effective approaches, identified as having the highest quality index, are mixed search points, iterated search phases, relay hybridization selection, threshold acceptance, mixed learning, Tabu heuristics, stochastic restart, and dynamic parameters. Findings are also compared with recent trends in hyper-heuristics. This work enhances the understanding of selection hyper-heuristics, offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.
Multi-Objective Evolutionary Algorithms (MOEAs) have significantly advanced the domain of Multi-Objective Optimization (MOO), facilitating solutions for complex problems with multiple conflicting objectives. This review explores the historical development of MOEAs, beginning with foundational concepts in multi-objective optimization, basic types of MOEAs, and the evolution of Pareto-based selection and niching methods. Further advancements, including decomposition-based approaches and hybrid algorithms, are discussed. Applications are analyzed in established domains such as engineering and economics, as well as in emerging fields like advanced analytics and machine learning. The significance of MOEAs in addressing real-world problems is emphasized, highlighting their role in facilitating informed decision-making. Finally, the development trajectory of MOEAs is compared with evolutionary processes, offering insights into their progress and future potential.
Casing damage resulting from sand production in unconsolidated sandstone reservoirs can significantly impact the average production of oil wells. However, the prediction task remains challenging due to the complex damage mechanism caused by sand production. This paper presents an innovative approach that combines feature selection (FS) with boosting algorithms to accurately predict casing damage in unconsolidated sandstone reservoirs. A novel TriScore FS technique is developed, combining mRMR, Random Forest, and the F-test. The approach integrates three distinct feature selection approaches (TriScore, wrapper, and hybrid TriScore-wrapper) and four interpretable boosting models (AdaBoost, XGBoost, LightGBM, CatBoost). Moreover, Shapley additive explanations (SHAP) were used to identify the most significant features across engineering, geological, and production attributes. The CatBoost model, using the hybrid TriScore-wrapper G₁G₂ FS method, showed exceptional performance in analyzing data from the Gangxi Oilfield. It achieved the highest accuracy (95.5%) and recall rate (89.7%) compared to the other tested models. Casing service time, casing wall thickness, and perforation density were identified as the top three most important features. This framework enhances predictive robustness and is an effective tool for policymakers and energy analysts, confirming its capability to deliver reliable casing damage forecasts.
In recent years, particle swarm optimization (PSO) has received widespread attention in feature selection due to its simplicity and potential for global search. However, in traditional PSO, particles primarily update ...In recent years, particle swarm optimization (PSO) has received widespread attention in feature selection due to its simplicity and potential for global search. However, in traditional PSO, particles primarily update based on two extreme values: personal best and global best, which limits the diversity of information. Ideally, particles should learn from multiple advantageous particles to enhance interactivity and optimization efficiency. Accordingly, this paper proposes a PSO that simulates the evolutionary dynamics of species survival in mountain peak ecology (PEPSO) for feature selection. Based on the pyramid topology, the algorithm simulates the features of mountain peak ecology in nature and the competitive-cooperative strategies among species. According to the principles of the algorithm, the population is first adaptively divided into many subgroups based on the fitness level of particles. Then, particles within each subgroup are divided into three different types based on their evolutionary levels, employing different adaptive inertia weight rules and dynamic learning mechanisms to define distinct learning modes. Consequently, all particles play their respective roles in promoting the global optimization performance of the algorithm, similar to different species in the ecological pattern of mountain peaks. Experimental validation of the PEPSO performance was conducted on 18 public datasets. The experimental results demonstrate that the PEPSO outperforms other PSO variant-based feature selection methods and mainstream feature selection methods based on intelligent optimization algorithms in terms of overall performance in global search capability, classification accuracy, and reduction of feature space dimensions. 
The Wilcoxon signed-rank test also confirms the excellent performance of the PEPSO.
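The canonical personal-best/global-best update that PEPSO extends can be sketched as follows. This is the standard PSO baseline the abstract contrasts against, not the PEPSO variant itself; the inertia and learning coefficients are conventional illustrative values.

```python
import random

def pso(obj, dim=2, pop=15, iters=100, lb=-5.0, ub=5.0,
        w=0.7, c1=1.5, c2=1.5, seed=3):
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(pop)]
    V = [[0.0] * dim for _ in range(pop)]
    pbest = [x[:] for x in X]
    pfit = [obj(x) for x in X]
    g = pfit.index(min(pfit))
    gbest, gfit = pbest[g][:], pfit[g]
    for _ in range(iters):
        for i in range(pop):
            for d in range(dim):
                # Canonical update: inertia + cognitive (pbest) + social (gbest).
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (pbest[i][d] - X[i][d])
                           + c2 * rng.random() * (gbest[d] - X[i][d]))
                X[i][d] = min(ub, max(lb, X[i][d] + V[i][d]))
            f = obj(X[i])
            if f < pfit[i]:
                pbest[i], pfit[i] = X[i][:], f
                if f < gfit:
                    gbest, gfit = X[i][:], f
    return gfit

print(pso(lambda x: sum(v * v for v in x)))
```

PEPSO's contribution is to replace the two fixed attractors with subgroup-specific learning modes, so particles can draw on multiple advantageous neighbors instead of only pbest and gbest.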
This paper addresses the problem of selecting a route for every pair of communicating nodes in a virtual circuit data network in order to minimize the average delay encountered by messages. The problem was previously modeled as a network of M/M/1 queues. A genetic algorithm to solve this problem is presented. Extensive computational results across a variety of networks are reported. These results indicate that the presented solution procedure outperforms other methods in the literature and is effective for a wide range of traffic loads.
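The M/M/1 delay model underlying the objective can be sketched directly: each link behaves as an M/M/1 queue with mean sojourn time 1/(mu - lambda), and the network objective is the traffic-weighted average of per-link delays. The link rates below are illustrative numbers, not data from the paper.

```python
def mm1_delay(service_rate, arrival_rate):
    # Mean sojourn time in an M/M/1 queue: T = 1 / (mu - lambda),
    # valid only in the stable regime lambda < mu.
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    return 1.0 / (service_rate - arrival_rate)

def network_average_delay(links, total_traffic):
    # Kleinrock-style average delay: traffic-weighted sum of per-link
    # M/M/1 delays, divided by the total external traffic.
    # `links` is a list of (arrival_rate, service_rate) pairs on chosen routes.
    return sum(lam * mm1_delay(mu, lam) for lam, mu in links) / total_traffic

# Two links carrying 3 and 4 messages/s with capacities 10 and 8 messages/s;
# total external traffic is 5 messages/s (routes may traverse several links).
print(network_average_delay([(3, 10), (4, 8)], 5))
```

A genetic algorithm then searches over route assignments, with this quantity as the fitness to minimize.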
With the birth of Software-Defined Networking (SDN), the integration of SDN and traditional architectures has become the development trend of computer networks. Network intrusion detection faces challenges in dealing with complex attacks in SDN environments. To address these network security issues from the viewpoint of Artificial Intelligence (AI), this paper introduces the Crayfish Optimization Algorithm (COA) to the field of intrusion detection for both SDN and traditional network architectures. Based on the characteristics of the original COA, an Improved Crayfish Optimization Algorithm (ICOA) is proposed by integrating strategies of elite reverse learning, Levy flight, a crowding factor, and parameter modification. The ICOA is then utilized for AI-integrated feature selection in intrusion detection for both SDN and traditional network architectures, to reduce the dimensionality of the data and improve the performance of network intrusion detection. Finally, the performance evaluation is carried out on the NSL-KDD and UNSW-NB15 datasets for traditional networks as well as the InSDN dataset for SDN-based networks. Experimental results show that ICOA improves accuracy by 0.532% and 2.928% compared with GWO and COA, respectively, in traditional networks. In SDN networks, the accuracy of ICOA is 0.25% and 0.3% higher than that of COA and PSO. These findings collectively indicate that AI-integrated feature selection based on the proposed ICOA can promote network intrusion detection for both SDN and traditional architectures.
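The elite reverse learning strategy mentioned above is usually an opposition-based move: reflect a solution through the centre of the search interval. A minimal sketch of that reflection, with illustrative bounds:

```python
def elite_reverse(x, lb, ub, k=1.0):
    # Opposition-based (reverse) learning: x' = k * (lb + ub) - x,
    # reflecting each coordinate through the interval midpoint.
    return [k * (l + u) - v for v, l, u in zip(x, lb, ub)]

x = [1.0, -2.0, 3.0]
print(elite_reverse(x, [-5.0] * 3, [5.0] * 3))  # → [-1.0, 2.0, -3.0]
```

In algorithms like ICOA, the reversed candidate is evaluated alongside the original, and the better of the two is kept, which widens early exploration.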
In this paper, we analyze the features and distinctions of six classical algorithms: the greedy algorithm (G), the greedy evolution algorithm (GE), the heuristic algorithm (H), the greedy heuristic (GRE), the integer linear programming algorithm (ILP), and the genetic algorithm (GA), in order to examine the main influencing factors: the performance of the algorithms and their running time. Moreover, we not only present a research design that aims at gaining a deeper understanding of the classification of these algorithms, their function, and their distinctions, but also carry out an empirical study in order to obtain a practical range standard that can guide the selection of reduction algorithms. When the size of a test object (the product of test requirements and test cases) is smaller than 2000×2000, the G algorithm is the commonly recommended algorithm. With the growth of test size, the usage of GE and GRE becomes more general.
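The greedy (G) reduction algorithm compared above is essentially greedy set cover over test requirements. A minimal sketch with a hypothetical toy suite (the test-case names and requirement sets are illustrative, not from the paper):

```python
def greedy_reduce(requirements, test_cases):
    # Greedy test-suite reduction: repeatedly pick the test case that
    # covers the most still-unsatisfied requirements.
    uncovered = set(requirements)
    selected = []
    while uncovered:
        best = max(test_cases, key=lambda t: len(test_cases[t] & uncovered))
        if not test_cases[best] & uncovered:
            break  # remaining requirements are not coverable by any test case
        selected.append(best)
        uncovered -= test_cases[best]
    return selected

suite = {
    "t1": {"r1", "r2", "r3"},
    "t2": {"r3", "r4"},
    "t3": {"r4", "r5"},
    "t4": {"r1"},
}
print(greedy_reduce({"r1", "r2", "r3", "r4", "r5"}, suite))  # → ['t1', 't3']
```

The other five algorithms trade this O(n·m)-style simplicity for better reduction quality at larger test-object sizes.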
The neutron supermirror is an important neutron optical device that can significantly improve the efficiency of neutron transport in neutron guides and has been widely used in research neutron sources. Three types of algorithms, comprising approximately ten algorithms in total, have been developed for designing high-efficiency supermirror structures. In addition to applications in neutron guides, in recent years the use of neutron supermirrors in neutron-focusing mirrors has been proposed to advance the development of neutron scattering and neutron imaging instruments, especially those at compact neutron sources. In this new application scenario, the performance of supermirrors strongly affects the instrument performance; therefore, a careful evaluation of the design algorithms is needed. In this study, we examine two issues: the effect of a nonuniform film thickness distribution on a curved substrate, and the effect of the specific neutron intensity distribution, on the performance of neutron supermirrors designed using existing algorithms. The effect of film thickness nonuniformity is found to be relatively insignificant, whereas the effect of the neutron intensity distribution over Q (where Q is the magnitude of the scattering vector of incident neutrons) is considerable. Selection diagrams that show the best design algorithm under different conditions are obtained from these results. When the intensity distribution is not considered, empirical algorithms obtain the highest average reflectivity, whereas discrete algorithms perform best when the intensity distribution is taken into account. The reasons for the differences in performance between algorithms are also discussed. These findings provide a reference for selecting design algorithms for supermirrors used in neutron optical devices with unique geometries and can be very helpful for improving the performance of focusing supermirror-based instruments.
Bioactive compounds in plants, which can be synthesized using N-arylation methods such as the Buchwald-Hartwig reaction, are essential in drug discovery for their pharmacological effects. Important descriptors are necessary for the estimation of yields in these reactions. This study explores ten metaheuristic algorithms for descriptor selection and models a voting ensemble for evaluation. The algorithms were evaluated based on computational time and the number of selected descriptors. Analyses show that robust performance is obtained with more descriptors, compared to cases where fewer descriptors are selected. The essential descriptors were deduced based on their frequency of occurrence within the 50 extracted data subsets, and better performance was achieved with the voting ensemble than with the other algorithms, with an RMSE of 6.4270 and an R² of 0.9423. The results and deductions from this study can be readily applied in the decision-making process of chemical synthesis by saving the computational cost associated with initial descriptor selection for yield estimation. The ensemble model has also shown robust performance in its yield estimation ability and efficiency.
The recycling of glass bottles can reduce the consumption of resources and contribute to environmental protection. At present, the classification of recycled glass bottles is difficult due to the many differences in specifications and models. This paper proposes a classification algorithm for glass bottles that is divided into two stages: candidate region extraction and classifier-based classification. In the candidate region extraction stage, aiming at the large time overhead caused by the use of the SIFT (scale-invariant feature transform) descriptor in SS (selective search), an improved feature, HLSN (Haar-like based on SPP-Net), is proposed. An integral image is introduced to accelerate the process of forming an HBSN vector, which overcomes the problem of repeated texture feature calculation in overlapping regions by SS. In the classification stage, the improved SS algorithm is used to extract target regions. The target regions are merged using a non-maximum suppression algorithm according to the classification scores of the respective regions, and the merged regions are classified using the trained classifier. Experiments demonstrate that, compared with the original SS, the improved SS algorithm increases the calculation speed by 13.8%, and its classification accuracy is 89.4%. Additionally, the classification algorithm for glass bottles has a certain resistance to noise.
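The non-maximum suppression step used to merge scored regions can be sketched as follows. This is the standard greedy IoU-based formulation; the boxes, scores, and 0.5 threshold are illustrative, not values from the paper.

```python
def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.5):
    # Keep the highest-scoring box, drop boxes overlapping it above
    # `thresh`, and repeat on the remainder.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]
```

The surviving regions are then passed to the trained classifier.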
Feature selection is an active area in data mining research and development. It draws on efforts and contributions from a wide variety of communities, including statistics, machine learning, and pattern recognition. This diversity, on the one hand, equips us with many methods and tools; on the other hand, the profusion of options causes confusion. This paper reviews various feature selection methods and identifies research challenges at the forefront of this exciting area.
Multi-label feature selection (MFS) is a crucial dimensionality reduction technique aimed at identifying informative features associated with multiple labels. However, traditional centralized methods face significant challenges in privacy-sensitive and distributed settings, often neglecting label dependencies and suffering from low computational efficiency. To address these issues, we introduce a novel framework, Fed-MFSDHBCPSO: federated MFS via a dual-layer hybrid breeding cooperative particle swarm optimization algorithm with manifold and sparsity regularization (DHBCPSO-MSR). Leveraging the federated learning paradigm, Fed-MFSDHBCPSO allows clients to perform local feature selection (FS) using DHBCPSO-MSR. Locally selected feature subsets are encrypted with differential privacy (DP) and transmitted to a central server, where they are securely aggregated and refined through secure multi-party computation (SMPC) until global convergence is achieved. Within each client, DHBCPSO-MSR employs a dual-layer FS strategy. The inner layer constructs sample and label similarity graphs, generates Laplacian matrices to capture the manifold structure between samples and labels, and applies L2,1-norm regularization to sparsify the feature subset, yielding an optimized feature weight matrix. The outer layer uses a hybrid breeding cooperative particle swarm optimization algorithm to further refine the feature weight matrix and identify the optimal feature subset. The updated weight matrix is then fed back to the inner layer for further optimization. Comprehensive experiments on multiple real-world multi-label datasets demonstrate that Fed-MFSDHBCPSO consistently outperforms both centralized and federated baseline methods across several key evaluation metrics.
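The L2,1-norm regularizer mentioned above is the sum of the Euclidean norms of the rows of the feature weight matrix; penalizing it drives whole rows (i.e., whole features) to zero, which is what produces a sparse feature subset. A minimal sketch with an illustrative matrix:

```python
import math

def l21_norm(W):
    # L2,1 norm: sum over rows of each row's Euclidean (L2) norm.
    # Rows correspond to features, columns to labels, so the penalty
    # encourages row-wise (feature-level) sparsity.
    return sum(math.sqrt(sum(v * v for v in row)) for row in W)

W = [[3.0, 4.0],   # active feature: row norm 5
     [0.0, 0.0],   # pruned feature: row norm 0
     [1.0, 0.0]]   # active feature: row norm 1
print(l21_norm(W))  # → 6.0
```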
Funding: Supported by a research grant from Lahore College for Women University (LCWU), Lahore, Pakistan.
Abstract: Data serves as the foundation for training and testing machine learning and artificial intelligence models. The most fundamental part of data is its attributes or features. The feature set size changes from one dataset to another. Only the relevant features contribute meaningfully to classification accuracy. The presence of irrelevant features reduces the system's effectiveness. Classification performance often deteriorates on high-dimensional datasets due to the large search space. Thus, one of the significant obstacles affecting the performance of the learning process in most machine learning and data mining techniques is the dimensionality of the datasets. Feature selection (FS) is an effective preprocessing step in classification tasks. The aim of applying FS is to exclude redundant and unrelated features while retaining the most informative ones, optimizing classification capability and reducing computational complexity. In this paper, a novel hybrid binary metaheuristic algorithm, termed hSC-FPA, is proposed by hybridizing the Flower Pollination Algorithm (FPA) and the Sine Cosine Algorithm (SCA). Hybridization combines the exploration capacity of SCA with the exploitation behavior of FPA to maintain a balanced search process. SCA guides the global search in the early iterations, while FPA's local pollination refines promising solutions in later stages. A binary conversion mechanism using a threshold function is implemented to handle the discrete nature of the feature selection problem. The proposed hSC-FPA is validated on fourteen standard datasets from the UCI repository using the K-Nearest Neighbors (K-NN) classifier. Experimental results are benchmarked against the standalone SCA and FPA algorithms. The hSC-FPA consistently achieves higher classification accuracy, selects a more compact feature subset, and demonstrates superior convergence behavior. These findings support the stability and outperformance of the presented hybrid feature selection method.
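The threshold-based binary conversion described above is typically realized with an S-shaped transfer function that maps each continuous dimension to a bit-selection probability. A minimal sketch (the specific sigmoid and stochastic thresholding rule here are common choices, assumed rather than taken from the paper):

```python
import math
import random

def binarize(position, rng=random.random):
    """Map a continuous position vector to a 0/1 feature mask.

    Each dimension is passed through a sigmoid transfer function and
    thresholded against a uniform random draw: large positive values
    almost always select the feature, large negative values almost
    never do."""
    bits = []
    for x in position:
        s = 1.0 / (1.0 + math.exp(-x))  # S-shaped transfer, in (0, 1)
        bits.append(1 if rng() < s else 0)
    return bits
```

The resulting bit vector indicates which features the candidate solution keeps.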
Funding: Supported by the Major Science and Technology Programs in Henan Province (No. 241100210100), the Henan Provincial Science and Technology Research Project (No. 252102211085, No. 252102211105), the Endogenous Security Cloud Network Convergence R&D Center (No. 602431011PQ1), the Special Project for Research and Development in Key Areas of Guangdong Province (No. 2021ZDZX1098), the Stabilization Support Program of the Science, Technology and Innovation Commission of Shenzhen Municipality (No. 20231128083944001), and the Key Scientific Research Projects of Henan Higher Education Institutions (No. 24A520042).
Abstract: Existing feature selection methods for intrusion detection systems (IDS) in the Industrial Internet of Things often suffer from local optimality and high computational complexity. These challenges hinder traditional IDS from effectively extracting features while maintaining detection accuracy. This paper proposes an Industrial Internet of Things intrusion detection feature selection algorithm based on an improved whale optimization algorithm (GSLDWOA). The aim is to address the problems that feature selection algorithms are prone to under high-dimensional data, such as local optimality, long detection time, and reduced accuracy. First, the initial population's diversity is increased using a Gaussian mutation mechanism. Then, a non-linear shrinking factor balances global exploration and local exploitation, avoiding premature convergence. Lastly, a variable-step Levy flight operator and a dynamic differential evolution strategy are introduced to improve the algorithm's search efficiency and convergence accuracy in high-dimensional feature space. Experiments on the NSL-KDD and WUSTL-IIoT-2021 datasets demonstrate that the feature subset selected by GSLDWOA significantly improves detection performance. Compared to the traditional WOA algorithm, the detection rate and F1-score increased by 3.68% and 4.12%, respectively. On the WUSTL-IIoT-2021 dataset, accuracy, recall, and F1-score all exceed 99.9%.
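Variable-step Levy flight is usually implemented by drawing heavy-tailed steps with Mantegna's algorithm. A hedged sketch of the step generator only (GSLDWOA's step-size schedule and the surrounding WOA position update are not shown):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Levy-distributed step via Mantegna's algorithm.

    u is Gaussian with a beta-dependent scale sigma, v is standard
    Gaussian, and u / |v|^(1/beta) approximates a Levy-stable step:
    mostly small moves with occasional long jumps that help escape
    local optima."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma = (num / den) ** (1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)
```

The occasional long jumps are what distinguish Levy flight from a plain Gaussian random walk.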
Funding: Funded by the Ministry of Higher Education of Malaysia, grant number FRGS/1/2022/ICT02/UPSI/02/1.
Abstract: In recent years, feature selection (FS) optimization of high-dimensional gene expression data has become one of the most promising approaches for cancer prediction and classification. This work reviews FS and classification methods that utilize evolutionary algorithms (EAs) for gene expression profiles in cancer or medical applications, organized by research motivations, challenges, and recommendations. Relevant studies were retrieved from four major academic databases (IEEE, Scopus, Springer, and ScienceDirect) using the keywords 'cancer classification', 'optimization', 'FS', and 'gene expression profile'. A total of 67 papers were finally selected, with key advancements identified as follows: (1) The majority of papers (44.8%) focused on developing algorithms and models for FS and classification. (2) The second category encompassed studies on biomarker identification by EAs, including 20 papers (30%). (3) The third category comprised works that applied FS to cancer data for decision support system purposes, addressing high-dimensional data and the formulation of chromosome length; these studies accounted for 12% of the total. (4) The remaining three papers (4.5%) were reviews and surveys focusing on models and developments in prediction and classification optimization for cancer classification under current technical conditions. This review highlights the importance of optimizing FS in EAs to manage high-dimensional data effectively. Despite recent advancements, significant limitations remain: the dynamic formulation of chromosome length is still an underexplored area. Thus, further research is needed on dynamic-length chromosome techniques for more sophisticated biomarker gene selection. The findings suggest that further advancements in dynamic chromosome length formulations and adaptive algorithms could enhance cancer classification accuracy and efficiency.
Abstract: Software defect prediction (SDP) aims to find a reliable method to predict defects in specific software projects and help software engineers allocate limited resources to release high-quality software products. Software defect prediction can be performed effectively using traditional features, but some of them are redundant or irrelevant (their presence or absence has little effect on the prediction results). These problems can be addressed through feature selection. However, existing feature selection methods have shortcomings such as an insignificant dimensionality reduction effect and low classification accuracy of the selected optimal feature subset. To reduce the impact of these shortcomings, this paper proposes a new feature selection method, the Cubic TraverseMa Beluga whale optimization algorithm (CTMBWO), based on the improved Beluga whale optimization algorithm (BWO). The goal of this study is to determine how well the CTMBWO can extract the features that matter most for correctly predicting software defects, improve the accuracy of fault prediction, reduce the number of selected features, and mitigate the risk of overfitting, thereby achieving more efficient resource utilization and better distribution of test workload. The CTMBWO comprises three main stages: preprocessing the dataset, selecting relevant features, and evaluating the classification performance of the model. The novel feature selection method can effectively improve the performance of SDP. This study performs experiments on two software defect datasets (PROMISE, NASA) and reports the method's classification performance using five evaluation metrics: Accuracy, F1-score, MCC, AUC, and Recall. The results indicate that the approach presented in this paper achieves outstanding classification performance on both datasets and improves significantly over the baseline models.
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 10647105).
Abstract: A frequency selective surface (FSS) is a two-dimensional periodic structure with prominent band-pass or band-block characteristics when interacting with electromagnetic waves. In this paper, the thickness, the dielectric constant, the element graph, and the arrangement periodicity of an FSS medium are investigated by a Genetic Algorithm (GA) for an electromagnetic wave incident on the FSS at a wide angle, and an optimized FSS structure and its transmission characteristics are obtained. The results show that the optimized structure has better stability with respect to the incident angle of the electromagnetic wave and preserves the stability of the centre frequency even at incident angles as large as 80°, thereby laying the foundation for the application of FSS to curved surfaces at wide angles.
Funding: Supported by the National Natural Science Foundation of China (60133010, 70071042, 60073043).
Abstract: Multi-objective Evolutionary Algorithms (MOEAs) are becoming a hot research area, and quite a few aspects of MOEAs have been studied and discussed. However, there are still few works discussing the roles of search and selection operators in MOEAs. This paper studies their roles by solving a case of a discrete Multi-objective Optimization Problem (MOP), the multi-objective TSP, with a new MOEA. In the new MOEA, we adopt an efficient search operator, which has the properties of both crossover and mutation, to generate new individuals, and choose two selection operators, Family Competition and Population Competition, applied with probabilities to realize selection. Simulation experiments show that this new MOEA obtains good uniform solutions representing the Pareto front and outperforms SPEA in almost every simulation run on this problem. Furthermore, we analyze its convergence property using a finite Markov chain and prove that it converges to the Pareto front with probability 1. We also find that the convergence property of MOEAs is closely related to their search and selection operators.
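The Pareto front mentioned above is the set of non-dominated objective vectors. A minimal dominance check and front extraction, using the minimization convention (illustrative names, not the paper's code):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

MOEAs such as SPEA rank and select individuals based on exactly this dominance relation.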
Abstract: In this paper, negative selection and genetic algorithms are combined, and an improved bi-objective optimization scheme is presented to obtain optimized negative selection algorithm detectors. The main aim of the optimal detector generation technique is maximal nonself space coverage with a reduced number of diversified detectors. Conventionally, researchers have opted for clonal selection-based optimization methods to achieve maximal nonself coverage; however, the detector cloning process generates redundant, similar detectors and an inefficient detector distribution in nonself space. In the approach proposed in the present paper, maximal nonself space coverage is associated with bi-objective optimization criteria: minimization of detector overlap and maximization of the diversity factor of the detectors. A novel diversity factor-based approach is presented to obtain a diversified detector distribution in nonself space. The concept of diversified detector distribution is studied for detector coverage with 2-dimensional pentagram and spiral self-patterns. Furthermore, the feasibility of the developed fault detection methodology is tested on the fault detection of induction motor inner-race and outer-race bearings.
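Real-valued negative selection, the mechanism this scheme optimizes, can be sketched minimally as censoring random candidate detectors against the self set (the paper's bi-objective GA over overlap and diversity is not shown; the names, unit-square domain, and radius are illustrative):

```python
import math
import random

def negative_selection(self_samples, n_detectors, r, rng, dim=2):
    """Generate detectors in the unit hypercube that avoid the self set:
    a candidate is kept only if it lies farther than radius r from every
    self sample, so detectors cover only nonself space."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.uniform(0, 1) for _ in range(dim)]
        if all(math.dist(cand, s) > r for s in self_samples):
            detectors.append(cand)
    return detectors

def is_nonself(x, detectors, r):
    """An input is flagged anomalous if any detector covers it."""
    return any(math.dist(x, d) <= r for d in detectors)
```

By construction, normal (self) samples are never flagged; faults that fall inside some detector's radius are.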
Funding: Funded by the Ministry of Higher Education (MoHE) Malaysia, under the Transdisciplinary Research Grant Scheme (TRGS/1/2019/UKM/01/4/2).
Abstract: The Cross-domain Heuristic Search Challenge (CHeSC) is a competition focused on creating efficient search algorithms adaptable to diverse problem domains. Selection hyper-heuristics are a class of algorithms that dynamically choose heuristics during the search process. Numerous selection hyper-heuristics follow different implementation strategies. However, comparisons between them are lacking in the literature, and previous works have not highlighted which implementation choices for the different components are beneficial or detrimental. The question is how to employ them effectively to produce an efficient search heuristic. Furthermore, the algorithms that competed in the inaugural CHeSC have not been collectively reviewed. This work conducts a review analysis of the top twenty competitors from this competition to identify effective and ineffective strategies influencing algorithmic performance. A summary of the main characteristics and a classification of the algorithms are presented. The analysis underlines efficient and inefficient methods across eight key components: search points, search phases, heuristic selection, move acceptance, feedback, the Tabu mechanism, the restart mechanism, and low-level heuristic parameter control. The review analyzes the components with reference to the competition's final leaderboard and discusses future research directions for these components. The effective approaches, identified as having the highest quality index, are mixed search points, iterated search phases, relay hybridization selection, threshold acceptance, mixed learning, Tabu heuristics, stochastic restart, and dynamic parameters. Findings are also compared with recent trends in hyper-heuristics. This work enhances the understanding of selection hyper-heuristics, offering valuable insights for researchers and practitioners aiming to develop effective search algorithms for diverse problem domains.
Abstract: Multi-Objective Evolutionary Algorithms (MOEAs) have significantly advanced the domain of Multi-Objective Optimization (MOO), facilitating solutions for complex problems with multiple conflicting objectives. This review explores the historical development of MOEAs, beginning with foundational concepts in multi-objective optimization, basic types of MOEAs, and the evolution of Pareto-based selection and niching methods. Further advancements, including decomposition-based approaches and hybrid algorithms, are discussed. Applications are analyzed in established domains such as engineering and economics, as well as in emerging fields like advanced analytics and machine learning. The significance of MOEAs in addressing real-world problems is emphasized, highlighting their role in facilitating informed decision-making. Finally, the development trajectory of MOEAs is compared with evolutionary processes, offering insights into their progress and future potential.
Funding: Funded by the National Natural Science Foundation Project (Grant No. 52274015) and the National Science and Technology Major Project (Grant No. 2025ZD1402205).
Abstract: Casing damage resulting from sand production in unconsolidated sandstone reservoirs can significantly impact the average production of oil wells. However, the prediction task remains challenging due to the complex damage mechanism caused by sand production. This paper presents an innovative approach that combines feature selection (FS) with boosting algorithms to accurately predict casing damage in unconsolidated sandstone reservoirs. A novel TriScore FS technique is developed, combining mRMR, Random Forest, and the F-test. The approach integrates three distinct feature selection approaches (TriScore, wrapper, and hybrid TriScore-wrapper) and four interpretable boosting models (AdaBoost, XGBoost, LightGBM, CatBoost). Moreover, Shapley Additive Explanations (SHAP) was used to identify the most significant features across engineering, geological, and production features. The CatBoost model, using the hybrid TriScore-wrapper G_(1)G_(2) FS method, showed exceptional performance in analyzing data from the Gangxi Oilfield. It achieved the highest accuracy (95.5%) and recall rate (89.7%) compared to the other tested models. Casing service time, casing wall thickness, and perforation density were selected as the top three most important features. This framework enhances predictive robustness and is an effective tool for policymakers and energy analysts, confirming its capability to deliver reliable casing damage forecasts.
Abstract: In recent years, particle swarm optimization (PSO) has received widespread attention in feature selection due to its simplicity and potential for global search. However, in traditional PSO, particles primarily update based on two extreme values, the personal best and the global best, which limits the diversity of information. Ideally, particles should learn from multiple advantageous particles to enhance interactivity and optimization efficiency. Accordingly, this paper proposes a PSO that simulates the evolutionary dynamics of species survival in mountain peak ecology (PEPSO) for feature selection. Based on a pyramid topology, the algorithm simulates the features of mountain peak ecology in nature and the competitive-cooperative strategies among species. According to the principles of the algorithm, the population is first adaptively divided into many subgroups based on the fitness level of particles. Then, particles within each subgroup are divided into three different types based on their evolutionary levels, employing different adaptive inertia weight rules and dynamic learning mechanisms to define distinct learning modes. Consequently, all particles play their respective roles in promoting the global optimization performance of the algorithm, similar to different species in the ecological pattern of mountain peaks. Experimental validation of the PEPSO performance was conducted on 18 public datasets. The experimental results demonstrate that the PEPSO outperforms other PSO variant-based feature selection methods and mainstream feature selection methods based on intelligent optimization algorithms in terms of overall performance in global search capability, classification accuracy, and reduction of feature space dimensions. The Wilcoxon signed-rank test also confirms the excellent performance of the PEPSO.
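The two-extremum update that the abstract identifies as traditional PSO's limitation is the canonical velocity rule with an inertia weight. A minimal sketch (variable names are ours; PEPSO's pyramid topology and multi-exemplar learning are not shown):

```python
import random

def pso_step(swarm, vel, pbest, gbest, w, c1=2.0, c2=2.0, rng=random):
    """One canonical PSO iteration: each particle's velocity blends its
    previous velocity (scaled by inertia weight w) with random pulls
    toward its personal best and the swarm's global best."""
    for i, x in enumerate(swarm):
        for d in range(len(x)):
            r1, r2 = rng.random(), rng.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - x[d])
                         + c2 * r2 * (gbest[d] - x[d]))
            x[d] += vel[i][d]
    return swarm, vel
```

Since every particle learns from only these two exemplars, information diversity is limited, which is the gap PEPSO's subgroup-based learning modes aim to fill.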
Abstract: This paper addresses the problem of selecting a route for every pair of communicating nodes in a virtual circuit data network in order to minimize the average delay encountered by messages. The problem was previously modeled as a network of M/M/1 queues. A genetic algorithm to solve this problem is presented. Extensive computational results across a variety of networks are reported. These results indicate that the presented solution procedure outperforms the other methods in the literature and is effective for a wide range of traffic loads.
Funding: Supported by the National Natural Science Foundation of China under Grant 61602162 and the Hubei Provincial Science and Technology Plan Project under Grant 2023BCB041.
Abstract: With the birth of Software-Defined Networking (SDN), the integration of SDN and traditional architectures has become the development trend of computer networks. Network intrusion detection faces challenges in dealing with complex attacks in SDN environments. To address these network security issues from the viewpoint of Artificial Intelligence (AI), this paper introduces the Crayfish Optimization Algorithm (COA) to the field of intrusion detection for both SDN and traditional network architectures, and, based on the characteristics of the original COA, proposes an Improved Crayfish Optimization Algorithm (ICOA) by integrating strategies of elite reverse learning, Levy flight, a crowding factor, and parameter modification. The ICOA is then utilized for AI-integrated feature selection in intrusion detection for both SDN and traditional network architectures, to reduce the dimensionality of the data and improve the performance of network intrusion detection. Finally, performance evaluation is carried out not only on the NSL-KDD and UNSW-NB15 datasets for traditional networks but also on the InSDN dataset for SDN-based networks. Experimental results show that ICOA improves accuracy by 0.532% and 2.928%, respectively, compared with GWO and COA in traditional networks. In SDN networks, the accuracy of ICOA is 0.25% and 0.3% higher than that of COA and PSO. These findings collectively indicate that AI-integrated feature selection based on the proposed ICOA can promote network intrusion detection for both SDN and traditional architectures.
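Elite reverse learning builds on opposition-based learning, which mirrors each solution across the midpoint of the search interval so the fitter of each original/opposite pair can be kept. A minimal sketch of the mirroring step only (the "elite" restriction and ICOA's other strategies are omitted; the function name is ours):

```python
def reverse_learning(pop, lo, hi):
    """Opposition-based learning: for each solution x, produce the
    mirrored point x' = lo + hi - x in every dimension. Evaluating both
    x and x' roughly doubles the chance of landing near the optimum."""
    return [[lo[d] + hi[d] - x[d] for d in range(len(x))] for x in pop]
```

A caller would typically evaluate the original and mirrored populations together and retain the best individuals from the union.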
Funding: Supported by the National Natural Science Foundation of China (10904080).
Abstract: In this paper, we analyze the features and distinctions of six classical algorithms: the greedy algorithm (G), greedy evolution algorithm (GE), heuristic algorithm (H), greedy heuristic algorithm (GRE), integer linear programming algorithm (ILP), and genetic algorithm (GA), in order to identify the main influencing factors: the performance of the algorithms and their running time. Moreover, we not only present a research design aimed at gaining a deeper understanding of the algorithm classification, its function, and the distinctions among the algorithms, but also conduct an empirical study in order to obtain a practical range standard that can guide the selection of reduction algorithms. When the size of a test object (the product of test requirements and test cases) is smaller than 2000×2000, the G algorithm is the commonly recommended algorithm. With the growth of test size, the usage of GE and GRE becomes more general.
Funding: Supported by the National Natural Science Foundation of China (Nos. 12027810 and 11322548).
Abstract: The neutron supermirror is an important neutron optical device that can significantly improve the efficiency of neutron transport in neutron guides and has been widely used at research neutron sources. Three types of algorithms, comprising approximately ten algorithms in total, have been developed for designing high-efficiency supermirror structures. In addition to their applications in neutron guides, in recent years the use of neutron supermirrors in neutron-focusing mirrors has been proposed to advance the development of neutron scattering and neutron imaging instruments, especially those at compact neutron sources. In this new application scenario, the performance of supermirrors strongly affects instrument performance; therefore, a careful evaluation of the design algorithms is needed. In this study, we examine two issues: the effect of a nonuniform film thickness distribution on a curved substrate and the effect of the specific neutron intensity distribution on the performance of neutron supermirrors designed using existing algorithms. The effect of film thickness nonuniformity is found to be relatively insignificant, whereas the effect of the neutron intensity distribution over Q (where Q is the magnitude of the scattering vector of incident neutrons) is considerable. Selection diagrams that show the best design algorithm under different conditions are obtained from these results. When the intensity distribution is not considered, empirical algorithms obtain the highest average reflectivity, whereas discrete algorithms perform best when the intensity distribution is taken into account. The reasons for the differences in performance between the algorithms are also discussed. These findings provide a reference for selecting design algorithms for supermirrors used in neutron optical devices with unique geometries and can be very helpful for improving the performance of focusing supermirror-based instruments.
Funding: The work described in this paper was substantially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region [CityU 11200218], a grant from the Health and Medical Research Fund, the Food and Health Bureau, The Government of the Hong Kong Special Administrative Region [07181426], and funding from the Hong Kong Institute for Data Science (HKIDS) at City University of Hong Kong. The work was partially supported by two grants from City University of Hong Kong (CityU 11202219, CityU 11203520). This research was substantially sponsored by the research project (Grant No. 32000464) supported by the National Natural Science Foundation of China and was substantially supported by the Shenzhen Research Institute, City University of Hong Kong. The authors extend their appreciation to the Deputyship for Research & Innovation, Ministry of Education in Saudi Arabia for funding this research under project number (442/77).
Abstract: Bioactive compounds in plants, which can be synthesized using N-arylation methods such as the Buchwald-Hartwig reaction, are essential in drug discovery for their pharmacological effects. Informative descriptors are necessary for estimating the yields of these reactions. This study explores ten metaheuristic algorithms for descriptor selection and models a voting ensemble for evaluation. The algorithms were evaluated based on computational time and the number of selected descriptors. Analyses show that robust performance is obtained with more descriptors, compared to cases where fewer descriptors are selected. The essential descriptors were deduced based on their frequency of occurrence within the 50 extracted data subsets, and better performance was achieved with the voting ensemble than with the other algorithms, with an RMSE of 6.4270 and an R² of 0.9423. The results and deductions from this study can be readily applied in the decision-making process of chemical synthesis by saving the computational cost associated with initial descriptor selection for yield estimation. The ensemble model has also shown robust performance in its yield estimation ability and efficiency.
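The reported RMSE and R² are the standard regression metrics; for reference, their definitions written out as code:

```python
def rmse(y_true, y_pred):
    """Root mean squared error: sqrt of the mean squared residual."""
    n = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n) ** 0.5

def r2(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot, where SS_tot
    is the squared deviation of y_true from its mean. 1.0 is a perfect
    fit; 0.0 matches a constant mean predictor."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot
```

An R² of 0.9423 thus means the ensemble explains about 94% of the variance in the observed yields.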
Funding: Research Foundation of the Education Bureau of Jilin Province (JJKN20190710KJ) and the Science and Technology Innovation Development Plan Project of Jilin City (20190302202).
Abstract: The recycling of glass bottles can reduce the consumption of resources and contributes to environmental protection. At present, the classification of recycled glass bottles is difficult due to the many differences in specifications and models. This paper proposes a classification algorithm for glass bottles that is divided into two stages: the extraction of candidate regions and classification by a classifier. In the candidate region extraction stage, to address the large time overhead caused by the use of the SIFT (scale-invariant feature transform) descriptor in SS (selective search), an improved HBSN (Haar-like based on SPP-Net) feature is proposed. An integral image is introduced to accelerate the process of forming an HBSN vector, which overcomes the problem of repeated texture feature computation in overlapping regions by SS. In the classification stage, the improved SS algorithm is used to extract target regions. The target regions are merged using a non-maximum suppression algorithm according to the classification scores of the respective regions, and the merged regions are classified using the trained classifier. Experiments demonstrate that, compared with the original SS, the improved SS algorithm increases the calculation speed by 13.8%, and its classification accuracy is 89.4%. Additionally, the classification algorithm for glass bottles has a certain resistance to noise.
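The integral image used to accelerate the Haar-like feature is a standard summed-area table: once built, any rectangular pixel sum costs four table lookups instead of a per-region loop, which is why overlapping regions no longer recompute texture sums. A minimal sketch:

```python
def integral_image(img):
    """Summed-area table: ii[r][c] holds the sum of img over all rows
    <= r and columns <= c, built in a single pass with running row sums."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for r in range(h):
        row_sum = 0
        for c in range(w):
            row_sum += img[r][c]
            ii[r][c] = row_sum + (ii[r - 1][c] if r else 0)
    return ii

def rect_sum(ii, r0, c0, r1, c1):
    """Inclusive rectangle sum [r0..r1] x [c0..c1] via four lookups."""
    total = ii[r1][c1]
    if r0:
        total -= ii[r0 - 1][c1]
    if c0:
        total -= ii[r1][c0 - 1]
    if r0 and c0:
        total += ii[r0 - 1][c0 - 1]
    return total
```

Haar-like responses are differences of such rectangle sums, so each feature evaluates in constant time regardless of region size.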
Abstract: Feature selection is an active area in data mining research and development. It draws efforts and contributions from a wide variety of communities, including statistics, machine learning, and pattern recognition. This diversity, on one hand, equips us with many methods and tools. On the other hand, the profusion of options causes confusion. This paper reviews various feature selection methods and identifies research challenges at the forefront of this exciting area.
Abstract: Multi-label feature selection (MFS) is a crucial dimensionality reduction technique aimed at identifying informative features associated with multiple labels. However, traditional centralized methods face significant challenges in privacy-sensitive and distributed settings, often neglecting label dependencies and suffering from low computational efficiency. To address these issues, we introduce a novel framework, Fed-MFSDHBCPSO: federated MFS via a dual-layer hybrid breeding cooperative particle swarm optimization algorithm with manifold and sparsity regularization (DHBCPSO-MSR). Leveraging the federated learning paradigm, Fed-MFSDHBCPSO allows clients to perform local feature selection (FS) using DHBCPSO-MSR. Locally selected feature subsets are encrypted with differential privacy (DP) and transmitted to a central server, where they are securely aggregated and refined through secure multi-party computation (SMPC) until global convergence is achieved. Within each client, DHBCPSO-MSR employs a dual-layer FS strategy. The inner layer constructs sample and label similarity graphs, generates Laplacian matrices to capture the manifold structure between samples and labels, and applies L2,1-norm regularization to sparsify the feature subset, yielding an optimized feature weight matrix. The outer layer uses a hybrid breeding cooperative particle swarm optimization algorithm to further refine the feature weight matrix and identify the optimal feature subset. The updated weight matrix is then fed back to the inner layer for further optimization. Comprehensive experiments on multiple real-world multi-label datasets demonstrate that Fed-MFSDHBCPSO consistently outperforms both centralized and federated baseline methods across several key evaluation metrics.