Multi-label feature selection (MFS) is a crucial dimensionality reduction technique aimed at identifying informative features associated with multiple labels. However, traditional centralized methods face significant challenges in privacy-sensitive and distributed settings, often neglecting label dependencies and suffering from low computational efficiency. To address these issues, we introduce a novel framework, Fed-MFSDHBCPSO: federated MFS via a dual-layer hybrid breeding cooperative particle swarm optimization algorithm with manifold and sparsity regularization (DHBCPSO-MSR). Leveraging the federated learning paradigm, Fed-MFSDHBCPSO allows each client to perform local feature selection (FS) using DHBCPSO-MSR. Locally selected feature subsets are protected with differential privacy (DP) and transmitted to a central server, where they are securely aggregated and refined through secure multi-party computation (SMPC) until global convergence is achieved. Within each client, DHBCPSO-MSR employs a dual-layer FS strategy. The inner layer constructs sample and label similarity graphs, builds the corresponding Laplacian matrices to capture the manifold structure of samples and labels, and applies L2,1-norm regularization to sparsify the feature subset, yielding an optimized feature weight matrix. The outer layer uses a hybrid breeding cooperative particle swarm optimization algorithm to further refine this weight matrix and identify the optimal feature subset; the updated matrix is then fed back to the inner layer for the next round of optimization. Comprehensive experiments on multiple real-world multi-label datasets demonstrate that Fed-MFSDHBCPSO consistently outperforms both centralized and federated baseline methods across several key evaluation metrics.
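To make the inner layer concrete, here is a minimal sketch of one plausible formulation: sample and label similarity graphs, their Laplacians as manifold regularizers, and an L2,1-norm sparsity term handled by iterative reweighting. The objective, the kNN/RBF graph construction, the trade-off weights alpha, beta, and gamma, and the Sylvester-equation solver are all assumptions reconstructed from the abstract, not the authors' exact method.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def knn_similarity(Z, k=5, sigma=1.0):
    """RBF similarity over the rows of Z, kept only for the k strongest edges per row."""
    d2 = np.square(Z[:, None, :] - Z[None, :, :]).sum(-1)
    S = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(S, 0.0)
    keep = np.argsort(S, axis=1)[:, -k:]               # indices of the k nearest neighbours
    mask = np.zeros_like(S, dtype=bool)
    np.put_along_axis(mask, keep, True, axis=1)
    return (np.where(mask, S, 0.0) + np.where(mask, S, 0.0).T) / 2.0  # symmetrise

def laplacian(S):
    return np.diag(S.sum(axis=1)) - S

def inner_layer(X, Y, alpha=1.0, beta=1.0, gamma=0.1, iters=30):
    """Hypothetical inner-layer solver for
        min_W ||XW - Y||_F^2 + alpha Tr(W^T X^T Ls X W)
              + beta Tr(W Ly W^T) + gamma ||W||_{2,1},
    with the L2,1 term handled by iterative reweighting."""
    Ls = laplacian(knn_similarity(X))                  # manifold over samples (n x n)
    Ly = laplacian(knn_similarity(Y.T))                # manifold over labels (q x q)
    d = X.shape[1]
    A = X.T @ X + alpha * X.T @ Ls @ X                 # fixed part of the normal equations
    B = X.T @ Y
    W = solve_sylvester(A + gamma * np.eye(d), beta * Ly, B)  # ridge-like warm start
    for _ in range(iters):
        row_norms = np.maximum(np.linalg.norm(W, axis=1), 1e-8)
        D = np.diag(1.0 / (2.0 * row_norms))           # reweighting of ||W||_{2,1}
        # setting the gradient to zero gives (A + gamma D) W + W (beta Ly) = B,
        # a Sylvester equation in W
        W = solve_sylvester(A + gamma * D, beta * Ly, B)
    ranking = np.argsort(-np.linalg.norm(W, axis=1))   # features ranked by row-norm importance
    return W, ranking
```

In the federated setting described above, each client would run a solver like this locally and share only DP-perturbed summaries (for example, the row-norm ranking) with the server.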
The Multilayer Perceptron (MLP) is a fundamental neural network model widely applied across domains, particularly in lightweight image classification, speech recognition, and natural language processing. Despite its widespread success, MLP training often encounters significant challenges, including susceptibility to local optima, slow convergence, and high sensitivity to the initial weight configuration. To address these issues, this paper proposes the Latin Hypercube Opposition-based Elite Variation Artificial Protozoa Optimizer (LOEV-APO), which strengthens global exploration and local exploitation simultaneously. LOEV-APO introduces a hybrid initialization strategy that combines Latin Hypercube Sampling (LHS) with Opposition-Based Learning (OBL), improving the diversity and coverage of the initial population. Moreover, an Elite Protozoa Variation Strategy (EPVS) applies differential mutation operations to elite candidates, accelerating convergence and strengthening local search around high-quality solutions. Extensive experiments on six classification tasks and four function approximation tasks, covering a wide range of problem complexities, demonstrate superior generalization performance: LOEV-APO consistently outperforms nine state-of-the-art metaheuristic algorithms and two gradient-based methods in convergence speed, solution accuracy, and robustness. These findings suggest that LOEV-APO is a promising optimization tool for MLP training and a viable alternative to traditional gradient-based methods.
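As a concrete illustration of the hybrid initialization, the sketch below draws a Latin Hypercube sample, mirrors each point with opposition-based learning, and keeps the fittest half of the pooled candidates. The function name, the elitist pooling step, and the bound handling are assumptions; only LHS and OBL themselves come from the abstract.

```python
import numpy as np
from scipy.stats import qmc

def lhs_obl_init(pop_size, dim, lb, ub, fitness, seed=None):
    """Hybrid initialisation (sketch): LHS points plus their OBL mirrors,
    pooled and truncated to the best pop_size candidates."""
    sampler = qmc.LatinHypercube(d=dim, seed=seed)
    base = lb + (ub - lb) * sampler.random(n=pop_size)  # stratified LHS points in [lb, ub]
    opposite = lb + ub - base                           # opposition-based mirror images
    pool = np.vstack([base, opposite])
    scores = np.array([fitness(x) for x in pool])       # lower is better
    return pool[np.argsort(scores)[:pop_size]]          # keep the fittest half
```

For MLP training, each candidate x would be decoded into the network's flattened weights and biases, with fitness(x) returning the training loss.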
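The Elite Protozoa Variation Strategy can likewise be sketched as differential mutation applied to the current elites with greedy replacement. The DE/rand/1 mutation form, the elite count n_elite, and the scale factor F are illustrative assumptions based on the abstract's description of differential mutation operations on elite candidates.

```python
import numpy as np

def elite_variation(pop, fitness, n_elite=5, F=0.5, rng=None):
    """EPVS-style step (sketch): mutate each elite with a DE/rand/1
    difference vector and keep the mutant only if it improves."""
    rng = np.random.default_rng(rng)
    scores = np.array([fitness(x) for x in pop])
    for i in np.argsort(scores)[:n_elite]:              # indices of the current elites
        r1, r2, r3 = rng.choice(len(pop), size=3, replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])      # differential mutation
        if fitness(mutant) < scores[i]:                 # greedy, minimising fitness
            pop[i] = mutant
    return pop
```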
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62376089, 62302153, 62302154), the Key Research and Development Program of Hubei Province, China (Grant No. 2023BEB024), the Young and Middle-Aged Scientific and Technological Innovation Team Plan in Higher Education Institutions of Hubei Province, China (Grant No. T2023007), and the National Natural Science Foundation of China (Grant No. U23A20318).