Autonomous connected vehicles (ACVs) involve advanced control strategies to effectively balance safety, efficiency, energy consumption, and passenger comfort. This research introduces a deep reinforcement learning (DRL)-based car-following (CF) framework employing the Deep Deterministic Policy Gradient (DDPG) algorithm, which integrates a multi-objective reward function that balances the four goals while maintaining safe policy learning. Utilizing real-world driving data from the highD dataset, the proposed model learns adaptive speed control policies suitable for dynamic traffic scenarios. The performance of the DRL-based model is evaluated against a traditional model predictive control-adaptive cruise control (MPC-ACC) controller. Results show that the DRL model significantly enhances safety, achieving zero collisions and a higher average time-to-collision (TTC) of 8.45 s, compared to 5.67 s for MPC and 6.12 s for human drivers. For efficiency, the model demonstrates 89.2% headway compliance and maintains speed tracking errors below 1.2 m/s in 90% of cases. In terms of energy optimization, the proposed approach reduces fuel consumption by 5.4% relative to MPC. Additionally, it enhances passenger comfort by lowering jerk values by 65%, achieving 0.12 m/s³ vs. 0.34 m/s³ for human drivers. The multi-objective reward function also ensures stable policy convergence while simultaneously balancing the four key performance metrics. Moreover, the findings underscore the potential of DRL in advancing autonomous vehicle control, offering a robust and sustainable solution for safer, more efficient, and more comfortable transportation systems.
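A minimal sketch of the kind of scalarized multi-objective reward described above, in Python; the weights, the TTC threshold, and the exact penalty shapes are illustrative assumptions, not the paper's reward function:

```python
def cf_reward(ttc, headway_err, fuel_rate, jerk,
              w=(0.4, 0.3, 0.15, 0.15), ttc_min=1.5):
    """Hypothetical scalarized reward for DRL car-following.

    All weights and thresholds are illustrative assumptions; the
    paper's exact reward shaping is not reproduced here.
    """
    # Safety: heavily penalize time-to-collision below a threshold,
    # otherwise reward larger TTC (capped).
    r_safe = -10.0 if ttc < ttc_min else min(ttc / 10.0, 1.0)
    # Efficiency: penalize deviation from the desired headway.
    r_eff = -abs(headway_err)
    # Energy: penalize instantaneous fuel consumption rate.
    r_energy = -fuel_rate
    # Comfort: penalize jerk (rate of change of acceleration).
    r_comfort = -abs(jerk)
    return (w[0] * r_safe + w[1] * r_eff
            + w[2] * r_energy + w[3] * r_comfort)
```

A safe, smooth state scores higher than an unsafe or jerky one, which is what lets a DDPG agent trade the four goals off against each other through a single scalar signal.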
Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm for efficient multi-objective task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm aims to optimize task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
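The SA exploitation step that MDMOSA pairs with DMO can be sketched as follows; the cooling constants are illustrative assumptions, and the DMO population update is omitted:

```python
import math, random

def sa_accept(cost_new, cost_old, temp, rng=random.random):
    """Simulated-annealing acceptance rule for candidate schedules."""
    if cost_new <= cost_old:          # always accept improvements
        return True
    # Accept worse schedules with Boltzmann probability to escape
    # local optima; the probability shrinks as `temp` cools.
    return rng() < math.exp((cost_old - cost_new) / temp)

def anneal(costs, t0=10.0, alpha=0.9):
    """Walk a stream of candidate makespans (e.g. produced by the
    DMO exploration phase), keeping an SA-filtered incumbent."""
    current, temp = costs[0], t0
    for c in costs[1:]:
        if sa_accept(c, current, temp):
            current = c
        temp *= alpha                  # geometric cooling schedule
    return current
```

In the hybrid, candidates would come from DMO's foraging moves rather than a fixed list; only the acceptance logic is shown here.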
Deployable Composite Thin-Walled Structures (DCTWS) are widely used in space applications due to their ability to compactly fold and self-deploy in orbit, enabled by cutouts. Cutout design is crucial for balancing structural rigidity and flexibility, ensuring material integrity during large deformations, and providing adequate load-bearing capacity and stability once deployed. Most research has focused on optimizing cutout size and shape, while topology optimization offers a broader design space. However, the anisotropic properties of woven composite laminates, complex failure criteria, and multi-performance optimization needs have limited the exploration of topology optimization in this field. This work derives the sensitivities of bending stiffness, critical buckling load, and the failure index of woven composite materials with respect to element density, and formulates both single-objective and multi-objective topology optimization models using a linear weighted aggregation approach. The developed method was integrated with the commercial finite element software ABAQUS via a Python script, allowing efficient application to cutout design in various DCTWS configurations to maximize bending stiffness and critical buckling load under material failure constraints. Optimization of a classical tubular hinge resulted in improvements of 107.7% in bending stiffness and 420.5% in critical buckling load compared to level-set topology optimization results reported in the literature, validating the effectiveness of the approach. To facilitate future research and encourage the broader adoption of topology optimization techniques in DCTWS design, the source code for this work is publicly available on GitHub: https://github.com/jinhao-ok1/Topo-for-DCTWS.git.
In a wide range of engineering applications, complex constrained multi-objective optimization problems (CMOPs) present significant challenges, as the complexity of constraints often hampers algorithmic convergence and reduces population diversity. To address these challenges, we propose a novel algorithm named Constraint Intensity-Driven Evolutionary Multitasking (CIDEMT), which employs a two-stage, tri-task framework to dynamically integrate problem structure and knowledge transfer. In the first stage, three cooperative tasks are designed to explore the Constrained Pareto Front (CPF), the Unconstrained Pareto Front (UPF), and the ε-relaxed constraint boundary, respectively. A CPF-UPF relationship classifier is employed to construct a problem-type-aware evolutionary strategy pool. At the end of the first stage, each task selects strategies from this pool based on the specific problem type, thereby guiding the subsequent evolutionary process. In the second stage, while each task continues to evolve, a τ-driven knowledge transfer mechanism is introduced to selectively incorporate effective solutions across tasks, enhancing the convergence and feasibility of the main task. Extensive experiments conducted on 32 benchmark problems from three test suites (LIRCMOP, DASCMOP, and DOC) demonstrate that CIDEMT achieves the best Inverted Generational Distance (IGD) values on 24 problems and the best Hypervolume (HV) values on 22 problems. Furthermore, CIDEMT significantly outperforms six state-of-the-art constrained multi-objective evolutionary algorithms (CMOEAs). These results confirm CIDEMT's superiority in promoting convergence, diversity, and robustness in solving complex CMOPs.
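The ε-relaxed constraint boundary explored by the third task rests on a standard overall-constraint-violation measure; a sketch, where the tolerance and relaxation values are assumed parameters rather than the paper's settings:

```python
def violation(g_values, h_values, tol=1e-4):
    """Overall constraint violation of one solution.

    g_values: inequality constraints, feasible when g_i(x) <= 0
    h_values: equality constraints,  feasible when |h_j(x)| <= tol
    """
    cv = sum(max(0.0, g) for g in g_values)
    cv += sum(max(0.0, abs(h) - tol) for h in h_values)
    return cv

def eps_feasible(g_values, h_values, eps):
    """Epsilon relaxation: treat a solution as feasible for the
    relaxed task when its total violation does not exceed eps."""
    return violation(g_values, h_values) <= eps
```

Shrinking `eps` toward zero over generations lets the relaxed task hand near-feasible solutions back to the main (CPF) task.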
Community detection is one of the most fundamental applications in understanding the structure of complicated networks. Furthermore, it is an important approach to identifying closely linked clusters of nodes that may represent underlying patterns and relationships. Network structures are highly sensitive in social networks, requiring advanced techniques to accurately identify these communities. Most conventional community detection algorithms perform inadequately on complicated networks and fail to identify clusters accurately. Since single-objective optimization cannot always generate results as accurate and comprehensive as multi-objective optimization can, we utilized two objective functions that promote strong connections within communities and weak connections between them. In this study, we adopted the intra function, which has proven effective in state-of-the-art research, and proposed a new inter function whose objective is to make the external connections between communities more distinct and sparse. Furthermore, we propose a Multi-Objective Community Strength Enhancement algorithm (MOCSE). The proposed algorithm is based on the framework of the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), integrated with a new heuristic mutation strategy, community strength enhancement (CSE). The results demonstrate that the model is effective in accurately identifying community structures while also being computationally efficient. The performance measures used to evaluate the proposed algorithm are normalized mutual information (NMI) and modularity (Q). It was tested against five state-of-the-art algorithms on social networks comprising real datasets (Zachary, Dolphin, Football, Krebs, SFI, Jazz, and Netscience), as well as twenty synthetic datasets. These results demonstrate the robustness and practical value of the proposed algorithm in multi-objective community detection.
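A simplified reading of the two objectives (tight intra-community links, sparse inter-community links) can be computed on a toy graph; the exact functional forms used in the paper differ, so this is only an illustration:

```python
def intra_inter(adj, communities):
    """Intra- and inter-community link fractions for an undirected
    graph given as an adjacency dict {node: set(neighbors)}.

    Minimizing `intra` (1 - internal-link fraction) tightens
    communities; minimizing `inter` sparsifies links between them.
    Both definitions are simplified sketches of the two objectives.
    """
    label = {v: i for i, c in enumerate(communities) for v in c}
    internal = external = 0
    for u, nbrs in adj.items():
        for v in nbrs:
            if u < v:                      # count each edge once
                if label[u] == label[v]:
                    internal += 1
                else:
                    external += 1
    m = internal + external
    return 1.0 - internal / m, external / m
```

A multi-objective EA such as MOEA/D would minimize both quantities jointly, trading internal cohesion against external sparsity across the Pareto front.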
Federated learning is a distributed framework that trains a centralised model using data from multiple clients without transferring that data to a central server. Despite rapid progress, federated learning still faces several unsolved challenges. Specifically, communication costs and system heterogeneity, such as non-identical data distributions, hinder its progress. Several approaches have recently emerged for federated learning involving heterogeneous clients with varying computational capabilities (namely, heterogeneous federated learning). However, heterogeneous federated learning faces two key challenges: optimising model size and determining client selection ratios. Moreover, efficiently aggregating local models from clients with diverse capabilities is crucial for addressing system heterogeneity and communication efficiency. This paper proposes an evolutionary multi-objective optimisation framework for heterogeneous federated learning (MOHFL) to address these issues. Our approach formulates and solves a bi-objective optimisation problem that minimises communication cost and model error rate. The decision variables comprise the model size and client selection ratio for each of the Q client clusters, yielding a total of 2×Q optimisation parameters to be tuned. We develop a partition-based strategy for MOHFL that segregates clients into clusters based on their communication and computation capabilities. Additionally, we implement an adaptive model sizing mechanism that dynamically assigns appropriate subnetwork architectures to clients based on their computational constraints. We also propose a unified aggregation framework to combine models of varying sizes from heterogeneous clients effectively. Extensive experiments on multiple datasets demonstrate the effectiveness and superiority of our proposed method compared to existing approaches.
As large-scale astronomical surveys, such as the Sloan Digital Sky Survey (SDSS) and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), generate increasingly complex datasets, clustering algorithms have become vital for identifying patterns and classifying celestial objects. This paper systematically investigates the application of five main categories of clustering techniques (partition-based, density-based, model-based, hierarchical, and "others") across a range of astronomical research over the past decade. The review focuses on six key application areas: stellar classification, galaxy structure analysis, detection of galactic and interstellar features, high-energy astrophysics, exoplanet studies, and anomaly detection. This paper provides an in-depth analysis of the performance and results of each method, considering their respective suitability for different data types. Additionally, it presents clustering algorithm selection strategies based on the characteristics of the spectroscopic data being analyzed. We highlight challenges such as handling large datasets, the need for more efficient computational tools, and the lack of labeled data. We also underscore the potential of unsupervised and semi-supervised clustering approaches to overcome these challenges, offering insight into their practical applications, performance, and results in astronomical research.
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
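The dynamic-weight scalarization idea can be illustrated with a toy rule; the paper learns the weight updates with an RBFN, so the linear update below is only a stand-in under assumed names:

```python
def scalarize(rewards, weights):
    """Weighted sum of per-objective rewards (delay, energy, load
    balance, privacy entropy in the paper; generic here)."""
    return sum(w * r for w, r in zip(weights, rewards))

def update_weights(weights, improvements, lr=0.1):
    """Toy adaptive re-weighting: shift weight toward objectives
    that improved least, then renormalize so weights sum to one.
    The paper learns these dynamics with an RBF network; this
    linear rule is only an illustrative stand-in."""
    raw = [w + lr * (max(improvements) - imp)
           for w, imp in zip(weights, improvements)]
    total = sum(raw)
    return [r / total for r in raw]
```

Each DDQN agent would feed its per-objective rewards through `scalarize`, with the weight vector drifting toward whichever objectives are lagging.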
Spaceborne antennas are essential for remote sensing, deep-space communication, and Earth observation, yet their trajectory planning is complicated by nonlinear base-manipulator coupling and antenna flexibility. To address these challenges, this paper proposes a multi-objective trajectory optimization framework. The system dynamics capture both nonlinear rigid-flexible coupling and antenna deformation through a reduced-order formulation. To enhance discretization efficiency, a predictive-terminal hp-adaptive pseudospectral method is employed, assigning collocation density based on task-phase characteristics: finer resolution is applied to dynamic segments requiring higher accuracy, especially near the terminal phase. This enables efficient transcription of the continuous-time problem into a Nonlinear Programming Problem (NLP). The resulting NLP is then solved using a multi-objective optimization strategy based on the non-dominated sorting genetic algorithm II (NSGA-II), which explores trade-offs among antenna pointing accuracy, energy consumption, and structural vibration. Numerical results demonstrate that the proposed method achieves a reduction of approximately 14.0% in control energy and 41.8% in peak actuation compared to a GPOPS-II baseline, while significantly enhancing vibration suppression. The resulting Pareto front reveals structured trade-offs and clustered solutions, offering robust and diverse options for precise, low-disturbance mission planning.
Multichannel signals are characterized by information diversity and information consistency. To better explore and utilize the affinity relationships within multichannel signals, a new graph learning technique based on low-rank tensor approximation is proposed for multichannel monitoring signal processing and utilization. First, the affinity relationships of multichannel signals are acquired from the clustering results of each channel's signal, and an affinity tensor is constructed to integrate the diverse and consistent clustering information among the channels. Second, a low-rank tensor optimization model is built, and the joint affinity matrix is optimized with the assistance of a strong-confidence affinity matrix. By solving the optimization model, the fused affinity relationship graph of the multichannel signals is obtained. Finally, the multichannel fused clustering results are acquired through the updated joint affinity relationship graph. Examples of multichannel signal utilization in health state assessment with public datasets and in microwave detection with actual echoes verify the advantages and effectiveness of the proposed method.
With the popularization of smart devices, Location-Based Services (LBS) greatly facilitate users' lives but at the same time bring the risk of location privacy leakage. Existing location privacy protection methods are deficient: they fail to reasonably allocate the privacy budget for non-outlier location points and ignore the critical location information that may be contained in outlier points, leading to decreased data availability and privacy exposure. To address these problems, this paper proposes a Mix Location Privacy Preservation Method Based on Differential Privacy with Clustering (MLDP). The method first utilizes the DBSCAN clustering algorithm to classify location points into non-outliers and outliers. For non-outliers, a scoring function is designed by combining geographic and semantic information, and the privacy budget is allocated according to the heat intensity of the hotspot area; for outliers, a scoring function is constructed to allocate the privacy budget based on their correlation with the hotspot area. By comprehensively considering the geographic information, semantic information, and hotspot-area correlation of the location points, a reasonable privacy budget is assigned to each location point, and finally noise is added through the Laplace mechanism to realize privacy protection. Experimental results on two real trajectory datasets, Geolife and T-Drive, show that MLDP significantly improves data availability while effectively protecting location privacy. Compared with the baseline methods, the maximum available data ratio of MLDP is 1. Moreover, compared with the RandomNoise method, its execution time is 0.056–0.061 s longer and its logRE is 0.12951–0.62194 lower; compared with the KemeansDP, QTK-DP, DPK-F, IDP-SC, and DPK-Means-up methods, it saves 0.114–0.296 s in execution time, and its logRE is 0.01112–0.38283 lower.
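The final noise-addition step, Laplace perturbation under a per-point budget, can be sketched as follows; the scoring-function-based budget allocation itself is not reproduced, so `epsilon` and `sensitivity` are simply assumed parameters here:

```python
import math, random

def laplace_noise(scale, rng=random.random):
    """Sample Laplace(0, scale) by inverse-CDF transform."""
    u = rng() - 0.5
    return math.copysign(-scale * math.log(1.0 - 2.0 * abs(u)), u)

def perturb_point(lat, lon, epsilon, sensitivity=1.0):
    """Add Laplace noise to one location under budget `epsilon`.

    In MLDP the per-point epsilon comes from the scoring function
    (hotspot heat for non-outliers, hotspot correlation for
    outliers); here it is a plain parameter (sketch only).
    """
    scale = sensitivity / epsilon     # smaller epsilon => more noise
    return lat + laplace_noise(scale), lon + laplace_noise(scale)
```

A larger budget (hot, non-sensitive points) yields a smaller noise scale and thus better data availability, which is the trade-off the scoring functions tune.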
With the rapid development of the aviation industry, air travel has become one of the most important modes of transportation. Improving the service quality of civil aviation airports is crucial to their competitiveness. This study develops a scientific and rational evaluation methodology and framework for assessing service quality in civil aviation airports, thereby providing a theoretical foundation and practical guidance for enhancing service standards in the aviation industry. First, the study constructs a CRITIC-bidirectional grey possibility clustering model, which uses the CRITIC method to determine indicator weights and integrates the forward and inverse grey possibility clustering models to determine possibility functions from two perspectives. Second, a service quality evaluation index system for civil airports is constructed from four dimensions, and the weights of each index within the system are calculated. Finally, the constructed model is applied to evaluate the service quality of nine domestic civil airports. Based on the clustering results, targeted countermeasures and suggestions are proposed. Empirical results demonstrate that, compared with the traditional grey possibility clustering model, the proposed model balances the objectivity of indicator weighting, the objectivity of possibility function construction, and the simplicity of the computational process, giving it significant theoretical and practical value.
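The CRITIC weighting step combines each indicator's contrast intensity (standard deviation) with its conflict with the other indicators (one minus pairwise correlation); a sketch assuming the decision matrix has already been normalized:

```python
import math

def critic_weights(matrix):
    """CRITIC objective weighting for a rows-by-indicators matrix.

    Each indicator's information content is its standard deviation
    times the summed conflict (1 - Pearson correlation) with every
    indicator; weights are these amounts normalized to sum to one.
    Assumes the matrix columns are already normalized.
    """
    n, m = len(matrix), len(matrix[0])
    cols = list(zip(*matrix))
    means = [sum(c) / n for c in cols]
    stds = [math.sqrt(sum((x - mu) ** 2 for x in c) / n)
            for c, mu in zip(cols, means)]

    def corr(j, k):
        cov = sum((cols[j][i] - means[j]) * (cols[k][i] - means[k])
                  for i in range(n)) / n
        return cov / (stds[j] * stds[k]) if stds[j] and stds[k] else 0.0

    info = [stds[j] * sum(1.0 - corr(j, k) for k in range(m))
            for j in range(m)]
    total = sum(info)
    return [c / total for c in info]
```

Highly correlated indicators share their weight, while a high-variance, uncorrelated indicator receives more, which is what makes the weighting "objective" relative to expert scoring.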
This paper proposes an equivalent modeling method for photovoltaic (PV) power stations via a particle swarm optimization (PSO) K-means clustering (KMC) algorithm with passive filter parameter clustering, addressing the complexity, simulation time cost, and convergence problems of detailed PV power station models. First, the amplitude-frequency curves of different filter parameters are analyzed. Based on the results, a grouping parameter set characterizing the external filter characteristics is established, and these parameters are defined as clustering parameters. A single PV inverter model is then established as a foundation. The proposed equivalent method combines the global search capability of PSO with the rapid convergence of KMC, effectively overcoming the tendency of KMC to become trapped in local optima. This approach enhances both clustering accuracy and numerical stability when determining equivalents for PV inverter units. Using the proposed clustering method, both a detailed PV power station model and an equivalent model are developed and compared. Simulation and hardware-in-the-loop (HIL) results based on the equivalent model verify that the equivalent method accurately represents the dynamic characteristics of PV power stations and adapts well to different operating conditions. The proposed equivalent modeling method provides an effective analysis tool for future renewable energy integration research.
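A 1-D toy version of the PSO-seeded K-means idea: global PSO search for initial centroids followed by K-means refinement. Particle counts, inertia, and acceleration coefficients are illustrative, and real inverter filter parameters are multi-dimensional:

```python
import random

def sse(points, centroids):
    """Within-cluster sum of squared errors (1-D)."""
    return sum(min((p - c) ** 2 for c in centroids) for p in points)

def pso_init(points, k, n_particles=10, iters=30, seed=0):
    """Tiny PSO searching for good initial centroids."""
    rng = random.Random(seed)
    lo, hi = min(points), max(points)
    pos = [[rng.uniform(lo, hi) for _ in range(k)]
           for _ in range(n_particles)]
    vel = [[0.0] * k for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=lambda c: sse(points, c))[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(k):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.4 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.4 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sse(points, pos[i]) < sse(points, pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=lambda c: sse(points, c))[:]
    return gbest

def kmeans_refine(points, centroids, iters=20):
    """Standard K-means refinement from the PSO-found centroids."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            j = min(range(len(centroids)),
                    key=lambda m: abs(p - centroids[m]))
            clusters[j].append(p)
        centroids = [sum(c) / len(c) if c else centroids[j]
                     for j, c in enumerate(clusters)]
    return centroids
```

The PSO stage supplies globally searched starting centroids, so the fast K-means refinement is far less likely to settle into a poor local optimum than with random initialization.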
Rapid urbanization in China has led to spatial antagonism between urban development, farmland protection, and ecological security maintenance. Multi-objective spatial collaborative optimization is a powerful method for achieving sustainable regional development, but previous studies on multi-objective spatial optimization have not applied spatial corrections to simulation results based on the natural endowment of space resources. This study proposes an Ecological Security-Food Security-Urban Sustainable Development (ES-FS-USD) spatial optimization framework. The framework combines the non-dominated sorting genetic algorithm II (NSGA-II) and the patch-generating land use simulation (PLUS) model with an ecological protection importance evaluation, a comprehensive agricultural productivity evaluation, and an urban sustainable development potential assessment, and optimizes the territorial space of the Yangtze River Delta (YRD) region in 2035. The proposed sustainable development (SD) scenario can effectively reduce the destruction of landscape patterns across land-use types while considering both ecological and economic benefits. The simulation results were further revised by evaluating the land-use suitability of the YRD region. Under the revised spatial pattern for the YRD in 2035, farmland accounts for 43.59% of the total area, 5.35% less than in 2010; forest, grassland, and water account for 40.46%, an increase of 1.42% over 2010; and construction land accounts for 14.72%, an increase of 2.77% over 2010. The ES-FS-USD spatial optimization framework ensures that spatial optimization outcomes are aligned with the natural endowments of land resources, thereby promoting the sustainable use of land resources, improving spatial management capability, and providing valuable insights for decision makers.
Satellite image segmentation plays a crucial role in remote sensing, supporting applications such as environmental monitoring, land use analysis, and disaster management. However, traditional segmentation methods often rely on large amounts of labeled data, which are costly and time-consuming to obtain, especially in large-scale or dynamic environments. To address this challenge, we propose the Semi-Supervised Multi-View Picture Fuzzy Clustering (SS-MPFC) algorithm, which improves segmentation accuracy and robustness, particularly in complex and uncertain remote sensing scenarios. SS-MPFC unifies three paradigms: semi-supervised learning, multi-view clustering, and picture fuzzy set theory. This integration allows the model to effectively utilize a small number of labeled samples, fuse complementary information from multiple data views, and handle the ambiguity and uncertainty inherent in satellite imagery. We design a novel objective function that jointly incorporates picture fuzzy membership functions across multiple views of the data and embeds pairwise semi-supervised constraints (must-link and cannot-link) directly into the clustering process to enhance segmentation accuracy. Experiments conducted on several benchmark satellite datasets demonstrate that SS-MPFC significantly outperforms existing state-of-the-art methods in segmentation accuracy, noise robustness, and semantic interpretability. On the Augsburg dataset, SS-MPFC achieves a Purity of 0.8158 and an Accuracy of 0.6860, highlighting its robustness and efficiency. These results demonstrate that SS-MPFC offers a scalable and effective solution for real-world satellite-based monitoring systems, particularly in scenarios where rapid annotation is infeasible, such as wildfire tracking, agricultural monitoring, and dynamic urban mapping.
This paper introduces a fuzzy C-means-based pooling layer for convolutional neural networks that explicitly models local uncertainty and ambiguity. Conventional pooling operations, such as max and average, apply rigid aggregation and often discard fine-grained boundary information. In contrast, our method computes soft memberships within each receptive field and aggregates cluster-wise responses through membership-weighted pooling, thereby preserving informative structure while reducing dimensionality. Being differentiable, the proposed layer operates as a drop-in replacement for standard two-dimensional pooling. We evaluate our approach across various CNN backbones and open datasets, including CIFAR-10/100, STL-10, LFW, and ImageNette, and further probe small-training-set regimes on MNIST and Fashion-MNIST. In these settings, the proposed pooling consistently improves accuracy and weighted F1 over conventional baselines, with particularly strong gains when training data are scarce. Even with less than 1% of the training set, our method maintains reliable performance, indicating improved sample efficiency and robustness to noisy or ambiguous local patterns. Overall, integrating soft memberships into the pooling operator provides a practical and generalizable inductive bias that enhances robustness and generalization in modern CNN pipelines.
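Membership-weighted pooling over one receptive field can be sketched with scalar fuzzy C-means; this is a simplified illustration of the idea (two centers, one window), not the paper's differentiable 2-D layer:

```python
def fcm_memberships(xs, centers, m=2.0):
    """Soft memberships: u_j(x) = 1 / sum_k (d_j/d_k)^(2/(m-1))."""
    out = []
    for x in xs:
        d = [abs(x - c) + 1e-12 for c in centers]
        out.append([1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0))
                              for k in range(len(centers)))
                    for j in range(len(centers))])
    return out

def fuzzy_pool(window, iters=15, m=2.0):
    """Pool one flattened receptive field with two fuzzy C-means
    centers, returning the membership-weighted response of the
    stronger (higher-valued) cluster."""
    centers = [min(window), max(window)]
    for _ in range(iters):
        u = fcm_memberships(window, centers, m)
        for j in range(2):
            # Memberships are strictly positive, so den > 0.
            den = sum(ui[j] ** m for ui in u)
            centers[j] = sum((ui[j] ** m) * x
                             for ui, x in zip(u, window)) / den
    u = fcm_memberships(window, centers, m)
    j_hi = max(range(2), key=lambda j: centers[j])
    w = [ui[j_hi] for ui in u]
    return sum(wi * x for wi, x in zip(w, window)) / sum(w)
```

Unlike hard max pooling, every activation contributes in proportion to its soft membership, which is what preserves ambiguous boundary information while still emphasizing the dominant response.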
The Intrusion Detection System (IDS) is a security mechanism developed to observe network traffic and recognize suspicious or malicious activities. Clustering algorithms are often incorporated into IDSs; however, conventional clustering-based methods face notable drawbacks, including poor scalability on high-dimensional datasets and a strong dependence of outcomes on initial conditions. To overcome the performance limitations of existing methods, this study proposes a novel quantum-inspired clustering algorithm that combines a similarity coefficient-based quantum genetic algorithm (SC-QGA) with an improved quantum artificial bee colony hybrid K-means algorithm (IQABC-K). First, SC-QGA is constructed on the basis of quantum computing and integrates similarity coefficient theory to strengthen genetic diversity and feature extraction capabilities. For the subsequent clustering phase, the IQABC-K process is enhanced with adaptive rotation gate and movement exploitation strategies to balance global-search exploration and local-search exploitation. Simultaneously, convergence toward the global optimum is accelerated and computational complexity reduced by means of a global-optimum bootstrap strategy and a linear population reduction strategy. Through experimental evaluation against multiple algorithms and across diverse performance metrics, the proposed algorithm achieves reliable accuracy on three datasets, KDD CUP99, NSL-KDD, and UNSW-NB15, reaching 98.57%, 98.81%, and 98.32%, respectively. These results affirm its potential as an effective solution for practical clustering applications.
Deformation prediction for extra-high arch dams is highly important for ensuring their safe operation. To address the challenges of complex monitoring data, the uneven spatial distribution of deformation, and the construction and optimization of a prediction model, a multipoint extra-high arch dam deformation prediction model based on clustering partition, CEEMDAN-KPCA-GSWOA-KELM, is proposed. First, the monitoring data are preprocessed via variational mode decomposition (VMD) and wavelet denoising (WT), which effectively filters out noise and improves the signal-to-noise ratio, providing high-quality input for the subsequent prediction models. Second, scientific cluster partitioning is performed via the K-means++ algorithm to precisely capture the spatial distribution characteristics of extra-high arch dams and ensure the consistency of deformation trends at the measurement points within each partition. Finally, CEEMDAN is used to decompose the monitoring data; each component is predicted and analyzed by combining Kernel Principal Component Analysis (KPCA) with a Kernel Extreme Learning Machine (KELM) optimized by the Global Search Whale Optimization Algorithm (GSWOA); and the component predictions are integrated via reconstruction to precisely predict the overall deformation trend. An extra-high arch dam project is taken as an example and validated via a comparative analysis of multiple models. The results show that the proposed multipoint deformation prediction model can combine data from different measurement points, achieve comprehensive and precise prediction of the deformation of extra-high arch dams, and provide strong technical support for safe operation.
The rapid growth of mobile and Internet of Things (IoT) applications in dense urban environments places stringent demands on future Beyond 5G (B5G) or Beyond 6G (B6G) networks, which must ensure high Quality of Service (QoS) while maintaining cost-efficiency and sustainable deployment. Traditional strategies struggle with complex 3D propagation, building penetration loss, and the balance between coverage and infrastructure cost. To address this challenge, this study presents the first application of a Global-best Guided Quantum-inspired Tabu Search with Quantum-Not Gate (GQTS-QNG) framework for 3D base-station deployment optimization. The problem is formulated as a multi-objective model that simultaneously maximizes coverage and minimizes deployment cost. A binary-to-decimal encoding mechanism is designed to represent discrete placement coordinates and base-station types, leveraging a quantum-inspired method to efficiently search and refine solutions within challenging combinatorial environments. Global-best guidance and tabu memory are integrated to strengthen convergence stability and avoid revisiting previously explored solutions. Simulation results across user densities ranging from 1000 to 10,000 show that GQTS-QNG consistently finds deployment configurations achieving full coverage while reducing deployment cost compared with state-of-the-art algorithms under equal iteration budgets. Additionally, our method generates well-distributed and structured Pareto fronts, offering diverse planning options that allow operators to flexibly balance cost and performance requirements. These findings demonstrate that GQTS-QNG is a scalable and efficient algorithm for sustainable 3D cellular network deployment in B5G/6G urban scenarios.
AIM: To evaluate long-term visual field (VF) prediction using K-means clustering in patients with primary open-angle glaucoma (POAG). METHODS: Patients who underwent at least ten 24-2 VF tests were included in this study. Using the 52 total deviation values (TDVs) from the first 10 VF tests of the training dataset, the VF points were clustered into several regions using the hierarchical ordered partitioning and collapsing hybrid (HOPACH) and K-means clustering. Based on the clustering results, a linear regression analysis was applied to each clustered region of the testing dataset to predict the TDVs of the 10th VF test. Three to nine VF tests were used to predict the 10th VF test, and the prediction errors (root mean square error, RMSE) of each clustering method and of pointwise linear regression (PLR) were compared. RESULTS: The training group consisted of 228 patients (mean age, 54.20±14.38 years; 123 males and 105 females), and the testing group included 81 patients (mean age, 54.88±15.22 years; 43 males and 38 females). All subjects were diagnosed with POAG. The 52 VF points were clustered into 11 and nine regions using HOPACH and K-means clustering, respectively. K-means clustering had a lower prediction error than PLR when n=1:3 and 1:4 (both P≤0.003). The prediction errors of K-means clustering were lower than those of HOPACH in all sections (n=1:4 to 1:9; all P≤0.011), except for n=1:3 (P=0.680). PLR outperformed K-means clustering only when n=1:8 and 1:9 (both P≤0.020). CONCLUSION: K-means clustering can predict long-term VF test results more accurately in patients with POAG when VF data are limited.
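The cluster-then-regress idea above (group VF points into regions, fit one linear trend per region, extrapolate to the next test) can be sketched with a toy example. The cluster count, the clustering feature (mean deviation), and the data below are illustrative assumptions, not the study's implementation:

```python
def kmeans(points, k, iters=50):
    """Plain 1-D k-means; returns a cluster label for each point."""
    centers = [points[i * len(points) // k] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: (p - centers[c]) ** 2) for p in points]
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = sum(members) / len(members)
    return labels

def linear_fit(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Toy data: six VF points, each with mean deviation over 9 visits.
histories = [
    [-1.0, -1.2, -1.1, -1.4, -1.3, -1.5, -1.6, -1.5, -1.7],  # mild worsening
    [-1.1, -1.0, -1.3, -1.2, -1.4, -1.4, -1.6, -1.6, -1.8],
    [-5.0, -5.6, -6.1, -6.8, -7.2, -7.9, -8.4, -9.0, -9.6],  # fast progression
    [-5.2, -5.7, -6.3, -6.9, -7.5, -8.0, -8.6, -9.1, -9.8],
    [-0.2, -0.1, -0.3, -0.2, -0.2, -0.3, -0.1, -0.2, -0.3],  # stable
    [-0.1, -0.2, -0.1, -0.3, -0.2, -0.1, -0.2, -0.3, -0.2],
]
# Cluster points by their mean deviation, then fit one trend per cluster
# and extrapolate each cluster's trend to the 10th visit.
means = [sum(h) / len(h) for h in histories]
labels = kmeans(means, k=3)
predictions = {}
for c in set(labels):
    xs, ys = [], []
    for h, l in zip(histories, labels):
        if l == c:
            xs += list(range(1, 10))
            ys += h
    a, b = linear_fit(xs, ys)
    predictions[c] = a * 10 + b
```

Pooling all points of a region into one regression is what gives the clustered approach its stability over pointwise linear regression when only a few tests are available.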
Funding: the Hebei Province Science and Technology Plan Project (19221909D) and the Princess Nourah bint Abdulrahman University Researchers Supporting Project (PNURSP2025R308), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia.
Abstract: Autonomous connected vehicles (ACVs) involve advanced control strategies to effectively balance safety, efficiency, energy consumption, and passenger comfort. This research introduces a deep reinforcement learning (DRL)-based car-following (CF) framework employing the Deep Deterministic Policy Gradient (DDPG) algorithm, which integrates a multi-objective reward function that balances the four goals while maintaining safe policy learning. Utilizing real-world driving data from the highD dataset, the proposed model learns adaptive speed control policies suitable for dynamic traffic scenarios. The performance of the DRL-based model is evaluated against a traditional model predictive control-adaptive cruise control (MPC-ACC) controller. Results show that the DRL model significantly enhances safety, achieving zero collisions and a higher average time-to-collision (TTC) of 8.45 s, compared to 5.67 s for MPC and 6.12 s for human drivers. For efficiency, the model demonstrates 89.2% headway compliance and maintains speed tracking errors below 1.2 m/s in 90% of cases. In terms of energy optimization, the proposed approach reduces fuel consumption by 5.4% relative to MPC. Additionally, it enhances passenger comfort by lowering jerk values by 65%, achieving 0.12 m/s³ vs. 0.34 m/s³ for human drivers. The multi-objective reward function ensures stable policy convergence while simultaneously balancing the four key performance metrics. These findings underscore the potential of DRL in advancing autonomous vehicle control, offering a robust and sustainable solution for safer, more efficient, and more comfortable transportation systems.
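A multi-objective reward of the kind described above might be sketched as a weighted sum of normalized terms for safety (TTC), efficiency (headway and speed tracking), energy, and comfort (jerk). The weights, thresholds, and normalizations below are illustrative assumptions, not the paper's exact design:

```python
def cf_reward(ttc, headway, speed_err, fuel_rate, jerk,
              w=(0.4, 0.2, 0.2, 0.1, 0.1)):
    """Hypothetical multi-objective car-following reward: each term is
    normalized to [0, 1] (1 = best) and combined with fixed weights."""
    safety = min(ttc, 10.0) / 10.0                       # longer TTC is safer
    efficiency = 1.0 if 1.0 <= headway <= 2.5 else 0.0   # desired headway band (s)
    tracking = max(0.0, 1.0 - abs(speed_err) / 5.0)      # speed error (m/s)
    energy = max(0.0, 1.0 - fuel_rate / 3.0)             # fuel rate (mL/s)
    comfort = max(0.0, 1.0 - abs(jerk) / 2.0)            # jerk (m/s^3)
    terms = (safety, efficiency, tracking, energy, comfort)
    return sum(wi * ti for wi, ti in zip(w, terms))

# Example: a smooth, safe state should score higher than a harsh, risky one.
good = cf_reward(ttc=8.45, headway=1.8, speed_err=0.5, fuel_rate=1.0, jerk=0.12)
bad = cf_reward(ttc=2.0, headway=0.5, speed_err=4.0, fuel_rate=2.5, jerk=1.5)
```

Because every term is bounded, the total reward stays in [0, 1], which helps keep DDPG's critic targets well scaled during training.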
Abstract: Task scheduling in cloud computing is a multi-objective optimization problem, often involving conflicting objectives such as minimizing execution time, reducing operational cost, and maximizing resource utilization. However, traditional approaches frequently rely on single-objective optimization methods, which are insufficient for capturing the complexity of such problems. To address this limitation, we introduce MDMOSA (Multi-objective Dwarf Mongoose Optimization with Simulated Annealing), a hybrid algorithm that integrates multi-objective optimization for efficient task scheduling in Infrastructure-as-a-Service (IaaS) cloud environments. MDMOSA harmonizes the exploration capabilities of the biologically inspired Dwarf Mongoose Optimization (DMO) with the exploitation strengths of Simulated Annealing (SA), achieving a balanced search process. The algorithm optimizes task allocation by reducing makespan and financial cost while improving system resource utilization. We evaluate MDMOSA through extensive simulations using the real-world Google Cloud Jobs (GoCJ) dataset within the CloudSim environment. Comparative analysis against benchmark algorithms such as SMOACO, MOTSGWO, and MFPAGWO reveals that MDMOSA consistently achieves superior performance in terms of scheduling efficiency, cost-effectiveness, and scalability. These results confirm the potential of MDMOSA as a robust and adaptable solution for resource scheduling in dynamic and heterogeneous cloud computing infrastructures.
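The SA exploitation phase of a hybrid like MDMOSA can be sketched as a local refinement of a task-to-VM assignment: perturb one assignment, keep improvements, and occasionally accept worse makespans with probability exp(-Δ/T). This is a minimal single-objective sketch (makespan only); the cooling schedule, move operator, and data are illustrative assumptions:

```python
import math
import random

def makespan(assign, task_len, vm_mips):
    """Completion time of the busiest VM for a task->VM assignment."""
    load = [0.0] * len(vm_mips)
    for task, vm in enumerate(assign):
        load[vm] += task_len[task] / vm_mips[vm]
    return max(load)

def sa_refine(assign, task_len, vm_mips, t0=10.0, cooling=0.95, steps=500, seed=1):
    """Simulated-annealing refinement: move one task to another VM per step;
    accept worse moves with probability exp(-delta / T) while T cools."""
    rng = random.Random(seed)
    cur, cur_cost = list(assign), makespan(assign, task_len, vm_mips)
    best, best_cost = list(cur), cur_cost
    t = t0
    for _ in range(steps):
        cand = list(cur)
        cand[rng.randrange(len(cand))] = rng.randrange(len(vm_mips))
        cost = makespan(cand, task_len, vm_mips)
        if cost < cur_cost or rng.random() < math.exp(-(cost - cur_cost) / t):
            cur, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        t *= cooling
    return best, best_cost

# Toy instance: task lengths (MI) and VM speeds (MIPS); start with all tasks on VM 0.
tasks = [400, 300, 300, 200, 200, 100]
vms = [100.0, 50.0]
start = [0] * len(tasks)
best, best_cost = sa_refine(start, tasks, vms)
```

In the full hybrid, a step like this would refine candidates produced by DMO's population-based exploration rather than start from a fixed assignment.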
Funding: supported by the National Natural Science Foundation of China (No. 12202295), the International (Regional) Cooperation and Exchange Projects of the National Natural Science Foundation of China (No. W2421002), the Sichuan Science and Technology Program (No. 2025ZNSFSC0845), the Zhejiang Provincial Natural Science Foundation of China (No. ZCLZ24A0201), and the Fundamental Research Funds for the Provincial Universities of Zhejiang (No. GK249909299001-004).
Abstract: Deployable Composite Thin-Walled Structures (DCTWS) are widely used in space applications due to their ability to compactly fold and self-deploy in orbit, enabled by cutouts. Cutout design is crucial for balancing structural rigidity and flexibility, ensuring material integrity during large deformations, and providing adequate load-bearing capacity and stability once deployed. Most research has focused on optimizing cutout size and shape, while topology optimization offers a broader design space. However, the anisotropic properties of woven composite laminates, complex failure criteria, and multi-performance optimization needs have limited the exploration of topology optimization in this field. This work derives the sensitivities of bending stiffness, critical buckling load, and the failure index of woven composite materials with respect to element density, and formulates both single-objective and multi-objective topology optimization models using a linear weighted aggregation approach. The developed method was integrated with the commercial finite element software ABAQUS via a Python script, allowing efficient application to cutout design in various DCTWS configurations to maximize bending stiffness and critical buckling load under material failure constraints. Optimization of a classical tubular hinge resulted in improvements of 107.7% in bending stiffness and 420.5% in critical buckling load compared to level-set topology optimization results reported in the literature, validating the effectiveness of the approach. To facilitate future research and encourage the broader adoption of topology optimization techniques in DCTWS design, the source code for this work is made publicly available on GitHub: https://github.com/jinhao-ok1/Topo-for-DCTWS.git.
Funding: supported by the National Natural Science Foundation of China under Grant No. 61972040 and the Science and Technology Research and Development Project funded by China Railway Material Trade Group Luban Company.
Abstract: In a wide range of engineering applications, complex constrained multi-objective optimization problems (CMOPs) present significant challenges, as the complexity of constraints often hampers algorithmic convergence and reduces population diversity. To address these challenges, we propose a novel algorithm named Constraint Intensity-Driven Evolutionary Multitasking (CIDEMT), which employs a two-stage, tri-task framework to dynamically integrate problem structure and knowledge transfer. In the first stage, three cooperative tasks are designed to explore the Constrained Pareto Front (CPF), the Unconstrained Pareto Front (UPF), and the ε-relaxed constraint boundary, respectively. A CPF-UPF relationship classifier is employed to construct a problem-type-aware evolutionary strategy pool. At the end of the first stage, each task selects strategies from this pool based on the specific problem type, thereby guiding the subsequent evolutionary process. In the second stage, while each task continues to evolve, a τ-driven knowledge transfer mechanism is introduced to selectively incorporate effective solutions across tasks, enhancing the convergence and feasibility of the main task. Extensive experiments conducted on 32 benchmark problems from three test suites (LIRCMOP, DASCMOP, and DOC) demonstrate that CIDEMT achieves the best Inverted Generational Distance (IGD) values on 24 problems and the best Hypervolume (HV) values on 22 problems. Furthermore, CIDEMT significantly outperforms six state-of-the-art constrained multi-objective evolutionary algorithms (CMOEAs). These results confirm CIDEMT's superiority in promoting convergence, diversity, and robustness when solving complex CMOPs.
Abstract: Community detection is one of the most fundamental applications in understanding the structure of complicated networks. Furthermore, it is an important approach to identifying closely linked clusters of nodes that may represent underlying patterns and relationships. Networking structures are highly sensitive in social networks, requiring advanced techniques to accurately identify the structure of these communities. Most conventional algorithms for detecting communities perform inadequately on complicated networks and fail to accurately identify clusters. Since single-objective optimization cannot always generate results as accurate and comprehensive as multi-objective optimization, we utilized two objective functions that encourage strong connections within communities and weak connections between them. In this study, we utilized the intra function, which has proven effective in state-of-the-art research, and proposed a new inter function whose objective is to make the external connections between communities more distinct and sparse. Furthermore, we proposed a Multi-Objective Community Strength Enhancement algorithm (MOCSE). The proposed algorithm is based on the framework of the Multi-Objective Evolutionary Algorithm with Decomposition (MOEA/D), integrated with a new heuristic mutation strategy, community strength enhancement (CSE). The results demonstrate that the model is effective in accurately identifying community structures while also being computationally efficient. The performance measures used to evaluate the algorithm in our work are normalized mutual information (NMI) and modularity (Q). It was tested against five state-of-the-art algorithms on social networks comprising real datasets (Zachary, Dolphin, Football, Krebs, SFI, Jazz, and Netscience) as well as twenty synthetic datasets. These results demonstrate the robustness and practical value of the proposed algorithm in multi-objective community identification.
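Modularity Q, one of the two metrics used above, is defined for an undirected graph with m edges as Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m) δ(cᵢ, cⱼ). A minimal sketch on a toy graph of two triangles joined by a bridge edge (the graph and partitions are illustrative, not from the paper):

```python
def modularity(adj, communities):
    """Newman modularity Q for an undirected graph given as an adjacency
    dict {node: set(neighbors)} and a {node: community} mapping."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # number of edges
    q = 0.0
    for i in adj:
        for j in adj:
            if communities[i] == communities[j]:
                a_ij = 1.0 if j in adj[i] else 0.0
                q += a_ij - len(adj[i]) * len(adj[j]) / (2 * m)
    return q / (2 * m)

# Two triangles {0,1,2} and {3,4,5} bridged by the edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5), (4, 5)]
adj = {v: set() for v in range(6)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

good = modularity(adj, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1})  # natural split
bad = modularity(adj, {v: 0 for v in adj})                     # one big community
```

The natural two-triangle split scores Q = 5/14 ≈ 0.357, while lumping every node into one community scores exactly 0, which is why modularity rewards partitions with dense intra-community and sparse inter-community links.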
Funding: supported by the National Research Foundation of Korea grant funded by the Korea government (RS-2023-00217116).
Abstract: Federated learning is a distributed framework that trains a centralised model using data from multiple clients without transferring that data to a central server. Despite rapid progress, federated learning still faces several unsolved challenges. Specifically, communication costs and system heterogeneity, such as non-identical data distributions, hinder federated learning's progress. Several approaches have recently emerged for federated learning involving heterogeneous clients with varying computational capabilities (namely, heterogeneous federated learning). However, heterogeneous federated learning faces two key challenges: optimising model size and determining client selection ratios. Moreover, efficiently aggregating local models from clients with diverse capabilities is crucial for addressing system heterogeneity and communication efficiency. This paper proposes an evolutionary multi-objective optimisation framework for heterogeneous federated learning (MOHFL) to address these issues. Our approach formulates and solves a bi-objective optimisation problem that minimises communication cost and model error rate. The decision variables in this framework comprise the model sizes and client selection ratios for each of the Q client clusters, yielding a total of 2×Q optimisation parameters to be tuned. We develop a partition-based strategy for MOHFL that segregates clients into clusters based on their communication and computation capabilities. Additionally, we implement an adaptive model sizing mechanism that dynamically assigns appropriate subnetwork architectures to clients based on their computational constraints. We also propose a unified aggregation framework to effectively combine models of varying sizes from heterogeneous clients. Extensive experiments on multiple datasets demonstrate the effectiveness and superiority of our proposed method compared to existing approaches.
Funding: supported by the National Natural Science Foundation of China (12473105 and 12473106), central government funds guiding local science and technology development (YDZJSX2024D049), and the Graduate Student Practice and Innovation Program of Shanxi Province (2024SJ313).
Abstract: As large-scale astronomical surveys, such as the Sloan Digital Sky Survey (SDSS) and the Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST), generate increasingly complex datasets, clustering algorithms have become vital for identifying patterns and classifying celestial objects. This paper systematically investigates the application of five main categories of clustering techniques (partition-based, density-based, model-based, hierarchical, and others) across a range of astronomical research over the past decade. This review focuses on six key application areas: stellar classification, galaxy structure analysis, detection of galactic and interstellar features, high-energy astrophysics, exoplanet studies, and anomaly detection. This paper provides an in-depth analysis of the performance and results of each method, considering their respective suitability for different data types. Additionally, it presents clustering algorithm selection strategies based on the characteristics of the spectroscopic data being analyzed. We highlight challenges such as handling large datasets, the need for more efficient computational tools, and the lack of labeled data. We also underscore the potential of unsupervised and semi-supervised clustering approaches to overcome these challenges, offering insight into their practical applications, performance, and results in astronomical research.
Funding: supported by the Key Science and Technology Program of Henan Province, China (Grant Nos. 242102210147, 242102210027) and the Fujian Province Young and Middle-aged Teacher Education Research Project (Science and Technology Category) (No. JZ240101). (Corresponding author: Dong Yuan.)
Abstract: Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, due to the limited storage capacity and energy budget of RSUs, it is challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment. Therefore, determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed. Each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives using Radial Basis Function Networks (RBFNs), thereby efficiently approximating Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm can better coordinate the three-tier computing resources of cloud, edge, and vehicles. Compared to existing algorithms, the proposed method reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
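The DDQN update at the heart of each agent decouples action selection (online network) from action evaluation (target network), which reduces Q-value overestimation. Below is a minimal sketch with Q-values as plain lists; the scalarization helper stands in for the paper's RBFN-driven weight update, using fixed illustrative weights:

```python
def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double-DQN target: the online network picks the argmax action,
    the target network evaluates that action."""
    if done:
        return reward
    a_star = max(range(len(next_q_online)), key=lambda a: next_q_online[a])
    return reward + gamma * next_q_target[a_star]

def scalarized_reward(rewards, weights):
    """Normalized weighted sum of per-objective rewards (e.g. delay, energy,
    load balance). In the paper the weights are adapted by an RBFN; here
    they are fixed for illustration."""
    s = sum(weights)
    return sum(r * w for r, w in zip(rewards, weights)) / s

# Example: the online net prefers action 1, so the target net's value for
# action 1 (0.4) is used, not its own maximum (0.5).
target = ddqn_target(1.0, next_q_online=[0.2, 0.9], next_q_target=[0.5, 0.4])
```

A vanilla DQN target would have used max(next_q_target) = 0.5 here; the double estimator deliberately yields the smaller, less biased value.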
Funding: supported by the National Natural Science Foundation of China (No. 62173107).
Abstract: Spaceborne antennas are essential for remote sensing, deep-space communication, and Earth observation, yet their trajectory planning is complicated by nonlinear base-manipulator coupling and antenna flexibility. To address these challenges, this paper proposes a multi-objective trajectory optimization framework. The system dynamics capture both nonlinear rigid-flexible coupling and antenna deformation through a reduced-order formulation. To enhance discretization efficiency, a predictive-terminal hp-adaptive pseudospectral method is employed, assigning collocation density based on task-phase characteristics: finer resolution is applied to dynamic segments requiring higher accuracy, especially near the terminal phase. This enables efficient transcription of the continuous-time problem into a Nonlinear Programming Problem (NLP). The resulting NLP is then solved using a multi-objective optimization strategy based on the non-dominated sorting genetic algorithm II (NSGA-II), which explores trade-offs among antenna pointing accuracy, energy consumption, and structural vibration. Numerical results demonstrate that the proposed method achieves a reduction of approximately 14.0% in control energy and 41.8% in peak actuation compared to a GPOPS-II baseline, while significantly enhancing vibration suppression. The resulting Pareto front reveals structured trade-offs and clustered solutions, offering robust and diverse options for precision, low-disturbance mission planning.
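The core of NSGA-II is fast non-dominated sorting, which partitions candidate solutions into successive Pareto fronts. A minimal sketch for minimization objectives, with hypothetical (pointing error, energy) pairs standing in for candidate trajectories:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Fast non-dominated sorting (NSGA-II): returns a list of fronts,
    each a list of indices into `points`, best front first."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]  # solutions that i dominates
    count = [0] * n                        # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                count[i] += 1
        if count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                count[j] -= 1
                if count[j] == 0:
                    nxt.append(j)
        k += 1
        fronts.append(nxt)
    return fronts[:-1]

# Hypothetical candidates: (pointing error, control energy), both minimized.
pts = [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0), (3.0, 4.0), (5.0, 5.0)]
fronts = non_dominated_sort(pts)
```

The first three points trade error against energy and are mutually non-dominated (front 0); point 3 is dominated by point 1, and point 4 by everything else, so they fall into later fronts.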
Funding: supported by the Shanghai Aerospace Science and Technology Innovation Foundation (SAST2023-075).
Abstract: Multichannel signals have the characteristics of information diversity and information consistency. To better explore and utilize the affinity relationships within multichannel signals, a new graph learning technique based on low-rank tensor approximation is proposed for multichannel monitoring signal processing and utilization. First, the affinity relationships of the multichannel signals are acquired from the clustering results of each channel's signal, and an affinity tensor is constructed to integrate the diverse and consistent clustering information among the channels. Second, a low-rank tensor optimization model is built, and the joint affinity matrix is optimized with the assistance of a strong-confidence affinity matrix. By solving the optimization model, the fused affinity relationship graph of the multichannel signals is obtained. Finally, the multichannel fused clustering results are acquired through the updated joint affinity relationship graph. Application examples in health state assessment with public datasets and microwave detection with actual echoes verify the advantages and effectiveness of the proposed method.
Funding: supported in part by the National Natural Science Foundation of China (Grant No. 61971291), the Basic Scientific Research Project of the Liaoning Provincial Department of Education (LJ212410144013), the Leading Talent program of the 'Xing Liao Ying Cai Plan' (XLYC2202013), the Shenyang Natural Science Foundation (22-315-6-10), and the Guangxuan Scholar program of Shenyang Ligong University (SYLUGXXZ202205).
Abstract: With the popularization of smart devices, Location-Based Services (LBS) greatly facilitate users' lives but at the same time bring the risk of location privacy leakage. Existing location privacy protection methods are deficient: they fail to reasonably allocate the privacy budget for non-outlier location points and ignore the critical location information that outlier points may contain, leading to decreased data availability and privacy exposure. To address these problems, this paper proposes a Mix Location Privacy Preservation Method Based on Differential Privacy with Clustering (MLDP). The method first utilizes the DBSCAN clustering algorithm to classify location points into non-outliers and outliers. For non-outliers, a scoring function is designed by combining geographic and semantic information, and the privacy budget is allocated according to the heat intensity of the hotspot area; for outliers, a scoring function is constructed to allocate the privacy budget based on their correlation with the hotspot area. By comprehensively considering the geographic information, semantic information, and hotspot-area correlation of the location points, a reasonable privacy budget is assigned to each location point, and finally noise is added through the Laplace mechanism to realize privacy protection. Experimental results on two real trajectory datasets, Geolife and T-Drive, show that the MLDP approach significantly improves data availability while effectively protecting location privacy. Compared with the baseline methods, the maximum available data ratio of MLDP is 1. Moreover, compared with the RandomNoise method, its execution time is 0.056-0.061 s longer and its logRE is 0.12951-0.62194 lower; compared with the KmeansDP, QTK-DP, DPK-F, IDP-SC, and DPK-Means-up methods, it saves 0.114-0.296 s in execution time, and its logRE is 0.01112-0.38283 lower.
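The final step, per-point Laplace noise calibrated to an allocated share of the privacy budget, can be sketched as follows. The proportional score-based allocation below is a simplified stand-in for MLDP's heat- and correlation-based scoring, which is more elaborate:

```python
import math
import random

def allocate_budget(scores, total_eps):
    """Split a total privacy budget across points in proportion to their
    scores (a stand-in for MLDP's heat/correlation-based scoring)."""
    s = sum(scores)
    return [total_eps * sc / s for sc in scores]

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse CDF."""
    u = rng.random() - 0.5
    sign = 1.0 if u >= 0 else -1.0
    return -scale * sign * math.log(1.0 - 2.0 * abs(u))

def perturb(points, scores, total_eps, sensitivity=1.0, seed=0):
    """Add Laplace noise to each (x, y) point with scale sensitivity/eps_i,
    where eps_i is that point's allocated share of the budget."""
    rng = random.Random(seed)
    eps = allocate_budget(scores, total_eps)
    return [(x + laplace_noise(sensitivity / e, rng),
             y + laplace_noise(sensitivity / e, rng))
            for (x, y), e in zip(points, eps)]

# Hypothetical example: a hotspot point (higher score) gets a larger epsilon
# share, hence a smaller noise scale and better utility.
noisy = perturb([(0.0, 0.0), (10.0, 10.0)], scores=[3.0, 1.0], total_eps=1.0)
```

Because the per-point epsilons sum to the total budget, sequential composition keeps the overall release within the intended privacy guarantee.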
Funding: supported by the National Natural Science Foundation of China (Nos. 72571136, 72271120) and the Humanities and Social Science Project of the Ministry of Education of the People's Republic of China (No. 24YJA630087).
Abstract: With the rapid development of the aviation industry, air travel has become one of the most important modes of transportation, and improving the service quality of civil aviation airports is crucial to their competitiveness. This study develops a scientific and rational evaluation methodology and framework for assessing service quality in civil aviation airports, thereby providing a theoretical foundation and practical guidance for enhancing service standards in the aviation industry. First, the study constructs a CRITIC-bidirectional grey possibility clustering model, which uses the CRITIC method to determine the weights of indicators and integrates the forward and inverse grey possibility clustering models to determine possibility functions from two perspectives. Second, a service quality evaluation index system for civil airports is constructed from four dimensions, and the weights of each index within the system are calculated. Finally, the constructed model is applied to evaluate the service quality of nine domestic civil airports, and targeted countermeasures and suggestions are proposed based on the clustering results. Empirical results demonstrate that, compared to the traditional grey possibility clustering model, the proposed model balances the objectivity of indicator weighting, the objectivity of possibility function construction, and the simplicity of the computational process, giving it significant theoretical and practical value.
Funding: supported by the Research Project of China Southern Power Grid (No. 056200KK52222031).
Abstract: This paper proposes an equivalent modeling method for photovoltaic (PV) power stations via a particle swarm optimization (PSO) K-means clustering (KMC) algorithm with passive filter parameter clustering, to address the complexity, simulation time cost, and convergence problems of detailed PV power station models. First, the amplitude-frequency curves of different filter parameters are analyzed. Based on the results, a grouping parameter set characterizing the external filter characteristics is established, and these parameters are defined as the clustering parameters. A single PV inverter model is then established as a prerequisite foundation. The proposed equivalent method combines the global search capability of PSO with the rapid convergence of KMC, effectively overcoming the tendency of KMC to become trapped in local optima. This approach enhances both clustering accuracy and numerical stability when determining equivalence for PV inverter units. Using the proposed clustering method, both a detailed PV power station model and an equivalent model are developed and compared. Simulation and hardware-in-the-loop (HIL) results based on the equivalent model verify that the equivalent method accurately represents the dynamic characteristics of PV power stations and adapts well to different operating conditions. The proposed equivalent modeling method provides an effective analysis tool for future renewable energy integration research.
Funding: National Natural Science Foundation of China (Nos. 42301470, 52270185, 42171389) and the Capacity Building Program of Local Colleges and Universities in Shanghai (No. 21010503300).
Abstract: Rapid urbanization in China has led to spatial antagonism between urban development on the one hand and farmland protection and ecological security maintenance on the other. Multi-objective spatial collaborative optimization is a powerful method for achieving sustainable regional development, yet previous studies on multi-objective spatial optimization do not apply spatial corrections to simulation results based on the natural endowment of space resources. This study proposes an Ecological Security-Food Security-Urban Sustainable Development (ES-FS-USD) spatial optimization framework. This framework combines the non-dominated sorting genetic algorithm II (NSGA-II) and the patch-generating land use simulation (PLUS) model with an ecological protection importance evaluation, a comprehensive agricultural productivity evaluation, and an urban sustainable development potential assessment, and optimizes the territorial space of the Yangtze River Delta (YRD) region in 2035. The proposed sustainable development (SD) scenario can effectively reduce the destruction of the landscape patterns of various land-use types while considering both ecological and economic benefits. The simulation results were further revised by evaluating the land-use suitability of the YRD region. According to the revised spatial pattern for the YRD in 2035, farmland accounts for 43.59% of the total YRD, 5.35% less than in 2010. Forest, grassland, and water areas account for 40.46% of the total YRD, an increase of 1.42% compared with 2010. Construction land accounts for 14.72% of the total YRD, an increase of 2.77% compared with 2010. The ES-FS-USD spatial optimization framework ensures that spatial optimization outcomes are aligned with the natural endowments of land resources, thereby promoting the sustainable use of land resources, improving spatial management capability, and providing valuable insights for decision makers.
Funding: funded by Research Project THTETN.05/24-25, Vietnam Academy of Science and Technology.
Abstract: Satellite image segmentation plays a crucial role in remote sensing, supporting applications such as environmental monitoring, land use analysis, and disaster management. However, traditional segmentation methods often rely on large amounts of labeled data, which are costly and time-consuming to obtain, especially in large-scale or dynamic environments. To address this challenge, we propose the Semi-Supervised Multi-View Picture Fuzzy Clustering (SS-MPFC) algorithm, which improves segmentation accuracy and robustness, particularly in complex and uncertain remote sensing scenarios. SS-MPFC unifies three paradigms: semi-supervised learning, multi-view clustering, and picture fuzzy set theory. This integration allows the model to effectively utilize a small number of labeled samples, fuse complementary information from multiple data views, and handle the ambiguity and uncertainty inherent in satellite imagery. We design a novel objective function that jointly incorporates picture fuzzy membership functions across multiple views of the data and embeds pairwise semi-supervised constraints (must-link and cannot-link) directly into the clustering process to enhance segmentation accuracy. Experiments conducted on several benchmark satellite datasets demonstrate that SS-MPFC significantly outperforms existing state-of-the-art methods in segmentation accuracy, noise robustness, and semantic interpretability. On the Augsburg dataset, SS-MPFC achieves a purity of 0.8158 and an accuracy of 0.6860, highlighting its robustness and efficiency. These results demonstrate that SS-MPFC offers a scalable and effective solution for real-world satellite-based monitoring systems, particularly in scenarios where rapid annotation is infeasible, such as wildfire tracking, agricultural monitoring, and dynamic urban mapping.
Abstract: This paper introduces a fuzzy C-means-based pooling layer for convolutional neural networks that explicitly models local uncertainty and ambiguity. Conventional pooling operations, such as max and average, apply rigid aggregation and often discard fine-grained boundary information. In contrast, our method computes soft memberships within each receptive field and aggregates cluster-wise responses through membership-weighted pooling, thereby preserving informative structure while reducing dimensionality. Being differentiable, the proposed layer can be used in place of standard two-dimensional pooling. We evaluate our approach across various CNN backbones and open datasets, including CIFAR-10/100, STL-10, LFW, and ImageNette, and further probe small-training-set regimes on MNIST and Fashion-MNIST. In these settings, the proposed pooling consistently improves accuracy and weighted F1 over conventional baselines, with particularly strong gains when training data are scarce. Even with less than 1% of the training set, our method maintains reliable performance, indicating improved sample efficiency and robustness to noisy or ambiguous local patterns. Overall, integrating soft memberships into the pooling operator provides a practical and generalizable inductive bias that enhances robustness and generalization in modern CNN pipelines.
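One plausible reading of membership-weighted pooling over a single receptive field: compute fuzzy C-means memberships of each activation toward a small set of cluster centers, then return the membership-weighted mean of the dominant cluster. The fixed centers and fuzzifier m below are illustrative assumptions, not the paper's design:

```python
def fcm_memberships(values, centers, m=2.0):
    """Fuzzy C-means memberships of each value w.r.t. each center:
    u_i = 1 / sum_j (d_i / d_j)^(2/(m-1)); rows sum to 1."""
    memberships = []
    for v in values:
        d = [abs(v - c) + 1e-12 for c in centers]  # epsilon avoids /0
        memberships.append([
            1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(len(centers)))
            for i in range(len(centers))
        ])
    return memberships

def fuzzy_pool(values, centers=(0.0, 1.0)):
    """Membership-weighted pooling over one receptive field: pick the
    cluster with the highest total membership and return the
    membership-weighted mean of the field's values for that cluster."""
    u = fcm_memberships(values, list(centers))
    totals = [sum(u[k][i] for k in range(len(values))) for i in range(len(centers))]
    best = max(range(len(centers)), key=lambda i: totals[i])
    num = sum(u[k][best] * values[k] for k in range(len(values)))
    return num / totals[best]

# A field with three strong activations and one outlier near zero.
pooled = fuzzy_pool([0.9, 1.0, 0.1, 0.95])
```

On this field, max pooling returns 1.0 and average pooling 0.7375; the fuzzy pool lands near 0.95, following the dominant response while discounting the outlier rather than discarding it outright.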
Funding: Supported by the NSFC (Grant Nos. 62176273, 62271070, 62441212), the Open Foundation of the State Key Laboratory of Networking and Switching Technology (Beijing University of Posts and Telecommunications) under Grant SKLNST-2024-1-06, and the 2025 Major Project of the Natural Science Foundation of Inner Mongolia (2025ZD008).
Abstract: The Intrusion Detection System (IDS) is a security mechanism developed to observe network traffic and recognize suspicious or malicious activities. Clustering algorithms are often incorporated into IDS; however, conventional clustering-based methods face notable drawbacks, including poor scalability on high-dimensional datasets and a strong dependence of outcomes on initial conditions. To overcome these limitations, this study proposes a novel quantum-inspired clustering algorithm that combines a similarity coefficient-based quantum genetic algorithm (SC-QGA) with an improved quantum artificial bee colony algorithm hybridized with K-means (IQABC-K). First, SC-QGA is constructed on quantum computing principles and integrates similarity coefficient theory to strengthen genetic diversity and feature extraction capabilities. In the subsequent clustering phase, IQABC-K is enhanced with adaptive rotation gate and movement exploitation strategies to balance global exploration and local exploitation. Convergence toward the global optimum is accelerated, and computational complexity reduced, through a global-optimum bootstrap strategy and a linear population reduction strategy. In experimental evaluations against multiple algorithms across diverse performance metrics, the proposed algorithm achieves reliable accuracies of 98.57%, 98.81%, and 98.32% on the KDD CUP99, NSL_KDD, and UNSW_NB15 datasets, respectively. These results affirm its potential as an effective solution for practical clustering applications.
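The adaptive rotation gate idea can be sketched as follows (the angle schedule and bounds are illustrative assumptions, not the paper's exact parameters): each qubit is a pair of amplitudes updated by a rotation matrix, with the rotation angle shrinking over iterations so the search shifts from exploration toward exploitation.

```python
import numpy as np

def rotate_qubit(alpha, beta, theta):
    # Quantum rotation gate: a 2D rotation of the amplitude pair.
    # Preserves the normalization alpha^2 + beta^2 = 1.
    a = alpha * np.cos(theta) - beta * np.sin(theta)
    b = alpha * np.sin(theta) + beta * np.cos(theta)
    return a, b

def adaptive_theta(t, t_max, theta_max=0.05 * np.pi, theta_min=0.005 * np.pi):
    # Larger angles early (global exploration), smaller late (local exploitation).
    return theta_max - (theta_max - theta_min) * (t / t_max)
```

In a quantum-inspired population, each candidate solution is sampled by measuring such qubits (probability `beta**2` of observing a 1), and the gate rotates amplitudes toward the best solution found so far.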
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 52069029, 52369026), the Belt and Road Special Foundation of the National Key Laboratory of Water Disaster Prevention (Grant No. 2023490411), the Yunnan Agricultural Basic Research Joint Special General Project (Grant Nos. 202501BD070001-060, 202401BD070001-071), the Construction Project of the Yunnan Key Laboratory of Water Security (No. 20254916CE340051), and the Youth Talent Project of the "Xingdian Talent Support Plan" in Yunnan Province (Grant No. XDYC-QNRC-2023-0412).
Abstract: Deformation prediction for extra-high arch dams is highly important for ensuring their safe operation. To address the challenges of complex monitoring data, the uneven spatial distribution of deformation, and the construction and optimization of a prediction model, a multipoint extra-high arch dam deformation prediction model based on clustering partition, CEEMDAN-KPCA-GSWOA-KELM, is proposed. First, the monitoring data are preprocessed via variational mode decomposition (VMD) and wavelet denoising (WT), which effectively filters out noise and improves the signal-to-noise ratio of the data, providing high-quality input for the subsequent prediction models. Second, scientific cluster partitioning is performed via the K-means++ algorithm to precisely capture the spatial distribution characteristics of extra-high arch dams and ensure consistent deformation trends at the measurement points within each partition. Finally, CEEMDAN is used to decompose the monitoring data; each component is predicted using a Kernel Extreme Learning Machine (KELM) combined with Kernel Principal Component Analysis (KPCA) and optimized by the Global Search Whale Optimization Algorithm (GSWOA), and the component predictions are integrated via reconstruction to precisely predict the overall deformation trend. An extra-high arch dam project is taken as an example and validated via comparative analysis of multiple models. The results show that the proposed multipoint deformation prediction model can combine data from different measurement points, achieve comprehensive and precise prediction of extra-high arch dam deformation, and provide strong technical support for safe operation.
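The K-means++ seeding used for partitioning the measurement points can be sketched in a few lines (a generic implementation of the standard algorithm, not the paper's code): after the first center is chosen at random, each subsequent center is drawn with probability proportional to the squared distance to the nearest already-chosen center, which spreads the initial centers out.

```python
import numpy as np

def kmeanspp_init(X, k, seed=0):
    # X: (n, d) feature vectors, e.g., the deformation series of each
    # measurement point; returns k well-spread initial cluster centers.
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        C = np.asarray(centers)
        # squared distance of every point to its nearest chosen center
        d2 = np.min(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.asarray(centers)
```

With two well-separated groups of points, the seeding places one initial center in each group, which is exactly the behavior that makes the subsequent partitioning of measurement points stable.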
Funding: Supported by the National Science and Technology Council, Taiwan, under Grants 113-2221-E-260-014-MY2 and 114-2119-M-033-001.
Abstract: The rapid growth of mobile and Internet of Things (IoT) applications in dense urban environments places stringent demands on future Beyond 5G (B5G) and 6G networks, which must ensure high Quality of Service (QoS) while maintaining cost-efficiency and sustainable deployment. Traditional strategies struggle with complex 3D propagation, building penetration loss, and the balance between coverage and infrastructure cost. To address this challenge, this study presents the first application of a Global-best Guided Quantum-inspired Tabu Search with Quantum-Not Gate (GQTS-QNG) framework to 3D base-station deployment optimization. The problem is formulated as a multi-objective model that simultaneously maximizes coverage and minimizes deployment cost. A binary-to-decimal encoding mechanism is designed to represent discrete placement coordinates and base-station types, leveraging a quantum-inspired method to efficiently search and refine solutions within challenging combinatorial environments. Global-best guidance and tabu memory are integrated to strengthen convergence stability and avoid revisiting previously explored solutions. Simulation results across user densities ranging from 1000 to 10,000 show that GQTS-QNG consistently finds deployment configurations achieving full coverage while reducing deployment cost compared with state-of-the-art algorithms under equal iteration budgets. Additionally, our method generates well-distributed and structured Pareto fronts, offering diverse planning options that allow operators to flexibly balance cost and performance requirements. These findings demonstrate that GQTS-QNG is a scalable and efficient algorithm for sustainable 3D cellular network deployment in B5G/6G urban scenarios.
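The binary-to-decimal encoding can be sketched as follows (the field widths, grid dimensions, and number of station types below are illustrative assumptions, not the paper's configuration): each candidate's bitstring is split into fixed-width fields that are decoded into discrete placement coordinates and a station type.

```python
def decode_station(bits, x_bits=8, y_bits=8, z_bits=4, type_bits=2,
                   grid=(100, 100, 30), n_types=3):
    # Decode one candidate's bitstring into (x, y, z, station_type).
    # Field widths, grid size, and number of station types are hypothetical.
    fields, i = [], 0
    for nb, upper in zip((x_bits, y_bits, z_bits, type_bits),
                         (grid[0], grid[1], grid[2], n_types)):
        raw = int("".join(str(b) for b in bits[i:i + nb]), 2)
        i += nb
        fields.append(raw * upper // (1 << nb))  # map raw value onto [0, upper)
    return tuple(fields)
```

A full deployment candidate concatenates one such segment per base station; the quantum-inspired search then operates on the bitstring while the objective evaluation works on the decoded coordinates and types.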
Funding: Supported by the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), the Ministry of Health & Welfare, Republic of Korea (No. RS-2020-KH088726); the Patient-Centered Clinical Research Coordinating Center (PACEN), the Ministry of Health and Welfare, Republic of Korea (No. HC19C0276); and the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) (No. RS-2023-00247504).
Abstract: AIM: To evaluate long-term visual field (VF) prediction using K-means clustering in patients with primary open-angle glaucoma (POAG). METHODS: Patients who underwent ≥10 24-2 VF tests were included in this study. Using the 52 total deviation values (TDVs) from the first 10 VF tests of the training dataset, VF points were clustered into several regions using the hierarchical ordered partitioning and collapsing hybrid (HOPACH) and K-means clustering. Based on the clustering results, linear regression analysis was applied to each clustered region of the testing dataset to predict the TDVs of the 10th VF test. Three to nine VF tests were used to predict the 10th VF test, and the prediction errors (root mean square error, RMSE) of each clustering method and pointwise linear regression (PLR) were compared. RESULTS: The training group consisted of 228 patients (mean age, 54.20±14.38 years; 123 males and 105 females), and the testing group included 81 patients (mean age, 54.88±15.22 years; 43 males and 38 females). All subjects were diagnosed with POAG. The 52 VF points were clustered into 11 and nine regions using HOPACH and K-means clustering, respectively. K-means clustering had a lower prediction error than PLR when n=1:3 and 1:4 (both P≤0.003). The prediction errors of K-means clustering were lower than those of HOPACH in all sections (n=1:4 to 1:9; all P≤0.011), except for n=1:3 (P=0.680). PLR outperformed K-means clustering only when n=1:8 and 1:9 (both P≤0.020). CONCLUSION: K-means clustering can predict long-term VF test results more accurately in patients with POAG when VF data are limited.
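The region-wise trend extrapolation at the core of this approach can be sketched as follows (a simplified illustration; the study's exact regression setup is not reproduced): TDVs within a clustered region are averaged per test, a linear trend over the test sequence is fitted, and the trend is extrapolated to the target test, with RMSE measuring the prediction error.

```python
import numpy as np

def predict_region_tdv(region_tdvs, target_index):
    # region_tdvs: (n_tests, n_points) TDVs of one clustered region's VF points.
    series = np.asarray(region_tdvs).mean(axis=1)   # mean TDV per test
    t = np.arange(len(series))
    slope, intercept = np.polyfit(t, series, 1)     # linear trend over tests
    return slope * target_index + intercept          # extrapolated mean TDV

def rmse(pred, actual):
    # Root mean square error between predicted and observed TDVs.
    pred, actual = np.asarray(pred), np.asarray(actual)
    return float(np.sqrt(np.mean((pred - actual) ** 2)))
```

The advantage over pointwise regression is that the per-region mean series is far less noisy than any single VF point's series, which is why clustering helps most when few tests are available.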