As the number and complexity of sensors in autonomous vehicles continue to rise, multimodal fusion-based object detection algorithms are increasingly being used to detect 3D environmental information, significantly advancing the development of perception technology in autonomous driving. To further promote the development of fusion algorithms and improve detection performance, this paper discusses the advantages and recent advancements of multimodal fusion-based object detection algorithms. Starting from single-modal sensor detection, the paper provides a detailed overview of typical sensors used in autonomous driving and introduces object detection methods based on images and point clouds. Image-based detection methods are categorized into monocular and binocular detection according to their input types. Point cloud-based detection methods are classified into projection-based, voxel-based, point cluster-based, pillar-based, and graph structure-based approaches according to the technical pathways used to process point cloud features. Additionally, multimodal fusion algorithms are divided into Camera-LiDAR fusion, Camera-Radar fusion, Camera-LiDAR-Radar fusion, and other sensor fusion methods according to the types of sensors involved. Furthermore, the paper identifies five key future research directions in this field, aiming to provide insights for researchers engaged in multimodal fusion-based object detection algorithms and to encourage broader attention to the research and application of multimodal fusion-based object detection.
To address the challenge of missing modal information in entity alignment and to mitigate the information loss or bias arising from modal heterogeneity during fusion, while also capturing information shared across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph structural and visual modal features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the fusion process. Finally, using cross-modal deep perception reinforcement learning, the model achieves adaptive multilevel feature fusion between modalities, supporting the learning of more effective alignment strategies. Extensive experiments on multiple public datasets show that MPSEA achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared with existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
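The pre-synergistic step can be pictured as projecting the graph and visual features into the text space and injecting them as gated preparatory information before the main fusion stage. The PyTorch sketch below illustrates that idea only; the module name, dimensions, and the additive gating are illustrative assumptions, not the published MPSEA design.

```python
import torch
import torch.nn as nn

class PreSynergisticFusion(nn.Module):
    """Inject graph and visual features into the text modality as
    preparatory information before the main fusion stage (sketch)."""
    def __init__(self, d_text=300, d_graph=128, d_vis=2048):
        super().__init__()
        self.graph_proj = nn.Linear(d_graph, d_text)  # map graph features to text space
        self.vis_proj = nn.Linear(d_vis, d_text)      # map visual features to text space
        self.gate = nn.Linear(3 * d_text, d_text)     # learn how much side info to admit

    def forward(self, text, graph, vis):
        g = self.graph_proj(graph)
        v = self.vis_proj(vis)
        mix = torch.sigmoid(self.gate(torch.cat([text, g, v], dim=-1)))
        # text stays dominant; graph/visual enter as a gated preparatory signal
        return text + mix * (g + v)

# usage: per-entity embeddings, batch of 4
fused = PreSynergisticFusion()(torch.randn(4, 300), torch.randn(4, 128), torch.randn(4, 2048))
print(fused.shape)  # torch.Size([4, 300])
```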
Multi-modal Named Entity Recognition (MNER) aims to better identify meaningful textual entities by integrating information from images. Previous work has focused on extracting visual semantics at a fine-grained level, or on obtaining entity-related external knowledge from knowledge bases or Large Language Models (LLMs). However, these approaches ignore the poor semantic correlation between the visual and textual modalities in MNER datasets and do not explore different multi-modal fusion approaches. In this paper, we present MMAVK, a multi-modal named entity recognition model with auxiliary visual knowledge and word-level fusion, which leverages a Multi-modal Large Language Model (MLLM) as an implicit knowledge base and extracts vision-based auxiliary knowledge from the image for more accurate and effective recognition. Specifically, we propose vision-based auxiliary knowledge generation, which guides the MLLM to extract external knowledge exclusively derived from images to aid entity recognition by designing target-specific prompts, thus avoiding the redundant recognition and cognitive confusion caused by processing image-text pairs simultaneously. Furthermore, we employ a word-level multi-modal fusion mechanism to fuse the extracted external knowledge with each word embedding produced by the transformer-based encoder. Extensive experimental results demonstrate that MMAVK outperforms or equals state-of-the-art methods on the two classical MNER datasets, even when the large models employed have significantly fewer parameters than other baselines.
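Word-level fusion of this kind can be pictured as combining the auxiliary knowledge vector with every token embedding individually, for example through a per-word gate. The snippet below is an assumed minimal form, not the exact MMAVK mechanism.

```python
import torch
import torch.nn as nn

class WordLevelFusion(nn.Module):
    """Fuse one auxiliary-knowledge vector with each word embedding (sketch)."""
    def __init__(self, d_model=768):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, words, knowledge):
        # words: (batch, seq_len, d), knowledge: (batch, d)
        k = knowledge.unsqueeze(1).expand_as(words)        # broadcast to every token
        g = torch.sigmoid(self.gate(torch.cat([words, k], dim=-1)))
        return g * words + (1.0 - g) * k                   # per-word convex mixture

out = WordLevelFusion()(torch.randn(2, 16, 768), torch.randn(2, 768))
print(out.shape)  # torch.Size([2, 16, 768])
```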
For the deployment of wireless networks in two-dimensional outdoor campus spaces, and aiming at the problem of efficiently covering the monitoring area with a limited number of access points (APs), this paper proposes a multi-objective optimization deployment method using a virtual force fusion bat algorithm (VFBA), taking the classical four-node regular distribution as its entry point. Introducing a Lévy flight strategy into the bat position update helps maintain population diversity, reduces the premature convergence caused by population clustering, avoids over-aggregation of individuals in locally optimal regions, and strengthens global search. The virtual force algorithm simulates attraction and repulsion between individuals, enabling individual bats to locate the optimal solution precisely within the search space; at the same time, the fusion effect of the virtual force drives bat individuals toward potential optimal solutions faster. To validate the effectiveness of the fusion algorithm, benchmark test functions were selected for simulation testing. Finally, the simulation results verify that the VFBA achieves superior coverage and effectively reduces node redundancy compared with three other regular layout methods. The VFBA also shows better coverage results when compared with other optimization algorithms.
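The Lévy flight component has a standard form (Mantegna's algorithm) and can be sketched directly; the position-update rule around it is an assumed simplification of the VFBA update, not the published one.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5, rng=np.random.default_rng()):
    """One Lévy-flight step via Mantegna's algorithm (standard form)."""
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)      # heavy-tailed step lengths

def update_position(x, best, alpha=0.01, rng=np.random.default_rng()):
    """Bat position update with a Lévy perturbation (assumed simplified form)."""
    return x + alpha * levy_step(x.size, rng=rng) * (x - best)

x_new = update_position(np.random.rand(2) * 100, np.array([50.0, 50.0]))
print(x_new)
```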
The traditional A* algorithm exhibits low efficiency in the path planning of unmanned surface vehicles (USVs). In addition, the planned path contains numerous redundant inflection waypoints and its security is low, which is not conducive to USV control and also affects navigation safety. In this paper, these problems were addressed through the following improvements. First, the path search angle and security were comprehensively considered, and a security expansion strategy for nodes based on the 5×5 neighborhood was proposed. The A* search neighborhood was expanded from 3×3 to 5×5, and safe nodes were screened out for extension via the node security expansion strategy; this also optimizes path search angles while improving path security. Second, the distance from the current node to the target node was introduced into the heuristic function, improving the efficiency of the A* algorithm, and the path was smoothed using the Floyd algorithm. To dynamically adjust the weights and improve the efficiency of the dynamic-window approach (DWA), the distance from the USV to the target point was introduced into the DWA evaluation function. Finally, combined with a local target point selection strategy, the optimized DWA algorithm was used for local path planning. The experimental results show that the fusion algorithm plans a smooth and safe path that successfully avoids dynamic obstacles, and that it is effective and feasible for USV path planning.
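As a toy illustration of the two grid-level changes, the sketch below enumerates a 5×5 search neighborhood and weights the heuristic by the remaining distance to the goal. The specific weighting formula is an assumption; the paper's exact function is not reproduced here.

```python
import math

def neighbors_5x5(node):
    """All 24 offsets in the 5x5 window around a grid node (the paper's
    safety screening of these candidates is applied separately)."""
    x, y = node
    return [(x + dx, y + dy)
            for dx in range(-2, 3) for dy in range(-2, 3)
            if (dx, dy) != (0, 0)]

def heuristic(node, goal, w_max=2.0):
    """Euclidean heuristic scaled by remaining distance: far from the goal
    the search is greedier, near it the weight decays toward 1 (assumed form)."""
    d = math.dist(node, goal)
    w = 1.0 + (w_max - 1.0) * d / (d + 1.0)
    return w * d

print(len(neighbors_5x5((0, 0))), heuristic((0, 0), (30, 40)))  # 24, ~98.0
```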
Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research. However, current research on wireless sensor network deployment uses overly simplistic models, and there is a significant gap between research results and actual wireless sensor networks. Some scholars have therefore modeled data fusion networks to make them more suitable for practical applications. This paper explores the deployment problem of a stochastic data fusion wireless sensor network (SDFWSN), a model that reflects the randomness of environmental monitoring and uses the data fusion techniques widely employed in real sensor networks for information collection. The deployment problem of the SDFWSN is modeled as a multi-objective optimization problem, with the network life cycle, spatiotemporal coverage, detection rate, and false alarm rate used as optimization objectives for node deployment. This paper proposes an enhanced multi-objective mongoose optimization algorithm (EMODMOA) to solve the SDFWSN deployment problem. First, to overcome shortcomings of the DMOA algorithm, such as slow convergence and a tendency to become trapped in local optima, an encircling and hunting strategy is introduced into the original algorithm, yielding the EDMOA algorithm. The EDMOA algorithm is then extended into the EMODMOA algorithm by selecting reference points using the K-Nearest Neighbor (KNN) algorithm. To verify its effectiveness, the EMODMOA algorithm was tested on the CEC 2020 benchmark suite and achieved good results. For the SDFWSN deployment problem, the algorithm was compared with the Non-dominated Sorting Genetic Algorithm II (NSGA-II), Multiple Objective Particle Swarm Optimization (MOPSO), the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), and the Multi-Objective Grey Wolf Optimizer (MOGWO). Comparative analysis of the performance evaluation metrics and the optimization results of the objective functions shows that the proposed algorithm outperforms the other algorithms on the SDFWSN deployment results. To further demonstrate its superiority, simulations of diverse test cases were also performed, with good results.
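At the core of any such multi-objective comparison is Pareto dominance. The sketch below shows the standard dominance test and a brute-force non-dominated filter for a small population; the four objective values are illustrative stand-ins for lifetime, coverage, detection, and false-alarm terms.

```python
import numpy as np

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def pareto_front(points):
    """Indices of the non-dominated solutions in a population."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

objs = np.array([[0.9, 0.2, 0.1, 0.05],   # illustrative: [lifetime cost,
                 [0.8, 0.3, 0.2, 0.04],   #  coverage gap, miss rate,
                 [0.7, 0.4, 0.1, 0.06]])  #  false-alarm rate]
print(pareto_front(objs))  # [0, 1, 2]: mutually non-dominated here
```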
Multi-modal fusion technology has gradually become a fundamental task in many fields, such as autonomous driving, smart healthcare, sentiment analysis, and human-computer interaction, and is rapidly becoming a dominant research direction due to its powerful perception and judgment capabilities. In complex scenes, multi-modal fusion technology exploits the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions. However, achieving outstanding performance is challenging because of equipment performance limitations, missing information, and data noise. This paper comprehensively reviews existing methods based on multi-modal fusion techniques and provides a detailed and in-depth analysis. According to the data fusion stage, multi-modal fusion has four primary methods: early fusion, deep fusion, late fusion, and hybrid fusion. The paper surveys the three major multi-modal fusion technologies that can significantly enhance the effect of data fusion and further explores the applications of multi-modal fusion technology in various fields. Finally, it discusses the challenges and explores potential research opportunities. Multi-modal tasks still need intensive study because of data heterogeneity and quality; preserving complementary information and eliminating redundant information between modalities is critical, and invalid data fusion methods may introduce extra noise and lead to worse results. This paper provides a comprehensive and detailed summary in response to these challenges.
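The two ends of the stage taxonomy are easy to make concrete; deep and hybrid fusion sit between them. A minimal sketch with illustrative feature and probability shapes:

```python
import numpy as np

def early_fusion(x_img, x_txt):
    """Early fusion: concatenate low-level features, then train one model."""
    return np.concatenate([x_img, x_txt], axis=-1)

def late_fusion(p_img, p_txt, w=0.5):
    """Late fusion: each modality predicts independently; combine decisions."""
    return w * p_img + (1.0 - w) * p_txt

feats = early_fusion(np.random.rand(4, 512), np.random.rand(4, 300))
probs = late_fusion(np.array([0.7, 0.3]), np.array([0.6, 0.4]))
print(feats.shape, probs)  # (4, 812) [0.65 0.35]
```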
Automated tissue segmentation of the lumbar spine is vital for the analysis of spinal and disc diseases. Due to the continuous and concentrated location of the target, the abundance of edge features, and individual differences, conventional automatic segmentation methods perform poorly. Given the success of deep learning in medical image segmentation over the past few years, it has been applied to this task in a number of ways. However, the multi-scale and multi-modal features of lumbar tissues are rarely explored by deep learning methodologies. Because of limited medical image availability, it is crucial to effectively fuse data from various acquisition modes for model training to alleviate the problem of insufficient samples. In this paper, we propose a novel multi-modality hierarchical fusion network (MHFN) for improving lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images. An adaptive group fusion module (AGFM) is introduced to fuse features from various modes and extract potentially valuable cross-modality features. Furthermore, to combine cross-modality features from low to high levels, we design a hierarchical fusion structure based on the AGFM. Experimental results on multi-modality MR images of the lumbar spine show that the AGFM is more effective than other feature fusion methods. To further assess segmentation accuracy, we compare our network with baseline fusion structures (input-level: 76.27%, layer-level: 78.10%, decision-level: 79.14%); our network segments fractured vertebrae more accurately (85.05%).
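In the spirit of the AGFM, a minimal adaptive fusion of two modality feature maps can be sketched as learning a per-modality weight from the pooled joint features; the published module's grouping and internals are not reproduced here.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse feature maps from two MR modalities with learned per-modality
    weights (a sketch in the spirit of the AGFM, not the published module)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, 2 * channels // reduction, 1), nn.ReLU(),
            nn.Conv2d(2 * channels // reduction, 2, 1))  # one weight per modality

    def forward(self, a, b):
        w = torch.softmax(self.attn(torch.cat([a, b], dim=1)), dim=1)
        return w[:, :1] * a + w[:, 1:] * b   # convex, input-dependent mixture

out = AdaptiveFusion(64)(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```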
Fusing hand-based features in multi-modal biometric recognition enhances anti-spoofing capabilities and leverages inter-modal correlation to enhance recognition performance; the robustness and recognition performance of the system can be further enhanced by judiciously leveraging the correlation among multimodal features. Nevertheless, two issues persist in multi-modal feature fusion recognition: first, efforts to improve fusion recognition performance have not comprehensively considered the inter-modality correlations among distinct modalities; second, during modal fusion, improper weight selection diminishes the salience of crucial modal features, thereby lowering overall recognition performance. To address these two issues, we introduce an enhanced DenseNet multimodal recognition network founded on feature-level fusion. The information from the three modalities is fused akin to the channels of an RGB image, and the input network strengthens the correlation between modes through channel correlation. Within the enhanced DenseNet network, the Efficient Channel Attention Network (ECA-Net) dynamically adjusts the weight of each channel to amplify the salience of crucial information in each modal feature, while depthwise separable convolution markedly reduces the training parameters and further enhances feature correlation. Experimental evaluations were conducted on four multimodal databases built from six unimodal databases, including the multispectral palmprint and palm vein databases from the Chinese Academy of Sciences. The Equal Error Rate (EER) values were 0.0149%, 0.0150%, 0.0099%, and 0.0050%, respectively. In comparison with other network methods for palmprint, palm vein, and finger vein fusion recognition, this approach substantially enhances recognition performance, rendering it suitable for high-security environments with practical applicability. The experiments in this article utilized a modest sample database comprising 200 individuals; the subsequent phase involves preparing to extend the method to larger databases.
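ECA-Net has a standard published form: a global average pool followed by a lightweight 1-D convolution across channels and a sigmoid gate. The sketch below follows that form; the kernel size is a typical default rather than the value tuned in this paper.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: global average pool, 1-D conv across
    channels, sigmoid gate (standard ECA-Net formulation)."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                         # x: (B, C, H, W)
        y = x.mean(dim=(2, 3))                    # global average pool -> (B, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)  # local cross-channel interaction
        return x * self.sigmoid(y)[:, :, None, None]  # reweight each channel

out = ECA()(torch.randn(2, 64, 28, 28))
print(out.shape)  # torch.Size([2, 64, 28, 28])
```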
In recent years, efficiently and accurately identifying multi-modal fake news has become more challenging. First, multi-modal data provides more evidence, but not all of it is equally important. Second, social structure information has proven effective in fake news detection, and how to incorporate it while reducing noise is critical. Unfortunately, existing approaches fail to handle these problems. This paper proposes a multi-modal fake news detection framework based on Text-modal Dominance and fusing Multiple Multi-modal Cues (TD-MMC), which utilizes three valuable multi-modal clues: text-modal importance, text-image complementarity, and text-image inconsistency. TD-MMC is dominated by textual content and assisted by image information, while using social network information to enhance the text representation. To reduce interference from irrelevant social structure information, we use a unidirectional cross-modal attention mechanism to selectively learn the social structure's features. A cross-modal attention mechanism is adopted to obtain text-image cross-modal features while retaining textual features, reducing the loss of important information. In addition, TD-MMC employs a new multi-modal loss to improve the model's generalization ability. Extensive experiments have been conducted on two public real-world English and Chinese datasets, and the results show that our proposed model outperforms state-of-the-art methods on classification evaluation metrics.
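The unidirectional design can be sketched with a standard attention layer in which text provides the queries and the other modality provides the keys and values, so only the text side is updated. The dimensions and residual connection below are illustrative assumptions, not the TD-MMC specifics.

```python
import torch
import torch.nn as nn

class UnidirectionalCrossAttention(nn.Module):
    """Text queries attend over the other modality; only the text
    representation is updated, keeping the text modality dominant (sketch)."""
    def __init__(self, d_model=768, n_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, text, other):
        # text: (B, T, d) as queries; other: (B, R, d) as keys/values
        ctx, _ = self.attn(query=text, key=other, value=other)
        return text + ctx   # residual retains the original textual features

out = UnidirectionalCrossAttention()(torch.randn(2, 32, 768), torch.randn(2, 49, 768))
print(out.shape)  # torch.Size([2, 32, 768])
```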
A memetic algorithm (MA) for the multi-mode resource-constrained project scheduling problem (MRCPSP) is proposed. We use a new fitness function and two very effective local search procedures in the proposed MA. The fitness function makes use of a mechanism called "strategic oscillation" to give the search process a higher probability of visiting solutions around the "feasible boundary". One local search procedure aims to improve the lower bound of the project makespan to below a known upper bound, and the other aims to improve a solution of an MRCPSP instance, accepting infeasible solutions during the search based on the new fitness function. A detailed computational experiment is set up using instances from the problem instance library PSPLIB. Computational results show that the proposed MA is very competitive with state-of-the-art algorithms; the MA obtains an improved solution for one instance of set J30.
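Strategic oscillation deliberately lets the search cross the feasibility boundary by varying how hard infeasibility is penalized. The sketch below is an assumed minimal form of such a fitness, not the paper's exact function.

```python
import math

def oscillating_fitness(makespan, violation, wave):
    """Strategic-oscillation-style fitness: the penalty on constraint
    violation rises and falls over time so the search repeatedly crosses
    the feasible boundary (assumed minimal form)."""
    penalty = 1.0 + math.sin(wave)      # oscillates in [0, 2]
    return makespan + penalty * violation

# near penalty troughs infeasible solutions look cheap and are explored;
# near crests the search is pushed back inside the feasible region
for t in range(4):
    print(oscillating_fitness(makespan=100, violation=8, wave=1.5 * t))
```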
Fault diagnosis of rolling bearings is crucial for ensuring the stable operation of mechanical equipment and production safety in industrial environments. However, due to the nonlinearity and non-stationarity of collected vibration signals, single-modal methods struggle to capture fault features fully. This paper proposes a rolling bearing fault diagnosis method based on multi-modal information fusion. The method first employs the Hippopotamus Optimization algorithm (HO) to optimize the number of modes in Variational Mode Decomposition (VMD) for optimal decomposition performance. It combines Convolutional Neural Networks (CNN) and Gated Recurrent Units (GRU) to extract temporal features from one-dimensional time-series signals. Meanwhile, the Markov Transition Field (MTF) is used to transform one-dimensional signals into two-dimensional images for spatial feature mining; visualization techniques are used to compare the effectiveness of images generated from different parameter combinations and determine the optimal configuration. A multi-modal network (GSTCN) is constructed by integrating the Swin Transformer and the Convolutional Block Attention Module (CBAM), where the attention module enhances fault features. Finally, the fault features extracted from the different modalities are deeply fused and fed into a fully connected layer to complete fault classification. Experimental results show that the GSTCN model achieves an average diagnostic accuracy of 99.5% across three datasets, significantly outperforming existing comparison methods. This demonstrates that the proposed model has high diagnostic precision and good generalization ability, providing an efficient and reliable solution for rolling bearing fault diagnosis.
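The MTF transform itself is well defined and can be sketched compactly: quantile-bin the signal, estimate the bin-to-bin transition probabilities, and spread them over all time-index pairs to form an image. The bin count and test signal below are illustrative.

```python
import numpy as np

def markov_transition_field(x, n_bins=8):
    """Markov Transition Field: quantile-bin the signal, estimate the
    bin-to-bin transition matrix, then expand it over all time pairs."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    bins = np.digitize(x, edges)                      # bin index per sample
    W = np.zeros((n_bins, n_bins))
    for i, j in zip(bins[:-1], bins[1:]):             # count transitions
        W[i, j] += 1
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1)  # row-normalize
    return W[np.ix_(bins, bins)]                      # (len(x), len(x)) image

sig = np.sin(np.linspace(0, 20, 128)) + 0.1 * np.random.randn(128)
print(markov_transition_field(sig).shape)  # (128, 128)
```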
Multi-modal knowledge graph completion (MMKGC) aims to complete missing entities or relations in multi-modal knowledge graphs, thereby discovering more previously unknown triples. Due to the continuous growth of data and knowledge and the limitations of data sources, the visual knowledge within knowledge graphs is generally of low quality, and some entities suffer from a missing visual modality. Nevertheless, previous MMKGC studies have primarily focused on facilitating modality interaction and fusion while neglecting the problems of low modality quality and missing modalities. Mainstream MMKGC models therefore only use pre-trained visual encoders to extract features and transfer the semantic information to the joint embeddings through modal fusion, which inevitably suffers from problems such as error propagation and increased uncertainty. To address these problems, we propose a Multi-modal knowledge graph Completion model based on Super-resolution and Detailed Description Generation (MMCSD). Specifically, we leverage a pre-trained residual network to enhance the resolution and improve the quality of the visual modality. Moreover, we design multi-level visual semantic extraction and entity description generation, thereby further extracting entity semantics from structural triples and visual images. Meanwhile, we train a variational multi-modal auto-encoder and utilize a pre-trained multi-modal language model to complement the missing visual features. We conducted experiments on FB15K-237 and DB13K, and the results showed that MMCSD performs MMKGC effectively and achieves state-of-the-art performance.
Integrating multiple medical imaging techniques, including Magnetic Resonance Imaging (MRI), Computed Tomography, Positron Emission Tomography (PET), and ultrasound, provides a comprehensive view of a patient's health status; each method contributes unique diagnostic insights that enhance the overall assessment of the patient's condition. Nevertheless, amalgamating data from multiple modalities is difficult due to disparities in resolution, data collection methods, and noise levels. While traditional models like Convolutional Neural Networks (CNNs) excel at single-modality tasks, they struggle with multi-modal complexity and lack the capacity to model global relationships. This research presents a novel transformer-based system for analyzing multi-modal medical imagery. The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across modalities, and it is resilient to variations in noise and image quality, making it adaptable for real-time clinical use. To address the computational hurdles of transformer models, particularly for real-time clinical applications in resource-constrained environments, several optimization techniques were integrated to boost scalability and efficiency. A streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness; model pruning, quantization, and knowledge distillation were applied to reduce the parameter count and speed up inference; and efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of standard self-attention. For deployment, hardware-aware acceleration strategies, including TensorRT and ONNX-based model compression, ensure efficient execution on edge devices. These optimizations allow the approach to function effectively in real-time clinical settings, even in environments with limited resources. Future research directions include integrating non-imaging data to facilitate personalized treatment and further improving computational efficiency for resource-limited environments. This study highlights the transformative potential of transformer models in multi-modal medical imaging, offering improvements in diagnostic accuracy and patient care outcomes.
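Of the optimizations listed, post-training dynamic quantization is the easiest to make concrete. The sketch below applies PyTorch's stock API to a stand-in linear head; the model is a placeholder, not the paper's clinical transformer.

```python
import torch
import torch.nn as nn

# a stand-in for the transformer head; the real clinical model is assumed
model = nn.Sequential(nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 4))

# post-training dynamic quantization: weights stored in int8, activations
# quantized on the fly -- a smaller model and faster CPU inference
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    print(quantized(torch.randn(1, 768)).shape)  # torch.Size([1, 4])
```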
The multi-modal characteristics of mineral particles play a pivotal role in enhancing classification accuracy, which is critical for a profound understanding of the Earth's composition and for the effective exploitation and utilization of its resources. However, existing mineral particle classification methods do not fully utilize these multi-modal features, limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they encounter issues such as information loss, misaligned features, and difficulty extracting spatiotemporal features. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. MMGC-Net first employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images, ensuring feature alignment. Subsequently, a cross-polarized intra-modal feature fusion module refines the spatiotemporal features extracted from the cross-polarized sequence images. Finally, an inter-modal feature fusion module integrates the two types of modal features to enhance classification precision. Quantitative and qualitative experimental results indicate that, compared with current state-of-the-art multi-modal image classification methods, MMGC-Net is markedly superior in mineral particle multi-modal feature learning and on four classification evaluation metrics, and it also demonstrates better stability than existing models.
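The shared-parameter idea is simple to sketch: one backbone encodes both polarization inputs so their features live in an aligned space. The toy backbone and tensor shapes below are illustrative assumptions, not the MMGC-Net architecture.

```python
import torch
import torch.nn as nn

class SharedBackboneEncoder(nn.Module):
    """One 2-D backbone with shared parameters encodes both polarization
    modalities, keeping their features aligned (sketch)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, plane_pol, cross_pol_seq):
        f_plane = self.backbone(plane_pol)                 # (B, 64)
        # fold the cross-polarized sequence into the batch; same weights reused
        b, t = cross_pol_seq.shape[:2]
        f_cross = self.backbone(cross_pol_seq.flatten(0, 1)).view(b, t, -1)
        return f_plane, f_cross

fp, fc = SharedBackboneEncoder()(torch.randn(2, 3, 64, 64), torch.randn(2, 5, 3, 64, 64))
print(fp.shape, fc.shape)  # torch.Size([2, 64]) torch.Size([2, 5, 64])
```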
This paper proposes SW-YOLO (StarNet Weighted-Conv YOLO), a lightweight human pose estimation network for edge devices. Current mainstream pose estimation algorithms are computationally inefficient and capture features poorly for complex poses and occlusion scenarios. This work introduces a lightweight backbone architecture that integrates WConv (Weighted Convolution) and StarNet modules to address these issues. Leveraging StarNet's strengths in multi-level feature fusion and long-range dependency modeling, the architecture enhances the model's spatial perception of human joint structures and its integration of contextual information, significantly improving robustness in complex scenarios involving occlusion and deformation. Additionally, the WConv convolution operations, based on weight recalibration and receptive field optimization, dynamically adjust feature importance during convolution, reducing redundant computation while maintaining or enhancing feature representation at very low computational cost. Consequently, SW-YOLO substantially reduces model complexity and inference latency while preserving high accuracy, significantly outperforming existing lightweight networks.
To address the difficulties of fusing multi-mode sensor data for complex industrial machinery, an adaptive deep coupling convolutional auto-encoder (ADCCAE) fusion method was proposed. First, the multi-mode features extracted synchronously by the CCAE were stacked and fed to multi-channel convolution layers for fusion. Then, the fused data was passed to fully connected layers for compression and fed to a Softmax module for classification. Finally, the coupling loss function coefficients and the network parameters were optimized adaptively using the gray wolf optimization (GWO) algorithm. Experimental comparisons showed that the proposed ADCCAE fusion model was superior to existing models for multi-mode data fusion.
The traditional EnFCM (Enhanced Fuzzy C-Means) algorithm considers only grey-scale features in image segmentation, yielding unsatisfactory results when used for remote sensing woodland image segmentation and extraction. An EnFCM remote sensing forest land extraction method based on PCA multi-feature fusion is therefore proposed. First, histogram equalization is applied to improve image contrast. Second, the texture and edge features of the image are extracted, and a multi-feature fused pixel image is generated using the PCA technique; the fused feature is then used as a feature constraint to measure pixel differences in place of the single grey-scale feature. Finally, an improved feature distance metric calculates the similarity between pixel points and cluster centers to complete the cluster segmentation. Experimental results showed an error between 1.5% and 4.0% relative to forested areas hand-delineated by experts, demonstrating high-accuracy segmentation and extraction.
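The multi-feature fusion step can be sketched directly with scikit-learn: stack the per-pixel grey, texture, and edge features and project them onto the first principal component. The random arrays below stand in for real feature maps.

```python
import numpy as np
from sklearn.decomposition import PCA

# per-pixel feature stack: grey level plus assumed texture/edge channels
h, w = 64, 64
grey = np.random.rand(h, w)
texture = np.random.rand(h, w)   # e.g. a local-variance or GLCM response
edges = np.random.rand(h, w)     # e.g. Sobel gradient magnitude

features = np.stack([grey, texture, edges], axis=-1).reshape(-1, 3)
fused = PCA(n_components=1).fit_transform(features).reshape(h, w)
# 'fused' replaces the single grey-scale feature in the EnFCM distance metric
print(fused.shape)  # (64, 64)
```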
To improve traffic scheduling capability in operator data center networks, an analysis, prediction, and online scheduling mechanism (APOS) is designed that considers both the network structure and the network traffic in the operator data center. The Fibonacci tree optimization algorithm (FTO) is embedded into both the analysis-prediction and online-scheduling stages, and an FTO traffic scheduling strategy is proposed. By exploiting FTO's global-optimality and multi-modal optimization advantages, the optimal traffic scheduling solution and many suboptimal solutions can be obtained. Experimental results show that the FTO traffic scheduling strategy schedules traffic in data center networks reasonably and effectively improves load balancing in the operator data center network.
As complex optimization problems become increasingly prominent, metaheuristic algorithms have demonstrated unique advantages in solving high-dimensional, nonlinear problems. However, the traditional Sparrow Search Algorithm (SSA) suffers from limited global search capability, insufficient population diversity, and slow convergence, which often lead to premature stagnation in local optima. Despite various enhanced versions, effectively balancing exploration and exploitation remains an unsolved challenge. To address these problems, this study proposes a multi-strategy collaborative improved SSA that systematically integrates four complementary strategies: (1) the Northern Goshawk Optimization (NGO) mechanism enhances global exploration through guided prey-attacking dynamics; (2) an adaptive t-distribution mutation strategy balances the transition between exploration and exploitation via dynamic adjustment of the degrees of freedom; (3) a dual chaotic initialization method (Bernoulli and Sinusoidal maps) increases population diversity and distribution uniformity; and (4) an elite retention strategy maintains solution quality and prevents degradation during iterations. These strategies cooperate synergistically, forming a tightly coupled optimization framework that significantly improves search efficiency and robustness; the resulting algorithm is named NTSSA, a novel multi-strategy enhanced Sparrow Search Algorithm with Northern Goshawk Optimization and adaptive t-distribution for global optimization. Extensive experiments on the CEC2005 benchmark set demonstrate that NTSSA achieves theoretically optimal accuracy on unimodal functions and improves global optimum discovery on multimodal functions by 2–5 orders of magnitude. Compared with SSA, GWO, ISSA, and CSSOA, NTSSA improves solution accuracy by up to 14.3% (F8) and 99.8% (F12) while accelerating convergence by approximately 1.5–2×. The Wilcoxon rank-sum test (p<0.05) indicates that NTSSA has a statistically significant performance advantage. Theoretical analysis shows that the synergy among adaptive mutation, chaos-based diversification, and elite preservation ensures both high convergence accuracy and global stability. This work bridges a key research gap in SSA by realizing a coordinated optimization mechanism between exploration and exploitation, offering a robust and efficient solution framework for complex high-dimensional problems in intelligent computation and engineering design.
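Strategies (2) and (3) have standard forms that can be sketched directly; the degrees-of-freedom schedule below is a common choice rather than NTSSA's exact one, and the map parameters are typical values.

```python
import numpy as np

def bernoulli_map(n, lam=0.4, x0=0.3):
    """Bernoulli (shift) map sequence in (0, 1), a common chaotic initializer."""
    x = np.empty(n); x[0] = x0
    for i in range(1, n):
        x[i] = x[i-1] / (1 - lam) if x[i-1] <= 1 - lam else (x[i-1] - (1 - lam)) / lam
    return x

def sinusoidal_map(n, a=2.3, x0=0.7):
    """Sinusoidal map sequence, the second chaotic initializer."""
    x = np.empty(n); x[0] = x0
    for i in range(1, n):
        x[i] = a * x[i-1] ** 2 * np.sin(np.pi * x[i-1])
    return x

def t_mutation(x, iteration, rng=np.random.default_rng()):
    """Adaptive t-distribution mutation: degrees of freedom grow with the
    iteration, so early steps are heavy-tailed (exploration) and late steps
    approach Gaussian (exploitation). df = iteration is a common schedule."""
    df = max(1, iteration)
    return x + x * rng.standard_t(df, size=x.shape)

pop = 20 + 10 * bernoulli_map(30)   # 30 sparrows initialized on [20, 30]
print(t_mutation(pop, iteration=5).shape)  # (30,)
```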
基金funded by the Yangtze River Delta Science and Technology Innovation Community Joint Research Project(2023CSJGG1600)the Natural Science Foundation of Anhui Province(2208085MF173)Wuhu“ChiZhu Light”Major Science and Technology Project(2023ZD01,2023ZD03).
文摘As the number and complexity of sensors in autonomous vehicles continue to rise,multimodal fusionbased object detection algorithms are increasingly being used to detect 3D environmental information,significantly advancing the development of perception technology in autonomous driving.To further promote the development of fusion algorithms and improve detection performance,this paper discusses the advantages and recent advancements of multimodal fusion-based object detection algorithms.Starting fromsingle-modal sensor detection,the paper provides a detailed overview of typical sensors used in autonomous driving and introduces object detection methods based on images and point clouds.For image-based detection methods,they are categorized into monocular detection and binocular detection based on different input types.For point cloud-based detection methods,they are classified into projection-based,voxel-based,point cluster-based,pillar-based,and graph structure-based approaches based on the technical pathways for processing point cloud features.Additionally,multimodal fusion algorithms are divided into Camera-LiDAR fusion,Camera-Radar fusion,Camera-LiDAR-Radar fusion,and other sensor fusion methods based on the types of sensors involved.Furthermore,the paper identifies five key future research directions in this field,aiming to provide insights for researchers engaged in multimodal fusion-based object detection algorithms and to encourage broader attention to the research and application of multimodal fusion-based object detection.
基金partially supported by the National Natural Science Foundation of China under Grants 62471493 and 62402257(for conceptualization and investigation)partially supported by the Natural Science Foundation of Shandong Province,China under Grants ZR2023LZH017,ZR2024MF066,and 2023QF025(for formal analysis and validation)+1 种基金partially supported by the Open Foundation of Key Laboratory of Computing Power Network and Information Security,Ministry of Education,Qilu University of Technology(Shandong Academy of Sciences)under Grant 2023ZD010(for methodology and model design)partially supported by the Russian Science Foundation(RSF)Project under Grant 22-71-10095-P(for validation and results verification).
文摘To address the challenge of missing modal information in entity alignment and to mitigate information loss or bias arising frommodal heterogeneity during fusion,while also capturing shared information acrossmodalities,this paper proposes a Multi-modal Pre-synergistic Entity Alignmentmodel based on Cross-modalMutual Information Strategy Optimization(MPSEA).The model first employs independent encoders to process multi-modal features,including text,images,and numerical values.Next,a multi-modal pre-synergistic fusion mechanism integrates graph structural and visual modal features into the textual modality as preparatory information.This pre-fusion strategy enables unified perception of heterogeneous modalities at the model’s initial stage,reducing discrepancies during the fusion process.Finally,using cross-modal deep perception reinforcement learning,the model achieves adaptive multilevel feature fusion between modalities,supporting learningmore effective alignment strategies.Extensive experiments on multiple public datasets show that the MPSEA method achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset,and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset,compared to existing state-of-the-art methods.These results confirm the effectiveness of the proposed model.
基金funded by Research Project,grant number BHQ090003000X03.
文摘Multi-modal Named Entity Recognition(MNER)aims to better identify meaningful textual entities by integrating information from images.Previous work has focused on extracting visual semantics at a fine-grained level,or obtaining entity related external knowledge from knowledge bases or Large Language Models(LLMs).However,these approaches ignore the poor semantic correlation between visual and textual modalities in MNER datasets and do not explore different multi-modal fusion approaches.In this paper,we present MMAVK,a multi-modal named entity recognition model with auxiliary visual knowledge and word-level fusion,which aims to leverage the Multi-modal Large Language Model(MLLM)as an implicit knowledge base.It also extracts vision-based auxiliary knowledge from the image formore accurate and effective recognition.Specifically,we propose vision-based auxiliary knowledge generation,which guides the MLLM to extract external knowledge exclusively derived from images to aid entity recognition by designing target-specific prompts,thus avoiding redundant recognition and cognitive confusion caused by the simultaneous processing of image-text pairs.Furthermore,we employ a word-level multi-modal fusion mechanism to fuse the extracted external knowledge with each word-embedding embedded from the transformerbased encoder.Extensive experimental results demonstrate that MMAVK outperforms or equals the state-of-the-art methods on the two classical MNER datasets,even when the largemodels employed have significantly fewer parameters than other baselines.
基金supported in part by the National Natural Science Foundation of China under Grant No.62271453in part by the National Natural Science Foundation of China No.62101512+2 种基金in part by the Central Support for Local Projects under Grant No.YDZJSX2024D031in part by Project supported by the Shanxi Provincial Foundation for Leaders of Disciplines in Science,China under Grant No.2024Q022in part by Shanxi Province Patent Conversion Special Plan Funding Projects under Grant No.202405004。
文摘In the deployment of wireless networks in two-dimensional outdoor campus spaces,aiming at the problem of efficient coverage of the monitoring area by limited number of access points(APs),this paper proposes a deployment method of multi-objective optimization with virtual force fusion bat algorithm(VFBA)using the classical four-node regular distribution as an entry point.The introduction of Lévy flight strategy for bat position updating helps to maintain the population diversity,reduce the premature maturity problem caused by population convergence,avoid the over aggregation of individuals in the local optimal region,and enhance the superiority in global search;the virtual force algorithm simulates the attraction and repulsion between individuals,which enables individual bats to precisely locate the optimal solution within the search space.At the same time,the fusion effect of virtual force prompts the bat individuals to move faster to the potential optimal solution.To validate the effectiveness of the fusion algorithm,the benchmark test function is selected for simulation testing.Finally,the simulation result verifies that the VFBA achieves superior coverage and effectively reduces node redundancy compared to the other three regular layout methods.The VFBA also shows better coverage results when compared to other optimization algorithms.
基金Supported by the EDD of China(No.80912020104)the Science and Technology Commission of Shanghai Municipality(No.22ZR1427700 and No.23692106900).
文摘The traditional A^(*)algorithm exhibits a low efficiency in the path planning of unmanned surface vehicles(USVs).In addition,the path planned presents numerous redundant inflection waypoints,and the security is low,which is not conducive to the control of USV and also affects navigation safety.In this paper,these problems were addressed through the following improvements.First,the path search angle and security were comprehensively considered,and a security expansion strategy of nodes based on the 5×5 neighborhood was proposed.The A^(*)algorithm search neighborhood was expanded from 3×3 to 5×5,and safe nodes were screened out for extension via the node security expansion strategy.This algorithm can also optimize path search angles while improving path security.Second,the distance from the current node to the target node was introduced into the heuristic function.The efficiency of the A^(*)algorithm was improved,and the path was smoothed using the Floyd algorithm.For the dynamic adjustment of the weight to improve the efficiency of DWA,the distance from the USV to the target point was introduced into the evaluation function of the dynamic-window approach(DWA)algorithm.Finally,combined with the local target point selection strategy,the optimized DWA algorithm was performed for local path planning.The experimental results show the smooth and safe path planned by the fusion algorithm,which can successfully avoid dynamic obstacles and is effective and feasible in path planning for USVs.
基金supported by the National Natural Science Foundation of China under Grant Nos.U21A20464,62066005Innovation Project of Guangxi Graduate Education under Grant No.YCSW2024313.
文摘Wireless sensor network deployment optimization is a classic NP-hard problem and a popular topic in academic research.However,the current research on wireless sensor network deployment problems uses overly simplistic models,and there is a significant gap between the research results and actual wireless sensor networks.Some scholars have now modeled data fusion networks to make them more suitable for practical applications.This paper will explore the deployment problem of a stochastic data fusion wireless sensor network(SDFWSN),a model that reflects the randomness of environmental monitoring and uses data fusion techniques widely used in actual sensor networks for information collection.The deployment problem of SDFWSN is modeled as a multi-objective optimization problem.The network life cycle,spatiotemporal coverage,detection rate,and false alarm rate of SDFWSN are used as optimization objectives to optimize the deployment of network nodes.This paper proposes an enhanced multi-objective mongoose optimization algorithm(EMODMOA)to solve the deployment problem of SDFWSN.First,to overcome the shortcomings of the DMOA algorithm,such as its low convergence and tendency to get stuck in a local optimum,an encircling and hunting strategy is introduced into the original algorithm to propose the EDMOA algorithm.The EDMOA algorithm is designed as the EMODMOA algorithm by selecting reference points using the K-Nearest Neighbor(KNN)algorithm.To verify the effectiveness of the proposed algorithm,the EMODMOA algorithm was tested at CEC 2020 and achieved good results.In the SDFWSN deployment problem,the algorithm was compared with the Non-dominated Sorting Genetic Algorithm II(NSGAII),Multiple Objective Particle Swarm Optimization(MOPSO),Multi-Objective Evolutionary Algorithm based on Decomposition(MOEA/D),and Multi-Objective Grey Wolf Optimizer(MOGWO).By comparing and analyzing the performance evaluation metrics and optimization results of the objective functions of the multi-objective algorithms,the algorithm outperforms the other algorithms in the SDFWSN deployment results.To better demonstrate the superiority of the algorithm,simulations of diverse test cases were also performed,and good results were obtained.
基金supported by the Natural Science Foundation of Liaoning Province(Grant No.2023-MSBA-070)the National Natural Science Foundation of China(Grant No.62302086).
文摘Multi-modal fusion technology gradually become a fundamental task in many fields,such as autonomous driving,smart healthcare,sentiment analysis,and human-computer interaction.It is rapidly becoming the dominant research due to its powerful perception and judgment capabilities.Under complex scenes,multi-modal fusion technology utilizes the complementary characteristics of multiple data streams to fuse different data types and achieve more accurate predictions.However,achieving outstanding performance is challenging because of equipment performance limitations,missing information,and data noise.This paper comprehensively reviews existing methods based onmulti-modal fusion techniques and completes a detailed and in-depth analysis.According to the data fusion stage,multi-modal fusion has four primary methods:early fusion,deep fusion,late fusion,and hybrid fusion.The paper surveys the three majormulti-modal fusion technologies that can significantly enhance the effect of data fusion and further explore the applications of multi-modal fusion technology in various fields.Finally,it discusses the challenges and explores potential research opportunities.Multi-modal tasks still need intensive study because of data heterogeneity and quality.Preserving complementary information and eliminating redundant information between modalities is critical in multi-modal technology.Invalid data fusion methods may introduce extra noise and lead to worse results.This paper provides a comprehensive and detailed summary in response to these challenges.
基金supported in part by the Technology Innovation 2030 under Grant 2022ZD0211700.
文摘For the analysis of spinal and disc diseases,automated tissue segmentation of the lumbar spine is vital.Due to the continuous and concentrated location of the target,the abundance of edge features,and individual differences,conventional automatic segmentation methods perform poorly.Since the success of deep learning in the segmentation of medical images has been shown in the past few years,it has been applied to this task in a number of ways.The multi-scale and multi-modal features of lumbar tissues,however,are rarely explored by methodologies of deep learning.Because of the inadequacies in medical images availability,it is crucial to effectively fuse various modes of data collection for model training to alleviate the problem of insufficient samples.In this paper,we propose a novel multi-modality hierarchical fusion network(MHFN)for improving lumbar spine segmentation by learning robust feature representations from multi-modality magnetic resonance images.An adaptive group fusion module(AGFM)is introduced in this paper to fuse features from various modes to extract cross-modality features that could be valuable.Furthermore,to combine features from low to high levels of cross-modality,we design a hierarchical fusion structure based on AGFM.Compared to the other feature fusion methods,AGFM is more effective based on experimental results on multi-modality MR images of the lumbar spine.To further enhance segmentation accuracy,we compare our network with baseline fusion structures.Compared to the baseline fusion structures(input-level:76.27%,layer-level:78.10%,decision-level:79.14%),our network was able to segment fractured vertebrae more accurately(85.05%).
基金funded by the National Natural Science Foundation of China(61991413)the China Postdoctoral Science Foundation(2019M651142)+1 种基金the Natural Science Foundation of Liaoning Province(2021-KF-12-07)the Natural Science Foundations of Liaoning Province(2023-MS-322).
文摘Fusing hand-based features in multi-modal biometric recognition enhances anti-spoofing capabilities.Additionally,it leverages inter-modal correlation to enhance recognition performance.Concurrently,the robustness and recognition performance of the system can be enhanced through judiciously leveraging the correlation among multimodal features.Nevertheless,two issues persist in multi-modal feature fusion recognition:Firstly,the enhancement of recognition performance in fusion recognition has not comprehensively considered the inter-modality correlations among distinct modalities.Secondly,during modal fusion,improper weight selection diminishes the salience of crucial modal features,thereby diminishing the overall recognition performance.To address these two issues,we introduce an enhanced DenseNet multimodal recognition network founded on feature-level fusion.The information from the three modalities is fused akin to RGB,and the input network augments the correlation between modes through channel correlation.Within the enhanced DenseNet network,the Efficient Channel Attention Network(ECA-Net)dynamically adjusts the weight of each channel to amplify the salience of crucial information in each modal feature.Depthwise separable convolution markedly reduces the training parameters and further enhances the feature correlation.Experimental evaluations were conducted on four multimodal databases,comprising six unimodal databases,including multispectral palmprint and palm vein databases from the Chinese Academy of Sciences.The Equal Error Rates(EER)values were 0.0149%,0.0150%,0.0099%,and 0.0050%,correspondingly.In comparison to other network methods for palmprint,palm vein,and finger vein fusion recognition,this approach substantially enhances recognition performance,rendering it suitable for high-security environments with practical applicability.The experiments in this article utilized amodest sample database comprising 200 individuals.The subsequent phase involves preparing for the extension of the method to larger databases.
基金This research was funded by the General Project of Philosophy and Social Science of Heilongjiang Province,Grant Number:20SHB080.
文摘In recent years,how to efficiently and accurately identify multi-model fake news has become more challenging.First,multi-model data provides more evidence but not all are equally important.Secondly,social structure information has proven to be effective in fake news detection and how to combine it while reducing the noise information is critical.Unfortunately,existing approaches fail to handle these problems.This paper proposes a multi-model fake news detection framework based on Tex-modal Dominance and fusing Multiple Multi-model Cues(TD-MMC),which utilizes three valuable multi-model clues:text-model importance,text-image complementary,and text-image inconsistency.TD-MMC is dominated by textural content and assisted by image information while using social network information to enhance text representation.To reduce the irrelevant social structure’s information interference,we use a unidirectional cross-modal attention mechanism to selectively learn the social structure’s features.A cross-modal attention mechanism is adopted to obtain text-image cross-modal features while retaining textual features to reduce the loss of important information.In addition,TD-MMC employs a new multi-model loss to improve the model’s generalization ability.Extensive experiments have been conducted on two public real-world English and Chinese datasets,and the results show that our proposed model outperforms the state-of-the-art methods on classification evaluation metrics.
基金supported by the National Natural Science Foundation of China(71171038)
文摘A memetic algorithm (MA) for a multi-mode resourceconstrained project scheduling problem (MRCPSP) is proposed. We use a new fitness function and two very effective local search procedures in the proposed MA. The fitness function makes use of a mechanism called "strategic oscillation" to make the search process have a higher probability to visit solutions around a "feasible boundary". One of the local search procedures aims at improving the lower bound of project makespan to be less than a known upper bound, and another aims at improving a solution of an MRCPSP instance accepting infeasible solutions based on the new fitness function in the search process. A detailed computational experiment is set up using instances from the problem instance library PSPLIB. Computational results show that the proposed MA is very competitive with the state-of-the-art algorithms. The MA obtains improved solutions for one instance of set J30.
基金funded by the Jilin Provincial Department of Science and Technology,grant number 20230101208JC.
文摘Fault diagnosis of rolling bearings is crucial for ensuring the stable operation of mechanical equipment and production safety in industrial environments.However,due to the nonlinearity and non-stationarity of collected vibration signals,single-modal methods struggle to capture fault features fully.This paper proposes a rolling bearing fault diagnosis method based on multi-modal information fusion.The method first employs the Hippopotamus Optimization Algorithm(HO)to optimize the number of modes in Variational Mode Decomposition(VMD)to achieve optimal modal decomposition performance.It combines Convolutional Neural Networks(CNN)and Gated Recurrent Units(GRU)to extract temporal features from one-dimensional time-series signals.Meanwhile,the Markovian Transition Field(MTF)is used to transform one-dimensional signals into two-dimensional images for spatial feature mining.Through visualization techniques,the effectiveness of generated images from different parameter combinations is compared to determine the optimal parameter configuration.A multi-modal network(GSTCN)is constructed by integrating Swin-Transformer and the Convolutional Block Attention Module(CBAM),where the attention module is utilized to enhance fault features.Finally,the fault features extracted from different modalities are deeply fused and fed into a fully connected layer to complete fault classification.Experimental results show that the GSTCN model achieves an average diagnostic accuracy of 99.5%across three datasets,significantly outperforming existing comparison methods.This demonstrates that the proposed model has high diagnostic precision and good generalization ability,providing an efficient and reliable solution for rolling bearing fault diagnosis.
Funding: funded by Research Project, grant number BHQ090003000X03.
Abstract: Multi-modal knowledge graph completion (MMKGC) aims to complete missing entities or relations in multi-modal knowledge graphs, thereby discovering previously unknown triples. Owing to the continuous growth of data and knowledge and the limitations of data sources, the visual knowledge within knowledge graphs is generally of low quality, and some entities lack the visual modality altogether. Nevertheless, previous MMKGC studies have focused primarily on facilitating modality interaction and fusion while neglecting low modality quality and missing modalities. Mainstream MMKGC models therefore only use pre-trained visual encoders to extract features and transfer the semantic information to joint embeddings through modal fusion, which inevitably suffers from error propagation and increased uncertainty. To address these problems, we propose a Multi-modal knowledge graph Completion model based on Super-resolution and Detailed description generation (MMCSD). Specifically, we leverage a pre-trained residual network to enhance the resolution and improve the quality of the visual modality. Moreover, we design multi-level visual semantic extraction and entity description generation, further extracting entity semantics from structural triples and visual images. Meanwhile, we train a variational multi-modal auto-encoder and utilize a pre-trained multi-modal language model to complement missing visual features. We conducted experiments on FB15K-237 and DB13K, and the results show that MMCSD performs MMKGC effectively and achieves state-of-the-art performance.
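As one way to picture the super-resolution step, the sketch below implements a small EDSR-style residual network that upscales low-quality entity images before visual features are extracted. The depth, width, and scale factor are illustrative assumptions; the pre-trained residual network used in the paper may differ.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.body(x)  # residual connection preserves low-level detail

class SimpleSRNet(nn.Module):
    def __init__(self, ch: int = 64, n_blocks: int = 8, scale: int = 2):
        super().__init__()
        self.head = nn.Conv2d(3, ch, 3, padding=1)
        self.body = nn.Sequential(*[ResBlock(ch) for _ in range(n_blocks)])
        # PixelShuffle upsampling: ch * scale^2 channels -> ch at scale x resolution
        self.tail = nn.Sequential(
            nn.Conv2d(ch, ch * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),
            nn.Conv2d(ch, 3, 3, padding=1))

    def forward(self, lr_img):
        x = self.head(lr_img)
        return self.tail(x + self.body(x))

sr = SimpleSRNet()(torch.randn(1, 3, 64, 64))  # -> (1, 3, 128, 128)
```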
Funding: supported by the Deanship of Research and Graduate Studies at King Khalid University under Small Research Project grant number RGP1/139/45.
Abstract: Integrating multiple medical imaging techniques, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT), Positron Emission Tomography (PET), and ultrasound, provides a comprehensive view of a patient's health status. Each modality contributes unique diagnostic insights, enhancing the overall assessment of the patient's condition. Nevertheless, amalgamating data from multiple modalities is difficult because of disparities in resolution, data collection methods, and noise levels. While traditional models such as Convolutional Neural Networks (CNNs) excel in single-modality tasks, they struggle with multi-modal complexity because they lack the capacity to model global relationships. This research presents a novel transformer-based system for examining multi-modal medical imagery. The framework employs self-attention and cross-attention mechanisms to synchronize and integrate features across modalities, and it is resilient to variations in noise and image quality, making it adaptable for real-time clinical use. To address the computational hurdles of transformer models, particularly for real-time clinical applications in resource-constrained environments, several optimization techniques were integrated to boost scalability and efficiency. A streamlined transformer architecture was adopted to minimize the computational load while maintaining model effectiveness; model pruning, quantization, and knowledge distillation were applied to reduce the parameter count and enhance inference speed; and efficient attention mechanisms such as linear or sparse attention were employed to alleviate the substantial memory and processing requirements of standard self-attention. For deployment, hardware-aware acceleration strategies, including TensorRT and ONNX-based model compression, ensure efficient execution on edge devices. These optimizations allow the approach to function effectively in real-time clinical settings, even in environments with limited resources. Future research directions include integrating non-imaging data to facilitate personalized treatment and further improving computational efficiency for resource-limited deployments. This study highlights the transformative potential of transformer models in multi-modal medical imaging, offering improvements in diagnostic accuracy and patient care outcomes.
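A minimal PyTorch sketch of the cross-attention alignment step is given below, assuming patch-token features of equal embedding size from two modalities (e.g., MRI and PET). The shapes, names, and bidirectional residual design are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class BidirectionalCrossAttention(nn.Module):
    def __init__(self, d: int = 384, heads: int = 6):
        super().__init__()
        self.a2b = nn.MultiheadAttention(d, heads, batch_first=True)
        self.b2a = nn.MultiheadAttention(d, heads, batch_first=True)
        self.norm_a = nn.LayerNorm(d)
        self.norm_b = nn.LayerNorm(d)

    def forward(self, feats_a, feats_b):
        # Each modality queries the other; residual additions keep the
        # original modality-specific information.
        a_enh, _ = self.a2b(feats_a, feats_b, feats_b)
        b_enh, _ = self.b2a(feats_b, feats_a, feats_a)
        return self.norm_a(feats_a + a_enh), self.norm_b(feats_b + b_enh)

mri = torch.randn(1, 196, 384)  # 14x14 MRI patch tokens
pet = torch.randn(1, 196, 384)  # 14x14 PET patch tokens
mri_f, pet_f = BidirectionalCrossAttention()(mri, pet)
```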
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 62071315 and 62271336).
Abstract: The multi-modal characteristics of mineral particles play a pivotal role in improving classification accuracy, which is critical for a profound understanding of the Earth's composition and for the effective exploitation and utilization of its resources. However, existing methods for classifying mineral particles do not fully utilize these multi-modal features, limiting classification accuracy. Furthermore, when conventional multi-modal image classification methods are applied to plane-polarized and cross-polarized sequence images of mineral particles, they suffer from information loss, misaligned features, and difficulty in extracting spatiotemporal features. To address these challenges, we propose a multi-modal mineral particle polarization image classification network (MMGC-Net) for precise mineral particle classification. MMGC-Net first employs a two-dimensional (2D) backbone network with shared parameters to extract features from the two types of polarized images, ensuring feature alignment. A cross-polarized intra-modal feature fusion module then refines spatiotemporal features from the extracted cross-polarized sequence-image features. Finally, an inter-modal feature fusion module integrates the two types of modal features to enhance classification precision. Quantitative and qualitative experiments indicate that, compared with current state-of-the-art multi-modal image classification methods, MMGC-Net is markedly superior in mineral particle multi-modal feature learning and on four classification evaluation metrics, and it is more stable than existing models.
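The shared-parameter backbone idea can be sketched as follows: a single encoder (here an untrained ResNet-18, an illustrative assumption) embeds both the plane-polarized image and each frame of the cross-polarized rotation sequence, so the two feature spaces stay aligned by construction.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class SharedBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        base = resnet18(weights=None)
        self.encoder = nn.Sequential(*list(base.children())[:-1])  # drop fc head

    def forward(self, plane_img, cross_seq):
        # plane_img: (batch, 3, H, W); cross_seq: (batch, time, 3, H, W)
        b, t = cross_seq.shape[:2]
        plane_feat = self.encoder(plane_img).flatten(1)               # (b, 512)
        cross_feat = self.encoder(cross_seq.flatten(0, 1)).flatten(1) # same weights
        cross_feat = cross_feat.view(b, t, -1)                        # (b, t, 512)
        return plane_feat, cross_feat

plane = torch.randn(2, 3, 224, 224)
cross = torch.randn(2, 8, 3, 224, 224)  # e.g., 8 rotation angles
pf, cf = SharedBackbone()(plane, cross)
```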
Abstract: This paper proposes SW-YOLO (StarNet Weighted-Conv YOLO), a lightweight human pose estimation network for edge devices. Current mainstream pose estimation algorithms are computationally inefficient and capture features poorly for complex poses and occlusion scenarios. This work introduces a lightweight backbone that integrates WConv (Weighted Convolution) and StarNet modules to address these issues. Leveraging StarNet's strengths in multi-level feature fusion and long-range dependency modeling, the architecture enhances the model's spatial perception of human joint structures and its integration of contextual information, significantly improving robustness in complex scenarios involving occlusion and deformation. In addition, the WConv operation, based on weight recalibration and receptive field optimization, dynamically adjusts feature importance during convolution, reducing redundant computation while maintaining or enhancing feature representation at very low computational cost. Consequently, SW-YOLO substantially reduces model complexity and inference latency while preserving high accuracy, significantly outperforming existing lightweight networks.
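Since WConv is described only at a high level here, the sketch below shows one plausible reading of "weight recalibration": a standard convolution followed by a squeeze-and-excitation-style gate that dynamically reweights channel importance. The actual WConv design in SW-YOLO may differ.

```python
import torch
import torch.nn as nn

class WConv(nn.Module):
    def __init__(self, c_in: int, c_out: int, k: int = 3, reduction: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, padding=k // 2)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(c_out, c_out // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out // reduction, c_out, 1), nn.Sigmoid())

    def forward(self, x):
        y = self.conv(x)
        return y * self.gate(y)  # per-channel importance weights in [0, 1]

out = WConv(64, 128)(torch.randn(1, 64, 56, 56))  # -> (1, 128, 56, 56)
```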
Abstract: To address the difficulty of fusing multi-mode sensor data from complex industrial machinery, an adaptive deep coupling convolutional auto-encoder (ADCCAE) fusion method was proposed. First, the multi-mode features extracted synchronously by the CCAE were stacked and fed to multi-channel convolution layers for fusion. Then, the fused data was passed to fully connected layers for compression and fed to a Softmax module for classification. Finally, the coupling loss function coefficients and the network parameters were optimized adaptively using the gray wolf optimization (GWO) algorithm. Experimental comparisons showed that the proposed ADCCAE fusion model was superior to existing models for multi-mode data fusion.
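The GWO step can be illustrated with the canonical algorithm below; the toy objective stands in for what, in ADCCAE, would be the validation loss of the fusion network under a given set of coupling-loss coefficients and hyperparameters.

```python
import numpy as np

def gwo(objective, dim, n_wolves=20, n_iters=100, lb=-1.0, ub=1.0, seed=0):
    """Canonical grey wolf optimization: wolves are pulled toward the three
    best solutions (alpha, beta, delta) with a coefficient that decays from
    2 to 0, shifting the search from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    wolves = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(n_iters):
        fitness = np.apply_along_axis(objective, 1, wolves)
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2.0 * (1.0 - t / n_iters)  # linearly decreasing coefficient
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3.0, lb, ub)  # average of the three pulls
    return wolves[np.argmin(np.apply_along_axis(objective, 1, wolves))]

best = gwo(lambda x: np.sum(x ** 2), dim=4)  # toy objective for illustration
```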
Funding: supported by the National Natural Science Foundation of China (No. 61761027) and the Gansu Young Doctor's Fund for Higher Education Institutions (No. 2021QB-053).
Abstract: The traditional EnFCM (Enhanced Fuzzy C-Means) algorithm considers only grey-scale features in image segmentation, giving unsatisfactory results when used to segment and extract woodland from remote sensing images. An EnFCM remote sensing forest land extraction method based on PCA multi-feature fusion was therefore proposed. Firstly, histogram equalization was applied to improve image contrast. Secondly, the texture and edge features of the image were extracted, and a multi-feature fused pixel image was generated using PCA. The fused feature, rather than a single grey-scale feature, was then used as a constraint to measure the differences between pixels. Finally, an improved feature distance metric computed the similarity between pixels and cluster centers to complete the clustering segmentation. Experimental results showed an error between 1.5% and 4.0% compared with forest areas hand-delineated by experts, demonstrating highly accurate segmentation and extraction.
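The PCA fusion step might look like the following sketch: grey level, local texture (variance), and edge strength (Sobel magnitude) are stacked per pixel and projected onto the first principal component to form the fused feature image. The window size and feature choices are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from sklearn.decomposition import PCA

def fused_feature_image(gray: np.ndarray, win: int = 5) -> np.ndarray:
    texture = ndimage.generic_filter(gray, np.var, size=win)  # local variance
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    edges = np.hypot(gx, gy)                                  # edge magnitude
    # Stack per-pixel features and keep the first principal component.
    feats = np.stack([gray, texture, edges], axis=-1).reshape(-1, 3)
    fused = PCA(n_components=1).fit_transform(feats)
    return fused.reshape(gray.shape)

img = fused_feature_image(np.random.rand(128, 128))  # -> (128, 128) fused image
```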
Funding: supported by the National Natural Science Foundation of China (No. 62163036).
Abstract: To improve traffic scheduling in operator data center networks, an analysis prediction and online scheduling mechanism (APOS) is designed that considers both the network structure and the network traffic of the operator data center. The Fibonacci tree optimization algorithm (FTO) is embedded into both the analysis-prediction and online-scheduling stages, yielding the FTO traffic scheduling strategy. By exploiting FTO's global-optimization and multi-modal optimization capabilities, the optimal traffic scheduling solution and many suboptimal solutions can be obtained. Experimental results show that the FTO traffic scheduling strategy schedules traffic in data center networks reasonably and effectively improves load balancing in the operator data center network.
Abstract: As complex optimization problems become increasingly prominent, metaheuristic algorithms have demonstrated unique advantages in solving high-dimensional, nonlinear problems. However, the traditional Sparrow Search Algorithm (SSA) suffers from limited global search capability, insufficient population diversity, and slow convergence, often stagnating prematurely in local optima. Despite various enhanced versions, effectively balancing exploration and exploitation remains an open challenge. To address these problems, this study proposes a multi-strategy collaborative improved SSA that systematically integrates four complementary strategies: (1) the Northern Goshawk Optimization (NGO) mechanism enhances global exploration through guided prey-attacking dynamics; (2) an adaptive t-distribution mutation strategy balances the transition between exploration and exploitation by dynamically adjusting the degrees of freedom; (3) a dual chaotic initialization method (Bernoulli and sinusoidal maps) increases population diversity and distribution uniformity; and (4) an elite retention strategy maintains solution quality and prevents degradation across iterations. These strategies cooperate synergistically in a tightly coupled optimization framework that significantly improves search efficiency and robustness; the resulting algorithm is named NTSSA, a novel multi-strategy enhanced sparrow search algorithm with Northern Goshawk Optimization and adaptive t-distribution for global optimization. Extensive experiments on the CEC2005 benchmark set demonstrate that NTSSA achieves the theoretical optimum on unimodal functions and improves global optimum discovery on multimodal functions by 2–5 orders of magnitude. Compared with SSA, GWO, ISSA, and CSSOA, NTSSA improves solution accuracy by up to 14.3% (F8) and 99.8% (F12) while accelerating convergence by approximately 1.5–2×. The Wilcoxon rank-sum test (p < 0.05) indicates a statistically significant performance advantage. Theoretical analysis shows that the synergy among adaptive mutation, chaos-based diversification, and elite preservation ensures both high convergence accuracy and global stability. This work bridges a key research gap in SSA by realizing a coordinated mechanism for balancing exploration and exploitation, offering a robust and efficient framework for complex high-dimensional problems in intelligent computation and engineering design.
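Two of the four strategies are concrete enough to sketch. Below, dual chaotic initialization draws half the population from a Bernoulli map and half from a sinusoidal map, and adaptive t-distribution mutation ties the degrees of freedom to the iteration count, so perturbations shift from heavy-tailed (exploration) toward near-Gaussian (exploitation). Map parameters and the mutation scale are illustrative assumptions, not NTSSA's exact settings.

```python
import numpy as np

def bernoulli_map(n, lam=0.4, x0=0.3):
    x = np.empty(n); x[0] = x0
    for i in range(1, n):  # piecewise-linear Bernoulli shift map on [0, 1]
        x[i] = x[i-1] / (1 - lam) if x[i-1] <= 1 - lam else (x[i-1] - 1 + lam) / lam
    return x

def sinusoidal_map(n, a=2.3, x0=0.7):
    x = np.empty(n); x[0] = x0
    for i in range(1, n):  # sinusoidal chaotic map: x' = a * x^2 * sin(pi * x)
        x[i] = a * x[i-1] ** 2 * np.sin(np.pi * x[i-1])
    return x

def chaotic_population(pop, dim, lb, ub):
    # Half the initial population from each map, scaled into the search bounds.
    half = pop * dim // 2
    seq = np.concatenate([bernoulli_map(half), sinusoidal_map(pop * dim - half)])
    return lb + (ub - lb) * seq.reshape(pop, dim)

def t_mutation(x, iteration, rng=np.random.default_rng()):
    # Degrees of freedom grow with the iteration: Cauchy-like early (df=1),
    # approaching Gaussian as df increases.
    df = max(iteration, 1)
    return x + x * rng.standard_t(df, size=x.shape)

pop = chaotic_population(30, 10, lb=-5.0, ub=5.0)
mutated = t_mutation(pop[0], iteration=3)
```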