Journal Articles
156,729 articles found
1. Robust Symmetry Prediction with Multi-Modal Feature Fusion for Partial Shapes
Authors: Junhua Xi, Kouquan Zheng, Yifan Zhong, Longjiang Li, Zhiping Cai, Jinjing Chen. Intelligent Automation & Soft Computing (SCIE), 2023, Issue 3, pp. 3099-3111 (13 pages)
In geometry processing, symmetry research benefits from the global geometric features of complete shapes, but the shape of an object captured in real-world applications is often incomplete due to limited sensor resolution, a single viewpoint, and occlusion. Different from existing works that predict symmetry from the complete shape, we propose a learning approach for symmetry prediction based on a single RGB-D image. Instead of directly predicting the symmetry from incomplete shapes, our method consists of two modules, i.e., the multi-modal feature fusion module and the detection-by-reconstruction module. Firstly, we build a channel-transformer network (CTN) to extract cross-fusion features from the RGB-D image as the multi-modal feature fusion module, which helps us aggregate features from the color and the depth separately. Then, our self-reconstruction network based on a 3D variational auto-encoder (3D-VAE) takes the global geometric features as input, followed by a symmetry prediction network to detect the symmetry. Our experiments are conducted on three public datasets, ShapeNet, YCB, and ScanNet, and we demonstrate that our method can produce reliable and accurate results.
Keywords: symmetry prediction; multi-modal feature fusion; partial shapes
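The reconstruction branch described above relies on a variational auto-encoder bottleneck. The minimal PyTorch sketch below shows only the generic reparameterization step and KL term that any VAE uses; the layer sizes are illustrative assumptions and this is not the paper's 3D-VAE.

```python
import torch
import torch.nn as nn

class VAEBottleneck(nn.Module):
    def __init__(self, in_dim=512, latent_dim=128):
        super().__init__()
        self.mu = nn.Linear(in_dim, latent_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(in_dim, latent_dim)   # log-variance of q(z|x)

    def forward(self, feats):
        mu, logvar = self.mu(feats), self.logvar(feats)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)          # reparameterization trick
        # KL term that regularizes the latent space toward N(0, I)
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, kl

z, kl = VAEBottleneck()(torch.randn(4, 512))          # 4 global feature vectors
```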
2. Cryptomining Malware Detection Based on Edge Computing-Oriented Multi-Modal Features Deep Learning (Cited by: 2)
Authors: Wenjuan Lian, Guoqing Nie, Yanyan Kang, Bin Jia, Yang Zhang. China Communications (SCIE, CSCD), 2022, Issue 2, pp. 174-185 (12 pages)
In recent years, with the increase in the price of cryptocurrencies, the number of malicious cryptomining software has increased significantly. With their powerful spreading ability, cryptomining malware can unknowingly occupy our resources, harm our interests, and damage legitimate assets. Although current traditional rule-based malware detection methods have a low false alarm rate, they have a relatively low detection rate when faced with a large volume of emerging malware. Even though common machine learning-based or deep learning-based methods have a certain ability to learn and detect unknown malware, the features they learn are single and independent, and cannot be learned adaptively. To address these problems, we propose a deep learning model with multiple inputs of multi-modal features, which can simultaneously accept digital features and image features of different dimensions. The model includes parallel learning of three sub-models and ensemble learning of another specific sub-model. The four sub-models can be processed in parallel on different devices and can be further applied to edge computing environments. The model can adaptively learn multi-modal features and output prediction results. The detection rate of our model is as high as 97.01% and the false alarm rate is only 0.63%. The experimental results prove the advantage and effectiveness of the proposed method.
Keywords: cryptomining malware; multi-modal; ensemble learning; deep learning; edge computing
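The idea of feeding digital features and image features to parallel branches can be illustrated with a small two-branch network. This is a generic sketch under assumed dimensions, not the paper's four-sub-model architecture.

```python
import torch
import torch.nn as nn

class MultiInputDetector(nn.Module):
    """Two parallel branches (numeric features, grayscale image) fused for a
    binary benign/cryptomining decision. All dimensions are illustrative."""
    def __init__(self, num_features=64):
        super().__init__()
        self.num_branch = nn.Sequential(nn.Linear(num_features, 128), nn.ReLU())
        self.img_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 128), nn.ReLU())
        self.head = nn.Linear(256, 2)   # concatenated branch outputs -> 2 classes

    def forward(self, x_num, x_img):
        fused = torch.cat([self.num_branch(x_num), self.img_branch(x_img)], dim=1)
        return self.head(fused)

logits = MultiInputDetector()(torch.randn(8, 64), torch.randn(8, 1, 64, 64))
```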
3. Test method of laser paint removal based on multi-modal feature fusion (Cited by: 1)
Authors: HUANG Hai-peng, HAO Ben-tian, YE De-jun, GAO Hao, LI Liang. Journal of Central South University (SCIE, EI, CAS, CSCD), 2022, Issue 10, pp. 3385-3398 (14 pages)
Laser cleaning is a highly nonlinear physical process, and single-modal (e.g., acoustic or visual) detection of it suffers from poor performance and low utilization of inter-modal information. In this study, a multi-modal feature fusion network model was constructed based on a laser paint removal experiment. The alignment of heterogeneous data under different modalities was solved by combining piecewise aggregate approximation and the Gramian angular field. Moreover, an attention mechanism was introduced to optimize the dual-path network and dense connection network, enabling the sampling characteristics to be extracted and integrated. Consequently, multi-modal discriminant detection of laser paint removal was realized. According to the experimental results, the verification accuracy of the constructed model on the experimental dataset was 99.17%, which is 5.77% higher than the optimal single-modal detection results for laser paint removal. The feature extraction network was optimized by the attention mechanism, and the model accuracy was increased by 3.3%. The results verify the improved classification performance of the constructed multi-modal feature fusion model in detecting laser paint removal, the effective integration of acoustic data and visual image data, and the accurate detection of laser paint removal.
Keywords: laser cleaning; multi-modal fusion; image processing; deep learning
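Piecewise aggregate approximation (PAA) and the Gramian angular field (GAF) mentioned in the abstract are standard transforms for turning a 1-D signal into an image-like input. A minimal NumPy sketch of the summation variant (GASF), with a synthetic signal standing in for an acoustic trace:

```python
import numpy as np

def paa(series, n_segments):
    """Piecewise aggregate approximation: mean over (near-)equal-length segments."""
    return np.array([seg.mean() for seg in np.array_split(series, n_segments)])

def gasf(series):
    """Gramian angular summation field of a 1-D series rescaled to [-1, 1]."""
    x = np.asarray(series, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1     # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1, 1))                  # polar-coordinate angle
    return np.cos(phi[:, None] + phi[None, :])          # cos(phi_i + phi_j)

signal = np.sin(np.linspace(0, 8 * np.pi, 1024))        # stand-in for an acoustic trace
image = gasf(paa(signal, 64))                           # 64 x 64 pseudo-image
```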
4. Adaptive multi-modal feature fusion for far and hard object detection
Authors: LI Yang, GE Hongwei. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2021, Issue 2, pp. 232-241 (10 pages)
In order to solve the difficult detection of far and hard objects caused by the sparseness and insufficient semantic information of LiDAR point clouds, a 3D object detection network with adaptive multi-modal data fusion is proposed, which makes use of multi-neighborhood voxel information and image information. Firstly, an improved ResNet is designed that maintains the structure information of far and hard objects in low-resolution feature maps, which is more suitable for the detection task; meanwhile, the semantics of each image feature map are enhanced by semantic information from all subsequent feature maps. Secondly, multi-neighborhood context information with different receptive field sizes is extracted to compensate for the sparseness of the point cloud, which improves the ability of voxel features to represent the spatial structure and semantic information of objects. Finally, a multi-modal adaptive feature fusion strategy is proposed that uses learnable weights to express the contribution of different modal features to the detection task, and voxel attention further enhances the fused feature expression of effective target objects. The experimental results on the KITTI benchmark show that this method outperforms VoxelNet by remarkable margins, increasing the AP by 8.78% and 5.49% on the medium and hard difficulty levels. Meanwhile, our method achieves greater detection performance than many mainstream multi-modal methods, e.g., exceeding the AP of MVX-Net by 1% on the medium and hard difficulty levels.
Keywords: 3D object detection; adaptive fusion; multi-modal data fusion; attention mechanism; multi-neighborhood features
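The core idea of weighting each modality's contribution with learnable parameters can be sketched in a few lines of PyTorch. This is a generic illustration under assumed channel counts, not the paper's fusion module.

```python
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Fuse voxel and image feature maps with learnable, softmax-normalized
    per-modality weights (generic sketch; channel count is an assumption)."""
    def __init__(self, channels=128):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(2))        # one scalar per modality
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, voxel_feat, image_feat):
        w = torch.softmax(self.logits, dim=0)             # contributions sum to 1
        return self.proj(w[0] * voxel_feat + w[1] * image_feat)

fused = AdaptiveFusion()(torch.randn(2, 128, 50, 50), torch.randn(2, 128, 50, 50))
```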
5. Video-Based Deception Detection with Non-Contact Heart Rate Monitoring and Multi-Modal Feature Selection
Authors: Yanfeng Li, Jincheng Bian, Yiqun Gao, Rencheng Song. Journal of Beijing Institute of Technology (EI, CAS), 2024, Issue 3, pp. 175-185 (11 pages)
Deception detection plays a crucial role in criminal investigation. Videos contain a wealth of information regarding apparent and physiological changes in individuals, and thus can serve as an effective means of deception detection. In this paper, we investigate video-based deception detection considering both apparent visual features, such as eye gaze, head pose and facial action units (AU), and the non-contact heart rate detected by the remote photoplethysmography (rPPG) technique. Multiple wrapper-based feature selection methods combined with the K-nearest neighbor (KNN) and support vector machine (SVM) classifiers are employed to screen the most effective features for deception detection. We evaluate the performance of the proposed method on both a self-collected physiological-assisted visual deception detection (PV3D) dataset and a public Bag-of-Lies (BOL) dataset. Experimental results demonstrate that the SVM classifier with symbiotic organisms search (SOS) feature selection yields the best overall performance, with an area under the curve (AUC) of 83.27% and accuracy (ACC) of 83.33% for PV3D, and an AUC of 71.18% and ACC of 70.33% for BOL. This demonstrates the stability and effectiveness of the proposed method in video-based deception detection tasks.
Keywords: deception detection; apparent visual features; remote photoplethysmography; non-contact heart rate; feature selection
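Wrapper-based feature selection evaluates candidate subsets with the downstream classifier itself. The sketch below uses scikit-learn's greedy sequential search with an SVM on synthetic stand-in data; the paper's symbiotic organisms search is a different (metaheuristic) wrapper, so this is only a minimal illustration of the wrapper principle.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import SequentialFeatureSelector

# Stand-in data: per-sample vectors of gaze / head-pose / AU / rPPG statistics,
# with binary truth-vs-lie labels (purely synthetic).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(120, 40)), rng.integers(0, 2, size=120)

selector = SequentialFeatureSelector(
    SVC(kernel="rbf"), n_features_to_select=10, direction="forward", cv=5)
selector.fit(X, y)
X_selected = selector.transform(X)             # keeps only the 10 retained features
print(np.flatnonzero(selector.get_support()))  # indices of the selected features
```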
6. Tomato Growth Height Prediction Method by Phenotypic Feature Extraction Using Multi-modal Data
Authors: GONG Yu, WANG Ling, ZHAO Rongqiang, YOU Haibo, ZHOU Mo, LIU Jie. 智慧农业(中英文), 2025, Issue 1, pp. 97-110 (14 pages)
[Objective] Accurate prediction of tomato growth height is crucial for optimizing production environments in smart farming. However, current prediction methods predominantly rely on empirical, mechanistic, or learning-based models that utilize either image data or environmental data. These methods fail to fully leverage multi-modal data to comprehensively capture the diverse aspects of plant growth. [Methods] To address this limitation, a two-stage phenotypic feature extraction (PFE) model based on the deep learning algorithms of recurrent neural networks (RNN) and long short-term memory (LSTM) was developed. The model integrated environment and plant information to provide a holistic understanding of the growth process, employed phenotypic and temporal feature extractors to comprehensively capture both types of features, and enabled a deeper understanding of the interaction between tomato plants and their environment, ultimately leading to highly accurate predictions of growth height. [Results and Discussions] The experimental results showed the model's effectiveness: when predicting the next two days based on the past five days, the PFE-based RNN and LSTM models achieved mean absolute percentage errors (MAPE) of 0.81% and 0.40%, respectively, which were significantly lower than the 8.00% MAPE of the large language model (LLM) and the 6.72% MAPE of the Transformer-based model. In longer-term predictions, the 10-day prediction for 4 days ahead and the 30-day prediction for 12 days ahead, the PFE-RNN model continued to outperform the other two baseline models, with MAPE of 2.66% and 14.05%, respectively. [Conclusions] The proposed method, which leverages phenotypic-temporal collaboration, shows great potential for intelligent, data-driven management of tomato cultivation, making it a promising approach for enhancing the efficiency and precision of smart tomato planting management.
Keywords: tomato growth prediction; deep learning; phenotypic feature extraction; multi-modal data; recurrent neural network; long short-term memory; large language model
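The "predict the next two days from the past five days" setup and the MAPE metric can be illustrated with a minimal LSTM regressor in PyTorch. Feature width and hidden size are assumptions; this is not the PFE model itself.

```python
import torch
import torch.nn as nn

class HeightForecaster(nn.Module):
    """Predict a short horizon of plant heights from five days of
    environment + plant features (feature width is an assumption)."""
    def __init__(self, n_features=8, hidden=64, horizon=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x):                      # x: (batch, 5, n_features)
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])                # (batch, horizon)

def mape(pred, target):
    """Mean absolute percentage error, the metric reported in the abstract."""
    return 100.0 * torch.mean(torch.abs((target - pred) / target))

pred = HeightForecaster()(torch.randn(4, 5, 8))
print(mape(pred, torch.rand(4, 2) + 1.0))      # toy targets kept away from zero
```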
7. Efficient Arabic Essay Scoring with Hybrid Models: Feature Selection, Data Optimization, and Performance Trade-Offs
Authors: Mohamed Ezz, Meshrif Alruily, Ayman Mohamed Mostafa, Alaa SAlaerjan, Bader Aldughayfiq, Hisham Allahem, Abdulaziz Shehab. Computers, Materials & Continua, 2026, Issue 1, pp. 2274-2301 (28 pages)
Automated essay scoring (AES) systems have gained significant importance in educational settings, offering a scalable, efficient, and objective method for evaluating student essays. However, developing AES systems for Arabic poses distinct challenges due to the language's complex morphology, diglossia, and the scarcity of annotated datasets. This paper presents a hybrid approach to Arabic AES by combining text-based, vector-based, and embedding-based similarity measures to improve essay scoring accuracy while minimizing the training data required. Using a large Arabic essay dataset categorized into thematic groups, the study conducted four experiments to evaluate the impact of feature selection, data size, and model performance. Experiment 1 established a baseline using a non-machine-learning approach, selecting the top-N correlated features to predict essay scores. The subsequent experiments employed 5-fold cross-validation. Experiment 2 showed that combining embedding-based, text-based, and vector-based features in a Random Forest (RF) model achieved an R2 of 88.92% and an accuracy of 83.3% within a 0.5-point tolerance. Experiment 3 further refined the feature selection process, demonstrating that 19 correlated features yielded optimal results, improving R2 to 88.95%. In Experiment 4, an optimal data-efficiency training approach was introduced, where training data portions increased from 5% to 50%. The study found that using just 10% of the data achieved near-peak performance, with an R2 of 85.49%, emphasizing an effective trade-off between performance and computational cost. These findings highlight the potential of the hybrid approach for developing scalable Arabic AES systems, especially in low-resource environments, addressing linguistic challenges while ensuring efficient data usage.
Keywords: automated essay scoring; text-based features; vector-based features; embedding-based features; feature selection; optimal data efficiency
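The "top-N correlated features followed by a cross-validated Random Forest" pipeline described in the abstract can be sketched with scikit-learn. The data below are synthetic stand-ins; only the general procedure is illustrated, not the paper's feature set.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the similarity-feature matrix and essay scores.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 60))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=500)

# Rank features by absolute Pearson correlation with the score, keep the top 19
corr = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
top_n = np.argsort(-np.abs(corr))[:19]

# 5-fold cross-validated R^2 of a Random Forest on the selected features
scores = cross_val_score(RandomForestRegressor(n_estimators=200, random_state=0),
                         X[:, top_n], y, cv=5, scoring="r2")
print(scores.mean())
```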
8. Regression-Based Face Pose Estimation with Deep Multi-modal Feature Loss
Authors: Yanqiu Wu, Chaoqun Hong, Liang Chen, Zhiqiang Zeng. 国际计算机前沿大会会议论文集, 2020, Issue 1, pp. 534-549 (16 pages)
Image-based face pose estimation tries to estimate the facial direction from 2D images. It provides important information for many face recognition applications. However, it is a difficult task due to complex conditions and appearances. Deep learning methods used in this field have the disadvantage of ignoring the natural structure of human faces. To solve this problem, a framework is proposed in this paper to estimate face poses with regression, based on deep learning and a multi-modal feature loss (M2FL). Different from current loss functions using only a single type of feature, the descriptive power is improved by combining multiple image features. To achieve this, hypergraph-based manifold regularization is applied. In this way, the loss of face pose estimation is reduced. Experimental results on commonly used benchmark datasets demonstrate the performance of M2FL.
Keywords: face pose estimation; deep learning; multi-modal features
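Manifold regularization penalizes features that differ for samples the affinity graph considers similar. The sketch below uses an ordinary graph Laplacian as a stand-in for the paper's hypergraph construction, so it only illustrates the general smoothness term.

```python
import torch

def manifold_regularization(features, weights):
    """Graph-Laplacian smoothness penalty sum_ij w_ij * ||f_i - f_j||^2,
    written as trace(F^T L F); `weights` is a symmetric affinity matrix."""
    degree = torch.diag(weights.sum(dim=1))
    laplacian = degree - weights
    return torch.trace(features.T @ laplacian @ features)

feats = torch.randn(16, 32, requires_grad=True)        # one row per face image
affinity = torch.rand(16, 16)
affinity = 0.5 * (affinity + affinity.T)               # symmetrize the toy graph
loss = manifold_regularization(feats, affinity)
loss.backward()                                        # usable as an auxiliary loss term
```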
9. Detecting Anomalies in FinTech: A Graph Neural Network and Feature Selection Perspective
Authors: Vinh Truong Hoang, Nghia Dinh, Viet-Tuan Le, Kiet Tran-Trung, Bay Nguyen Van, Kittikhun Meethongjan. Computers, Materials & Continua, 2026, Issue 1, pp. 207-246 (40 pages)
The Financial Technology (FinTech) sector has witnessed rapid growth, resulting in increasingly complex and high-volume digital transactions. Although this expansion improves efficiency and accessibility, it also introduces significant vulnerabilities, including fraud, money laundering, and market manipulation. Traditional anomaly detection techniques often fail to capture the relational and dynamic characteristics of financial data. Graph Neural Networks (GNNs), capable of modeling intricate interdependencies among entities, have emerged as a powerful framework for detecting subtle and sophisticated anomalies. However, the high dimensionality and inherent noise of FinTech datasets demand robust feature selection strategies to improve model scalability, performance, and interpretability. This paper presents a comprehensive survey of GNN-based approaches for anomaly detection in FinTech, with an emphasis on the synergistic role of feature selection. We examine the theoretical foundations of GNNs, review state-of-the-art feature selection techniques, analyze their integration with GNNs, and categorize prevalent anomaly types in FinTech applications. In addition, we discuss practical implementation challenges, highlight representative case studies, and propose future research directions to advance the field of graph-based anomaly detection in financial systems.
Keywords: GNN; security; e-commerce; FinTech; anomaly detection; feature selection
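For readers unfamiliar with the GNN building block the survey revolves around, a single graph-convolution propagation step over a toy transaction graph can be written directly in NumPy. This is the standard GCN rule, not any method specific to this survey.

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution step H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adj + np.eye(adj.shape[0])                 # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ features @ weight, 0.0)

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)   # 3 linked accounts
features = np.random.rand(3, 4)                                  # per-account features
hidden = gcn_layer(adj, features, np.random.rand(4, 8))          # aggregated embeddings
```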
10. Clinicopathologic features of SMARCB1/INI1-deficient pancreatic undifferentiated rhabdoid carcinoma: A case report and review of literature
Authors: Wan-Qi Yao, Xin-Yi Ma, Gui-Hua Wang. World Journal of Gastrointestinal Oncology, 2026, Issue 1, pp. 250-262 (13 pages)
BACKGROUND: SMARCB1/INI1-deficient pancreatic undifferentiated rhabdoid carcinoma is a highly aggressive tumor, and spontaneous splenic rupture (SSR) as its presenting manifestation is rarely reported among pancreatic malignancies. CASE SUMMARY: We herein report a rare case of a 59-year-old female who presented with acute left upper quadrant abdominal pain without any history of trauma. Abdominal imaging demonstrated a heterogeneous splenic lesion with hemoperitoneum, raising clinical suspicion of SSR. Emergency laparotomy revealed a pancreatic tumor invading the spleen and left kidney, with associated splenic rupture and dense adhesions, necessitating en bloc resection of the distal pancreas, spleen, and left kidney. Histopathology revealed a biphasic malignancy composed of moderately differentiated pancreatic ductal adenocarcinoma and an undifferentiated carcinoma with rhabdoid morphology and loss of SMARCB1 expression. Immunohistochemical analysis confirmed complete loss of SMARCB1/INI1 in the undifferentiated component, along with a high Ki-67 index (approximately 80%) and CD10 positivity. The ductal adenocarcinoma component retained SMARCB1/INI1 expression and was positive for CK7 and CK-pan. Transitional zones between the two tumor components suggested progressive dedifferentiation and underlying genomic instability. The patient received adjuvant chemotherapy with gemcitabine and nab-paclitaxel and maintained a satisfactory quality of life at the 6-month follow-up. CONCLUSION: This study reports a rare case of SMARCB1/INI1-deficient undifferentiated rhabdoid carcinoma of the pancreas combined with ductal adenocarcinoma, presenting as SSR, an exceptionally uncommon initial manifestation of pancreatic malignancy.
Keywords: d features; Switch/sucrose non-fermentable; Chemotherapy; Case report
11. GSLDWOA: A Feature Selection Algorithm for Intrusion Detection Systems in IIoT
Authors: Wanwei Huang, Huicong Yu, Jiawei Ren, Kun Wang, Yanbu Guo, Lifeng Jin. Computers, Materials & Continua, 2026, Issue 1, pp. 2006-2029 (24 pages)
Existing feature selection methods for intrusion detection systems in the Industrial Internet of Things often suffer from local optimality and high computational complexity. These challenges hinder traditional IDSs from effectively extracting features while maintaining detection accuracy. This paper proposes an Industrial Internet of Things intrusion detection feature selection algorithm based on an improved whale optimization algorithm (GSLDWOA). The aim is to address the problems that feature selection algorithms are prone to under high-dimensional data, such as local optimality, long detection time, and reduced accuracy. First, the initial population's diversity is increased using a Gaussian mutation mechanism. Then, a non-linear shrinking factor balances global exploration and local development, avoiding premature convergence. Lastly, a variable-step Levy flight operator and a dynamic differential evolution strategy are introduced to improve the algorithm's search efficiency and convergence accuracy in a high-dimensional feature space. Experiments on the NSL-KDD and WUSTL-IIoT-2021 datasets demonstrate that the feature subset selected by GSLDWOA significantly improves detection performance. Compared to the traditional WOA algorithm, the detection rate and F1-score increased by 3.68% and 4.12%. On the WUSTL-IIoT-2021 dataset, accuracy, recall, and F1-score all exceed 99.9%.
Keywords: Industrial Internet of Things; intrusion detection system; feature selection; whale optimization algorithm; Gaussian mutation
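Two of the ingredients named in the abstract, Levy flight steps and Gaussian perturbation, are standard ways to diversify a metaheuristic search. The NumPy sketch below shows a generic Levy step (Mantegna's algorithm) and a Gaussian-perturbed position thresholded into a binary feature mask; it is only an illustration of these operators, not GSLDWOA itself.

```python
import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(42)

def levy_step(dim, beta=1.5):
    """Levy-distributed step via Mantegna's algorithm (adds occasional long jumps)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.normal(0, sigma, dim), rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def mutate_position(position, step_scale=0.1):
    """Perturb a continuous search position with a Levy step plus Gaussian noise,
    then threshold it into a binary feature mask via a sigmoid transfer function."""
    moved = position + step_scale * levy_step(position.size) + rng.normal(0, 0.05, position.size)
    return (1 / (1 + np.exp(-moved)) > 0.5).astype(int)

mask = mutate_position(rng.normal(size=30))     # 30 candidate features
print(mask.sum(), "features selected")
```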
12. Face-Pedestrian Joint Feature Modeling with Cross-Category Dynamic Matching for Occlusion-Robust Multi-Object Tracking
Authors: Qin Hu, Hongshan Kong. Computers, Materials & Continua, 2026, Issue 1, pp. 870-900 (31 pages)
To address the issues of frequent identity switches (IDs) and degraded identification accuracy in multi-object tracking (MOT) under complex occlusion scenarios, this study proposes an occlusion-robust tracking framework based on face-pedestrian joint feature modeling. By constructing a joint tracking model centered on "intra-class independent tracking + cross-category dynamic binding", designing a multi-modal matching metric with spatio-temporal and appearance constraints, and innovatively introducing a cross-category feature mutual verification mechanism and a dual matching strategy, this work effectively resolves the performance degradation of traditional single-category tracking methods caused by short-term occlusion, cross-camera tracking, and crowded environments. Experiments on the Chokepoint_Face_Pedestrian_Track test set demonstrate that, in complex scenes, the proposed method improves the face-pedestrian matching F1 area under the curve (F1 AUC) by approximately 4 to 43 percentage points compared to several traditional methods. The joint tracking model achieves overall performance metrics of IDF1: 85.1825% and MOTA: 86.5956%, representing improvements of 0.91 and 0.06 percentage points, respectively, over the baseline model. Ablation studies confirm the effectiveness of key modules such as the Intersection over Area (IoA) / Intersection over Union (IoU) joint metric and dynamic threshold adjustment, validating the significant role of the cross-category identity matching mechanism in enhancing tracking stability. Our model shows a 16.7% frames-per-second (FPS) drop versus FairMOT (fairness of detection and re-identification in multiple object tracking), with its cross-category binding module adding about 10% overhead, yet it maintains near-real-time performance for essential face-pedestrian tracking at small resolutions.
Keywords: cross-category dynamic binding; joint feature modeling; face-pedestrian association; multi-object tracking; occlusion robustness
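The IoA/IoU metrics mentioned in the ablation study are simple box-overlap measures; IoA normalizes by one box's own area, which is useful when a face box is almost fully contained in a pedestrian box. A minimal, generic implementation:

```python
def box_iou_ioa(box_a, box_b):
    """IoU and IoA for axis-aligned boxes given as (x1, y1, x2, y2).
    IoA divides the overlap by box_a's area only."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter + 1e-9)
    ioa = inter / (area_a + 1e-9)
    return iou, ioa

print(box_iou_ioa((10, 10, 30, 30), (5, 5, 60, 100)))   # face box inside a pedestrian box
```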
13. Long-range masked autoencoder for pre-extraction of trajectory features in within-visual-range maneuver recognition
Authors: Feilong Jiang, Hutao Cui, Yuqing Li, Minqiang Xu, Rixin Wang. Defence Technology (防务技术), 2026, Issue 1, pp. 301-315 (15 pages)
In the field of intelligent air combat, real-time and accurate recognition of within-visual-range (WVR) maneuver actions serves as the foundational cornerstone for constructing autonomous decision-making systems. However, existing methods face two major challenges: traditional feature engineering suffers from insufficient effective dimensionality in the feature space due to kinematic coupling, making it difficult to distinguish the essential differences between maneuvers, while end-to-end deep learning models lack controllability in implicit feature learning and fail to model high-order, long-range temporal dependencies. This paper proposes a trajectory feature pre-extraction method based on a Long-range Masked Autoencoder (LMAE), incorporating three key innovations: (1) Random Fragment High-ratio Masking (RFH-Mask), which enforces the model to learn long-range temporal correlations by masking 80% of the trajectory data while retaining continuous fragments; (2) a Kalman Filter-Guided Objective Function (KFG-OF), integrating trajectory continuity constraints to align the feature space with kinematic principles; and (3) a two-stage decoupled architecture, enabling efficient and controllable feature learning through unsupervised pre-training and frozen-feature transfer. Experimental results demonstrate that LMAE significantly improves the average recognition accuracy for 20-class maneuvers compared to traditional end-to-end models, while significantly accelerating convergence. The contributions of this work lie in introducing high-masking-rate autoencoders into low-information-density trajectory analysis, proposing a feature engineering framework with enhanced controllability and efficiency, and providing a novel technical pathway for intelligent air combat decision-making systems.
Keywords: within-visual-range maneuver recognition; trajectory feature pre-extraction; long-range masked autoencoder; Kalman filter constraints; intelligent air combat
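The masking scheme, keeping a few contiguous visible fragments while hiding roughly 80% of the timesteps, can be illustrated with a short NumPy routine. The fragment count is an assumption and the routine is only a generic stand-in for RFH-Mask.

```python
import numpy as np

def fragment_mask(seq_len, mask_ratio=0.8, n_fragments=4, rng=np.random.default_rng(0)):
    """Reveal a few contiguous fragments and mask everything else, so that about
    `mask_ratio` of the timesteps are hidden (slightly more if fragments overlap)."""
    visible_total = int(round(seq_len * (1 - mask_ratio)))
    frag_lens = np.full(n_fragments, visible_total // n_fragments)
    frag_lens[: visible_total % n_fragments] += 1
    mask = np.ones(seq_len, dtype=bool)                       # True = masked timestep
    starts = np.sort(rng.choice(seq_len - frag_lens.max(), n_fragments, replace=False))
    for start, length in zip(starts, frag_lens):
        mask[start : start + length] = False                  # reveal one fragment
    return mask

mask = fragment_mask(200)
print(mask.mean())      # masked fraction, roughly 0.8
```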
14. Federated Multi-Label Feature Selection via Dual-Layer Hybrid Breeding Cooperative Particle Swarm Optimization with Manifold and Sparsity Regularization
Authors: Songsong Zhang, Huazhong Jin, Zhiwei Ye, Jia Yang, Jixin Zhang, Dongfang Wu, Xiao Zheng, Dingfeng Song. Computers, Materials & Continua, 2026, Issue 1, pp. 1141-1159 (19 pages)
Multi-label feature selection (MFS) is a crucial dimensionality reduction technique aimed at identifying informative features associated with multiple labels. However, traditional centralized methods face significant challenges in privacy-sensitive and distributed settings, often neglecting label dependencies and suffering from low computational efficiency. To address these issues, we introduce a novel framework, Fed-MFSDHBCPSO: federated MFS via a dual-layer hybrid breeding cooperative particle swarm optimization algorithm with manifold and sparsity regularization (DHBCPSO-MSR). Leveraging the federated learning paradigm, Fed-MFSDHBCPSO allows clients to perform local feature selection (FS) using DHBCPSO-MSR. Locally selected feature subsets are encrypted with differential privacy (DP) and transmitted to a central server, where they are securely aggregated and refined through secure multi-party computation (SMPC) until global convergence is achieved. Within each client, DHBCPSO-MSR employs a dual-layer FS strategy. The inner layer constructs sample and label similarity graphs, generates Laplacian matrices to capture the manifold structure between samples and labels, and applies L2,1-norm regularization to sparsify the feature subset, yielding an optimized feature weight matrix. The outer layer uses a hybrid breeding cooperative particle swarm optimization algorithm to further refine the feature weight matrix and identify the optimal feature subset. The updated weight matrix is then fed back to the inner layer for further optimization. Comprehensive experiments on multiple real-world multi-label datasets demonstrate that Fed-MFSDHBCPSO consistently outperforms both centralized and federated baseline methods across several key evaluation metrics.
Keywords: multi-label feature selection; federated learning; manifold regularization; sparse constraints; hybrid breeding optimization algorithm; particle swarm optimization algorithm; privacy protection
15. Multi-modal face parts fusion based on Gabor feature for face recognition (Cited by: 1)
Author: 相燕. High Technology Letters (EI, CAS), 2009, Issue 1, pp. 70-74 (5 pages)
A novel face recognition method, which is a fusion of multi-modal face parts based on Gabor features (MMP-GF), is proposed in this paper. Firstly, the bare face image detached from the normalized image is convolved with a family of Gabor kernels, and then, according to the face structure and the key-point locations, the calculated Gabor images are divided into five parts: Gabor face, Gabor eyebrow, Gabor eye, Gabor nose and Gabor mouth. After that, the multi-modal Gabor features are spatially partitioned into non-overlapping regions and the averages of the regions are concatenated into a low-dimensional feature vector, whose dimension is further reduced by principal component analysis (PCA). In the decision-level fusion, the match results calculated separately from the five parts are combined according to linear discriminant analysis (LDA), and a normalized matching algorithm is used to improve the performance. Experiments on the FERET database show that the proposed MMP-GF method achieves good robustness to expression and age variations.
Keywords: Gabor filter; multi-modal Gabor features; principal component analysis (PCA); linear discriminant analysis (LDA); normalized matching algorithm
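A Gabor kernel bank plus regional averaging of the filter responses, the generic version of the descriptor described above, can be sketched with NumPy and SciPy. Kernel sizes and the region grid are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lambd=10.0, gamma=0.5):
    """Real part of a Gabor kernel built from the standard formula."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(x_t ** 2 + gamma ** 2 * y_t ** 2) / (2 * sigma ** 2)) * \
           np.cos(2 * np.pi * x_t / lambd)

def region_means(response, grid=4):
    """Average a response map over a grid of non-overlapping regions."""
    return np.array([np.mean(b) for row in np.array_split(response, grid, axis=0)
                     for b in np.array_split(row, grid, axis=1)])

face = np.random.rand(64, 64)                                   # stand-in face-part image
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 8, endpoint=False)]
descriptor = np.concatenate(
    [region_means(convolve2d(face, k, mode="same")) for k in bank])
print(descriptor.shape)                                         # 8 orientations x 16 regions
```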
16. Learning Multi-Modality Features for Scene Classification of High-Resolution Remote Sensing Images (Cited by: 1)
Authors: Feng'an Zhao, Xiongmei Zhang, Xiaodong Mu, Zhaoxiang Yi, Zhou Yang. Journal of Computer and Communications, 2018, Issue 11, pp. 185-193 (9 pages)
Scene classification of high-resolution remote sensing (HRRS) images is an important research topic and has been applied broadly in many fields. Deep learning methods have shown high potential in this domain, owing to their powerful ability to learn and characterize complex patterns. However, deep learning methods omit some global and local information of the HRRS image. To this end, in this article we adopt explicit global and local information to provide complementary information to deep models. Specifically, we use a patch-based MS-CLBP method to acquire global and local representations, and then we consider a pretrained CNN model as a feature extractor and extract deep hierarchical features from its fully-connected layers. After Fisher vector (FV) encoding, we obtain a holistic visual representation of the scene image. We view scene classification as a reconstruction procedure and train several class-specific stacked denoising autoencoders (SDAEs), i.e., one SDAE per class, and classify the test image according to the reconstruction error. Experimental results show that our combination method outperforms state-of-the-art deep learning classification methods without employing fine-tuning.
Keywords: feature fusion; multiple features; scene classification; stacked denoising autoencoder
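Classification by reconstruction error, assigning the class whose autoencoder rebuilds the input best, can be illustrated with a toy NumPy example. The per-class "autoencoders" here are simple subspace projections standing in for trained SDAEs; only the decision rule is illustrated.

```python
import numpy as np

def classify_by_reconstruction(x, autoencoders):
    """Return the class whose autoencoder reconstructs x with the lowest error."""
    errors = [np.linalg.norm(x - ae(x)) for ae in autoencoders]
    return int(np.argmin(errors)), errors

# Toy stand-ins: each "autoencoder" projects onto a different class-specific subspace.
rng = np.random.default_rng(0)
bases = [rng.normal(size=(64, 8)) for _ in range(3)]              # one basis per class
autoencoders = [lambda x, B=B: B @ np.linalg.lstsq(B, x, rcond=None)[0] for B in bases]
label, errs = classify_by_reconstruction(rng.normal(size=64), autoencoders)
print(label, np.round(errs, 2))
```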
17. Unsupervised multi-modal image translation based on the squeeze-and-excitation mechanism and feature attention module (Cited by: 1)
Authors: 胡振涛, HU Chonghao, YANG Haoran, SHUAI Weiwei. High Technology Letters (EI, CAS), 2024, Issue 1, pp. 23-30 (8 pages)
Unsupervised multi-modal image translation is an emerging domain of computer vision whose goal is to transform an image from the source domain into many diverse styles in the target domain. However, the advanced approaches available employ a multi-generator mechanism to model different domain mappings, which results in inefficient training of neural networks and mode collapse, leading to inefficient generation of image diversity. To address this issue, this paper introduces a multi-modal unsupervised image translation framework that uses a single generator to perform multi-modal image translation. Specifically, firstly, a domain code is introduced in this paper to explicitly control the different generation tasks. Secondly, this paper brings in the squeeze-and-excitation (SE) mechanism and a feature attention (FA) module. Finally, the model integrates multiple optimization objectives to ensure efficient multi-modal translation. This paper performs qualitative and quantitative experiments on multiple unpaired benchmark image translation datasets and demonstrates the benefits of the proposed method over existing technologies. Overall, the experimental results show that the proposed method is versatile and scalable.
Keywords: multi-modal image translation; generative adversarial network (GAN); squeeze-and-excitation (SE) mechanism; feature attention (FA) module
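The squeeze-and-excitation mechanism referenced in the title is a standard channel-attention block; a minimal PyTorch version is shown below (reduction ratio is an assumption, and this is the generic block rather than the paper's network).

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: global-average-pool each channel, pass through a
    small bottleneck MLP, and rescale the channels by the resulting weights."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                       # x: (batch, C, H, W)
        squeeze = x.mean(dim=(2, 3))            # global average pooling ("squeeze")
        scale = self.fc(squeeze).unsqueeze(-1).unsqueeze(-1)   # per-channel weights
        return x * scale                        # channel-wise recalibration ("excitation")

out = SEBlock(64)(torch.randn(1, 64, 32, 32))
```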
18. A Hand Features Based Fusion Recognition Network with Enhancing Multi-Modal Correlation
Authors: Wei Wu, Yuan Zhang, Yunpeng Li, Chuanyang Li, Yan Hao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, Issue 7, pp. 537-555 (19 pages)
Fusing hand-based features in multi-modal biometric recognition enhances anti-spoofing capabilities. Additionally, it leverages inter-modal correlation to enhance recognition performance. Concurrently, the robustness and recognition performance of the system can be enhanced by judiciously leveraging the correlation among multi-modal features. Nevertheless, two issues persist in multi-modal feature fusion recognition: firstly, efforts to enhance recognition performance in fusion recognition have not comprehensively considered the inter-modality correlations among distinct modalities; secondly, during modal fusion, improper weight selection diminishes the salience of crucial modal features, thereby diminishing the overall recognition performance. To address these two issues, we introduce an enhanced DenseNet multi-modal recognition network founded on feature-level fusion. The information from the three modalities is fused akin to RGB, and the input network augments the correlation between modes through channel correlation. Within the enhanced DenseNet network, the Efficient Channel Attention Network (ECA-Net) dynamically adjusts the weight of each channel to amplify the salience of crucial information in each modal feature. Depthwise separable convolution markedly reduces the training parameters and further enhances the feature correlation. Experimental evaluations were conducted on four multi-modal databases, comprising six unimodal databases, including the multispectral palmprint and palm vein databases from the Chinese Academy of Sciences. The Equal Error Rate (EER) values were 0.0149%, 0.0150%, 0.0099%, and 0.0050%, respectively. In comparison to other network methods for palmprint, palm vein, and finger vein fusion recognition, this approach substantially enhances recognition performance, rendering it suitable for high-security environments with practical applicability. The experiments in this article utilized a modest sample database comprising 200 individuals. The subsequent phase involves preparing to extend the method to larger databases.
Keywords: biometrics; multi-modal; correlation; deep learning; feature-level fusion
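Both building blocks named in the abstract, ECA channel attention and depthwise separable convolution, are standard layers; a generic PyTorch sketch under assumed channel counts follows (not the paper's enhanced DenseNet).

```python
import torch
import torch.nn as nn

class ECALayer(nn.Module):
    """Efficient Channel Attention: weights from a 1-D convolution over the
    channel descriptor, avoiding SE's dimensionality reduction."""
    def __init__(self, k_size=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

    def forward(self, x):                                   # x: (batch, C, H, W)
        y = x.mean(dim=(2, 3)).unsqueeze(1)                 # (batch, 1, C) channel descriptor
        y = torch.sigmoid(self.conv(y)).transpose(1, 2).unsqueeze(-1)   # (batch, C, 1, 1)
        return x * y

def depthwise_separable(in_ch, out_ch):
    """Depthwise convolution followed by a pointwise (1x1) convolution."""
    return nn.Sequential(
        nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch),
        nn.Conv2d(in_ch, out_ch, kernel_size=1))

feat = depthwise_separable(3, 32)(torch.randn(1, 3, 128, 128))
feat = ECALayer()(feat)
```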
19. Multi-Modality and Feature Fusion-Based COVID-19 Detection Through Long Short-Term Memory
Authors: Noureen Fatima, Rashid Jahangir, Ghulam Mujtaba, Adnan Akhunzada, Zahid Hussain Shaikh, Faiza Qureshi. Computers, Materials & Continua (SCIE, EI), 2022, Issue 9, pp. 4357-4374 (18 pages)
The Coronavirus Disease 2019 (COVID-19) pandemic poses worldwide challenges surpassing the boundaries of country, religion, race, and economy. The current benchmark method for the detection of COVID-19 is reverse transcription polymerase chain reaction (RT-PCR) testing. Although this testing method is accurate enough for the diagnosis of COVID-19, it is time-consuming, expensive, expert-dependent, and violates social distancing. In this paper, we propose an effective multi-modality and feature fusion-based (MMFF) COVID-19 detection technique using deep neural networks. For multi-modality, we utilized the cough samples, breath samples, and sound samples of healthy as well as COVID-19 patients from the publicly available COSWARA dataset. Several useful features were extracted from the aforementioned modalities and then fed as input to long short-term memory recurrent neural networks for classification. An extensive set of experimental analyses was performed to evaluate the performance of our proposed approach. The experimental results showed that our proposed approach outperformed four recently published baseline approaches. We believe that our proposed technique will assist potential users to diagnose COVID-19 without the intervention of any expert in a minimal amount of time.
Keywords: COVID-19 detection; long short-term memory; feature fusion; deep learning; audio classification
20. Construction and evaluation of a predictive model for the degree of coronary artery occlusion based on adaptive weighted multi-modal fusion of traditional Chinese and western medicine data (Cited by: 2)
Authors: Jiyu ZHANG, Jiatuo XU, Liping TU, Hongyuan FU. Digital Chinese Medicine, 2025, Issue 2, pp. 163-173 (11 pages)
Objective: To develop a non-invasive predictive model for coronary artery stenosis severity based on adaptive multi-modal integration of traditional Chinese and western medicine data. Methods: Clinical indicators, echocardiographic data, traditional Chinese medicine (TCM) tongue manifestations, and facial features were collected from patients who underwent coronary computed tomography angiography (CTA) in the Cardiac Care Unit (CCU) of Shanghai Tenth People's Hospital between May 1, 2023 and May 1, 2024. An adaptive weighted multi-modal data fusion (AWMDF) model based on deep learning was constructed to predict the severity of coronary artery stenosis. The model was evaluated using metrics including accuracy, precision, recall, F1 score, and the area under the receiver operating characteristic (ROC) curve (AUC). Further performance assessment was conducted through comparisons with six ensemble machine learning methods, data ablation, model component ablation, and various decision-level fusion strategies. Results: A total of 158 patients were included in the study. The AWMDF model achieved excellent predictive performance (AUC = 0.973, accuracy = 0.937, precision = 0.937, recall = 0.929, and F1 score = 0.933). Compared with model ablation, data ablation experiments, and various traditional machine learning models, the AWMDF model demonstrated superior performance. Moreover, the adaptive weighting strategy outperformed alternative approaches, including simple weighting, averaging, voting, and fixed-weight schemes. Conclusion: The AWMDF model demonstrates potential clinical value in the non-invasive prediction of coronary artery disease and could serve as a tool for clinical decision support.
Keywords: coronary artery disease; deep learning; multi-modal; clinical prediction; traditional Chinese medicine diagnosis