With the rapid expansion of social media, analyzing emotions and their causes in texts has gained significant importance. Emotion-cause pair extraction enables the identification of causal relationships between emotions and their triggers within a text, facilitating a deeper understanding of expressed sentiments and their underlying reasons. This comprehension is crucial for making informed strategic decisions in various business and societal contexts. However, recent research approaches employing multi-task learning frameworks often face challenges such as the inability to simultaneously model extracted features and their interactions, or inconsistencies in label prediction between emotion-cause pair extraction and independent assistant tasks like emotion and cause extraction. To address these issues, this study proposes an emotion-cause pair extraction methodology that incorporates joint feature encoding and task alignment mechanisms. The model consists of two primary components: first, joint feature encoding simultaneously generates features for emotion-cause pairs and clauses, enhancing feature interactions between emotion clauses, cause clauses, and emotion-cause pairs; second, the task alignment technique is applied to reduce the labeling distance between emotion-cause pair extraction and the two assistant tasks, capturing deep semantic information interactions among tasks. The proposed method is evaluated on a Chinese benchmark corpus using 10-fold cross-validation, assessing key performance metrics such as precision, recall, and F1 score. Experimental results demonstrate that the model achieves an F1 score of 76.05%, surpassing the state of the art by 1.03%. The proposed model exhibits significant improvements in emotion-cause pair extraction (ECPE) and cause extraction (CE) compared to existing methods, validating its effectiveness. This research introduces a novel approach based on joint feature encoding and task alignment mechanisms, contributing to advancements in emotion-cause pair extraction. However, the study's limitation lies in the data sources, potentially restricting the generalizability of the findings.
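A minimal sketch of the pair-feature idea described above, assuming clause embeddings are already available: each candidate emotion-cause pair (i, j) is scored from the two clause embeddings plus a relative-position embedding. All names, dimensions, and the scoring head are illustrative, not the paper's exact joint encoder.

```python
import torch
import torch.nn as nn

class PairEncoder(nn.Module):
    """Builds emotion-cause pair features from per-clause embeddings (hypothetical sketch)."""
    def __init__(self, clause_dim=256, pos_vocab=21, pos_dim=32):
        super().__init__()
        self.pos_emb = nn.Embedding(pos_vocab, pos_dim)    # relative offset clipped to [-10, 10]
        self.pair_proj = nn.Linear(2 * clause_dim + pos_dim, clause_dim)
        self.pair_scorer = nn.Linear(clause_dim, 1)         # emotion-cause pair logit

    def forward(self, clauses):                             # clauses: (batch, n, clause_dim)
        b, n, d = clauses.shape
        emo = clauses.unsqueeze(2).expand(b, n, n, d)       # clause i as emotion candidate
        cau = clauses.unsqueeze(1).expand(b, n, n, d)       # clause j as cause candidate
        idx = torch.arange(n, device=clauses.device)
        offset = (idx.view(1, -1) - idx.view(-1, 1)).clamp(-10, 10) + 10
        pos = self.pos_emb(offset).unsqueeze(0).expand(b, n, n, -1)
        pair = torch.relu(self.pair_proj(torch.cat([emo, cau, pos], dim=-1)))
        return self.pair_scorer(pair).squeeze(-1)           # (batch, n, n) pair logits

logits = PairEncoder()(torch.randn(2, 12, 256))              # 2 documents, 12 clauses each
print(logits.shape)                                          # torch.Size([2, 12, 12])
```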
Feature fusion is an important technique in medical image classification that can improve diagnostic accuracy by integrating complementary information from multiple sources. Recently, Deep Learning (DL) has been widely used in pulmonary disease diagnosis, such as pneumonia and tuberculosis. However, traditional feature fusion methods often suffer from feature disparity, information loss, redundancy, and increased complexity, hindering the further extension of DL algorithms. To solve this problem, we propose a Graph-Convolution Fusion Network with Self-Supervised Feature Alignment (Self-FAGCFN) that addresses the limitations of traditional feature fusion methods in deep learning-based medical image classification for respiratory diseases such as pneumonia and tuberculosis. The network integrates Convolutional Neural Networks (CNNs) for robust feature extraction from two-dimensional grid structures and Graph Convolutional Networks (GCNs) within a Graph Neural Network branch to capture features based on graph structure, focusing on significant node representations. Additionally, an Attention-Embedding Ensemble Block is included to capture critical features from GCN outputs. To ensure effective feature alignment between the pre- and post-fusion stages, we introduce a feature alignment loss that minimizes disparities. Moreover, to address the remaining limitations of the proposed method, such as inappropriate centroid discrepancies during feature alignment and class imbalance in the dataset, we develop a Feature-Centroid Fusion (FCF) strategy and a Multi-Level Feature-Centroid Update (MLFCU) algorithm, respectively. Extensive experiments on the public datasets LungVision and Chest-Xray demonstrate that the Self-FAGCFN model significantly outperforms existing methods in diagnosing pneumonia and tuberculosis, highlighting its potential for practical medical applications.
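The sketch below illustrates two of the building blocks named above under simplifying assumptions: a plain graph-convolution layer (normalized adjacency propagation) and a feature-alignment penalty that pulls the CNN-branch and GCN-branch embeddings together. The graph construction, Attention-Embedding Ensemble Block, and centroid strategies are not modeled.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W). Illustrative only."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):                       # h: (n, in_dim), adj: (n, n) 0/1
        a_hat = adj + torch.eye(adj.size(0))         # add self-loops
        d_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return F.relu(a_norm @ self.lin(h))

def alignment_loss(cnn_feat, gcn_feat):
    """Feature-alignment penalty: minimize the disparity between the two branch embeddings."""
    return F.mse_loss(F.normalize(cnn_feat, dim=-1), F.normalize(gcn_feat, dim=-1))

nodes = torch.randn(8, 64)                            # 8 graph nodes (e.g., image regions)
adj = (torch.rand(8, 8) > 0.5).float()
adj = ((adj + adj.t()) > 0).float()                   # symmetric toy adjacency
gcn_out = SimpleGCNLayer(64, 128)(nodes, adj)
cnn_out = torch.randn(8, 128)                         # stand-in for CNN branch features
print(alignment_loss(cnn_out, gcn_out).item())
```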
In the realm of data privacy protection, federated learning aims to collaboratively train a global model. However, heterogeneous data between clients presents challenges, often resulting in slow convergence and inadequate accuracy of the global model. Utilizing shared feature representations alongside customized classifiers for individual clients emerges as a promising personalized solution. Nonetheless, previous research has frequently neglected the integration of global knowledge into local representation learning and the synergy between global and local classifiers, thereby limiting model performance. To tackle these issues, this study proposes a hierarchical optimization method for federated learning with feature alignment and the fusion of classification decisions (FedFCD). FedFCD regularizes the relationship between global and local feature representations to achieve alignment and incorporates decision information from the global classifier, facilitating the late fusion of decision outputs from both global and local classifiers. Additionally, FedFCD employs a hierarchical optimization strategy to flexibly optimize model parameters. Through experiments on the Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets, we demonstrate the effectiveness and superiority of FedFCD. For instance, on the CIFAR-100 dataset, FedFCD exhibited a significant improvement in average test accuracy of 6.83% compared to four outstanding personalized federated learning approaches. Furthermore, extended experiments confirm the robustness of FedFCD across various hyperparameter values.
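A hedged sketch of the two ideas described above, not the authors' exact FedFCD objective: the client aligns its local representation with the frozen global model's representation, and the global and local classifier logits are fused late before the classification loss. The fusion weight and regularization strength are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def local_training_step(x, y, local_net, global_net, align_weight=0.1, fuse_alpha=0.5):
    local_feat = local_net.encoder(x)
    with torch.no_grad():
        global_feat = global_net.encoder(x)            # frozen global representation
    # (1) feature-alignment regularizer between local and global representations
    align = F.mse_loss(local_feat, global_feat)
    # (2) late fusion of global and local classification decisions
    logits = fuse_alpha * local_net.head(local_feat) + \
             (1 - fuse_alpha) * global_net.head(local_feat)
    return F.cross_entropy(logits, y) + align_weight * align

class SmallNet(nn.Module):
    def __init__(self, in_dim=32, hid=64, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.head = nn.Linear(hid, n_classes)

local_net, global_net = SmallNet(), SmallNet()
loss = local_training_step(torch.randn(16, 32), torch.randint(0, 10, (16,)),
                           local_net, global_net)
print(loss.item())
```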
In the textile industry, the presence of defects on the surface of fabric is an essential factor in determining fabric quality. Therefore, identifying fabric defects forms a crucial part of the fabric production process. Traditional fabric defect detection algorithms can only detect specific materials and specific fabric defect types; in addition, their detection efficiency is low, and their detection results are relatively poor. Deep learning-based methods have many advantages in the field of fabric defect detection; however, such methods are less effective in identifying multi-scale fabric defects and defects with complex shapes. Therefore, we propose an effective algorithm, namely multi-layer feature extraction combined with deformable convolution (MFDC), for fabric defect detection. In MFDC, multi-layer feature extraction is used to fuse the underlying location features with high-level classification features through a horizontally connected top-down architecture to improve the detection of multi-scale fabric defects. On this basis, a deformable convolution is added to solve the problem of the algorithm's weak detection ability for irregularly shaped fabric defects. In this approach, RoI Align and Cascade R-CNN are integrated to enhance the adaptability of the algorithm to materials with complex patterned backgrounds. The experimental results show that the MFDC algorithm can achieve good detection results for both multi-scale fabric defects and defects with complex shapes, at the expense of a small increase in detection time.
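The snippet below shows the general deformable-convolution technique named above using torchvision's DeformConv2d: a small auxiliary convolution predicts per-position sampling offsets so the kernel can follow irregular defect shapes. It is a sketch of the mechanism, not the exact MFDC block or backbone.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    """Deformable 3x3 convolution driven by a learned offset field (illustrative)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position -> 2 * 3 * 3 = 18 channels
        self.offset_conv = nn.Conv2d(in_ch, 18, kernel_size=3, padding=1)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        offsets = self.offset_conv(x)
        return self.deform_conv(x, offsets)

feat = torch.randn(1, 64, 56, 56)           # a feature map from the detection backbone
out = DeformableBlock(64, 128)(feat)
print(out.shape)                             # torch.Size([1, 128, 56, 56])
```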
Audio-visual scene classification (AVSC) poses a formidable challenge owing to the intricate spatial-temporal relationships exhibited by audio-visual signals, coupled with the complex spatial patterns of objects and textures found in visual images. The focus of recent studies has predominantly revolved around extracting features from diverse neural network structures, inadvertently neglecting the acquisition of semantically meaningful regions and crucial components within audio-visual data. The authors present a feature pyramid attention network (FPANet) for audio-visual scene understanding, which extracts semantically significant characteristics from audio-visual data. The authors' approach builds multi-scale hierarchical features of sound spectrograms and visual images using a feature pyramid representation and localises the semantically relevant regions with a feature pyramid attention module (FPAM). A dimension alignment (DA) strategy is employed to align feature maps from multiple layers, a pyramid spatial attention (PSA) module to spatially locate essential regions, and a pyramid channel attention (PCA) module to pinpoint significant temporal frames. Experiments on visual scene classification (VSC), audio scene classification (ASC), and AVSC tasks demonstrate that FPANet achieves performance on par with state-of-the-art (SOTA) approaches, with a 95.9 F1-score on the ADVANCE dataset and a relative improvement of 28.8%. Visualisation results show that FPANet can prioritise semantically meaningful areas in audio-visual signals.
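A minimal sketch, under stated assumptions, of two ingredients mentioned above: a dimension-alignment step that resizes and projects a pyramid level to a common shape, and a simple spatial-attention gate that emphasizes relevant locations. Module names and shapes are illustrative, not the actual DA/PSA implementations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DimensionAlign(nn.Module):
    """Resize + 1x1-project a feature map to a common size/channel count before attention."""
    def __init__(self, in_channels, out_channels, out_size):
        super().__init__()
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.out_size = out_size

    def forward(self, x):
        return self.proj(F.interpolate(x, size=self.out_size, mode='bilinear',
                                       align_corners=False))

class SpatialAttention(nn.Module):
    """Per-location relevance weights over a feature map (toy spatial attention)."""
    def __init__(self, channels):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):                                 # x: (B, C, H, W)
        b, c, h, w = x.shape
        attn = torch.softmax(self.score(x).view(b, 1, h * w), dim=-1).view(b, 1, h, w)
        return x * attn * (h * w), attn                    # rescale so the mean weight is ~1

low = torch.randn(2, 256, 28, 28)                          # a mid-level pyramid feature map
aligned = DimensionAlign(256, 128, (14, 14))(low)
attended, weights = SpatialAttention(128)(aligned)
print(attended.shape, weights.shape)                       # (2,128,14,14) (2,1,14,14)
```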
To address the challenge of missing modal information in entity alignment and to mitigate information loss or bias arising from modal heterogeneity during fusion, while also capturing shared information across modalities, this paper proposes a Multi-modal Pre-synergistic Entity Alignment model based on Cross-modal Mutual Information Strategy Optimization (MPSEA). The model first employs independent encoders to process multi-modal features, including text, images, and numerical values. Next, a multi-modal pre-synergistic fusion mechanism integrates graph structural and visual modal features into the textual modality as preparatory information. This pre-fusion strategy enables unified perception of heterogeneous modalities at the model's initial stage, reducing discrepancies during the fusion process. Finally, using cross-modal deep perception reinforcement learning, the model achieves adaptive multi-level feature fusion between modalities, supporting the learning of more effective alignment strategies. Extensive experiments on multiple public datasets show that the MPSEA method achieves gains of up to 7% in Hits@1 and 8.2% in MRR on the FBDB15K dataset, and up to 9.1% in Hits@1 and 7.7% in MRR on the FBYG15K dataset, compared to existing state-of-the-art methods. These results confirm the effectiveness of the proposed model.
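A toy rendering of the pre-synergistic fusion idea described above: graph-structure and visual embeddings are projected into the textual space and gated into the text representation before the main fusion stage. This is purely illustrative; the independent encoders and the reinforcement-learning fusion stage of MPSEA are not modeled, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class PreSynergisticFusion(nn.Module):
    """Inject projected image/graph features into the text embedding as preparatory information."""
    def __init__(self, text_dim=300, img_dim=512, graph_dim=128):
        super().__init__()
        self.img_to_text = nn.Linear(img_dim, text_dim)
        self.graph_to_text = nn.Linear(graph_dim, text_dim)
        self.gate = nn.Linear(3 * text_dim, text_dim)

    def forward(self, text_emb, img_emb, graph_emb):
        img_p = self.img_to_text(img_emb)
        graph_p = self.graph_to_text(graph_emb)
        gate = torch.sigmoid(self.gate(torch.cat([text_emb, img_p, graph_p], dim=-1)))
        return text_emb + gate * (img_p + graph_p)      # text enriched with the other modalities

fused = PreSynergisticFusion()(torch.randn(5, 300), torch.randn(5, 512), torch.randn(5, 128))
print(fused.shape)                                       # torch.Size([5, 300])
```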
Video classification is an important task in video understanding and plays a pivotal role in the intelligent monitoring of information content. Most existing methods do not consider the multimodal nature of video, and their modality fusion approach tends to be too simple, often neglecting modality alignment before fusion. This research introduces a novel dual-stream multimodal alignment and fusion network named DMAFNet for classifying short videos. The network uses two unimodal encoder modules to extract features within modalities and exploits a multimodal encoder module to learn interactions between modalities. To solve the modality alignment problem, contrastive learning is introduced between the two unimodal encoder modules. Additionally, masked language modeling (MLM) and video-text matching (VTM) auxiliary tasks are introduced to improve the interaction between video frames and text modalities through backpropagation of their loss functions. Diverse experiments prove the efficiency of DMAFNet in multimodal video classification tasks. Compared with two other mainstream baselines, DMAFNet achieves the best results on the 2022 WeChat Big Data Challenge dataset.
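The contrastive alignment between the two unimodal encoders can be written as a standard symmetric InfoNCE objective, shown below as a hedged sketch (not necessarily the paper's exact loss): matched video/text pairs in a batch are pulled together while mismatched pairs are pushed apart.

```python
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(video_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired video and text embeddings (illustrative)."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(v.size(0))              # the i-th video matches the i-th text
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = contrastive_alignment_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```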
Considering that real communication signals corrupted by noise are generally nonstationary, and that time-frequency distributions are especially suitable for the analysis of nonstationary signals, time-frequency distributions are introduced for the modulation classification of communication signals. The extracted time-frequency features carry good classification information and are insensitive to signal-to-noise ratio (SNR) variation. Based on the classification accuracy achievable with neural network classifiers, a multilayer perceptron (MLP) classifier with better generalization, fed with the time-frequency feature set, is proposed for classifying six different modulation types. Computer simulations show that the MLP classifier outperforms the decision-theoretic classifier at low SNRs, and classification experiments on real MPSK signals verify the engineering significance of the MLP classifier.
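A toy end-to-end version of this pipeline, under stated assumptions: simple statistics of a spectrogram (one possible time-frequency distribution) are used as features and an MLP separates two synthetic modulation types. The signal models, features, and network size are placeholders, not the paper's feature set.

```python
import numpy as np
from scipy.signal import spectrogram
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
fs = 1000.0

def make_signal(mod_type, n=2048):
    t = np.arange(n) / fs
    symbols = rng.integers(0, 2, size=n // 64).repeat(64)
    if mod_type == 0:                                    # 2FSK-like toy signal
        sig = np.cos(2 * np.pi * (100 + 50 * symbols) * t)
    else:                                                # BPSK-like toy signal
        sig = np.cos(2 * np.pi * 150 * t + np.pi * symbols)
    return sig + 0.3 * rng.standard_normal(n)            # additive noise

def tf_features(sig):
    _, _, sxx = spectrogram(sig, fs=fs, nperseg=128)
    peak_bin = sxx.argmax(axis=0)                         # peak frequency bin per frame
    return np.array([peak_bin.std(), peak_bin.mean(),
                     sxx.max(axis=0).std(), sxx.sum(axis=0).std()])

X = np.array([tf_features(make_signal(k % 2)) for k in range(400)])
y = np.array([k % 2 for k in range(400)])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```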
The multi-layer ceramic capacitor (MLCC) alignment system aims at inter-process automation between the first and second plastic processes. In performance verification tests of the MLCC alignment system, the average alignment rates are 95% for the 3216 chip, 88.5% for the 2012 chip, and 90.8% for the 3818 chip. The MLCC alignment system can be accepted for practical use because the average rate of manual alignment is just 80%. In other words, the developed MLCC alignment system is a substantial upgrade compared with manual alignment. Based on the successfully developed MLCC alignment system, the optimal transfer conditions have been explored using response surface methodology (RSM). Simulations using ADAMS have been performed according to the cube model of the central composite design (CCD). Using MiniTAB, the response surface model has been established based on the simulation results. The optimal conditions obtained from the response optimization tool of MiniTAB have been verified by applying them to the prototype of the MLCC alignment system.
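As a rough illustration of the response-surface step described above (assuming a second-order model over two coded transfer factors), one can fit a quadratic surface to simulated alignment rates and search it for the optimum. The data points below are synthetic placeholders, not the paper's CCD results, and MiniTAB/ADAMS are replaced by generic Python tooling.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(20, 2))                    # two coded transfer factors
y = 90 - 3 * (X[:, 0] - 0.3) ** 2 - 5 * (X[:, 1] + 0.2) ** 2 + rng.normal(0, 0.2, 20)

poly = PolynomialFeatures(degree=2, include_bias=True)   # second-order response surface
model = LinearRegression().fit(poly.fit_transform(X), y)

def neg_response(x):
    return -model.predict(poly.transform(x.reshape(1, -1)))[0]

opt = minimize(neg_response, x0=np.zeros(2), bounds=[(-1, 1), (-1, 1)])
print("optimal coded settings:", opt.x, "predicted alignment rate:", -opt.fun)
```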
Fetal health care is vital in ensuring the health of pregnant women and the fetus. Regular check-ups need to be taken by the mother to determine the status of the fetus's growth and identify any potential problems. To know the status of the fetus, doctors monitor blood reports, ultrasounds, cardiotocography (CTG) data, etc. In this research, we have considered CTG data, which provides information on heart rate and uterine contractions during pregnancy. Several researchers have proposed various methods for classifying the status of fetal growth. Manual processing of CTG data is time-consuming and unreliable, so automated tools should be used to classify fetal health. This study proposes a novel neural network-based architecture, the Dynamic Multi-Layer Perceptron model, evaluated from a single layer to several layers, to classify fetal health. Various strategies were applied to enhance the model's performance, including pre-processing the data with techniques such as balancing, scaling, and normalization, as well as hyperparameter tuning, batch normalization, and early stopping. A comparative analysis of the proposed method against traditional machine learning models showcases its accuracy (97%). An ablation study without any pre-processing techniques is also illustrated. This study provides valuable interpretations for healthcare professionals in the decision-making process.
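A minimal stand-in for the pipeline described above, assuming tabular CTG-style features: scale the inputs and train an MLP with early stopping, reporting per-class metrics. Synthetic, class-imbalanced data replaces the real cardiotocography records, and the layer sizes are illustrative rather than the paper's tuned configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import classification_report

# Imbalanced 3-class toy data standing in for CTG records (normal/suspect/pathological)
X, y = make_classification(n_samples=2000, n_features=21, n_informative=12,
                           n_classes=3, weights=[0.78, 0.14, 0.08], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, test_size=0.2, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 32), early_stopping=True,
                  validation_fraction=0.1, max_iter=500, random_state=0),
)
model.fit(Xtr, ytr)
print(classification_report(yte, model.predict(Xte)))
```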
With the popularisation of intelligent power, power devices come in different shapes, numbers, and specifications. This means that power data has distributional variability, so the model learning process cannot sufficiently extract data features, which seriously affects the accuracy and performance of anomaly detection. Therefore, this paper proposes a deep learning-based anomaly detection model for power data, which integrates a data alignment enhancement technique based on random sampling and an adaptive feature fusion method leveraging dimension reduction. To cope with the distributional variability of power data, this paper develops a sliding window-based data adjustment method for the model, which addresses the problems of high-dimensional feature noise and low-dimensional missing data. To address the problem of insufficient feature fusion, an adaptive feature fusion method based on feature dimension reduction and dictionary learning is proposed to improve the anomaly detection accuracy of the model. To verify the effectiveness of the proposed method, we conducted comparisons through ablation experiments. The experimental results show that, compared with traditional anomaly detection methods, the proposed method not only has an advantage in model accuracy, but also reduces the amount of parameter computation during feature matching and improves detection speed.
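The sketch below combines two of the ingredients named above in a simplified way: a power-like time series is cut into sliding windows, a dictionary is learned on (assumed) normal data, and windows with high reconstruction error are flagged as anomalies. Window size, stride, and the 3-sigma threshold are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)
t = np.arange(5000)
series = np.sin(2 * np.pi * t / 96) + 0.05 * rng.standard_normal(t.size)
series[3000:3010] += 2.0                                   # inject a short anomaly

win = 64
windows = sliding_window_view(series, win)[::8]             # stride-8 sliding windows
n_train = windows.shape[0] // 2                             # assume the first half is normal

dico = MiniBatchDictionaryLearning(n_components=16, random_state=0).fit(windows[:n_train])
codes = dico.transform(windows)
recon = codes @ dico.components_
errors = np.linalg.norm(windows - recon, axis=1)

threshold = errors[:n_train].mean() + 3 * errors[:n_train].std()
print("anomalous window indices:", np.where(errors > threshold)[0])
```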
The effectiveness of recommendation systems heavily relies on accurately predicting user ratings for items based on user preferences and item attributes derived from ratings and reviews. However, the increasing presence of fake user data in these ratings and reviews poses significant challenges, hindering feature extraction, diminishing rating prediction accuracy, and eroding user trust in the system. To tackle this issue, we propose a robust rating prediction model for recommendation systems that integrates fake user detection and multi-layer feature fusion. Our model utilizes a GraphSAGE-based submodel to filter out fake user data from rating data and review texts. To strengthen fake user detection, we enhance GraphSAGE by selecting aggregation neighbors based on the collusion fraud degree among users, and employ an attention mechanism to weigh the contribution of each neighbor during representation aggregation. Furthermore, we introduce a multi-layer feature fusion submodel to integrate diverse features extracted from the filtered ratings and reviews. For deep feature extraction from review texts, we implement a temporal attention mechanism to analyze the relevance of reviews over time. For shallow feature extraction from rating data, we incorporate a trust evaluation mechanism and a cloud model to assess the influence of trusted neighbors' ratings. In our evaluation, we compare our model against six baseline models for fake user detection and four rating prediction models across five datasets. The results demonstrate that our model exhibits significant performance advantages in both fake user detection and rating prediction.
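A hedged sketch of the aggregation idea described above: a GraphSAGE-style layer where each neighbor's contribution is weighted by a learned attention score before being combined with the node's own representation. The collusion-fraud-based neighbor selection and the downstream fusion submodel are not reproduced; all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGEAttnLayer(nn.Module):
    """GraphSAGE-style layer with attention-weighted neighbor aggregation (illustrative)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.attn = nn.Linear(2 * in_dim, 1)             # scores a (node, neighbor) pair
        self.update = nn.Linear(2 * in_dim, out_dim)      # combines self and aggregated features

    def forward(self, h, adj):                            # h: (n, d), adj: (n, n) with 0/1 entries
        n = h.size(0)
        pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                          h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        scores = self.attn(pair).squeeze(-1)              # (n, n) raw attention scores
        scores = scores.masked_fill(adj == 0, float('-inf'))
        weights = torch.nan_to_num(torch.softmax(scores, dim=-1))   # isolated nodes -> zeros
        neigh = weights @ h                                # attention-weighted neighbor aggregate
        return F.relu(self.update(torch.cat([h, neigh], dim=-1)))

h = torch.randn(6, 16)                                     # 6 users, 16-dimensional features
adj = (torch.rand(6, 6) > 0.6).float()
print(SAGEAttnLayer(16, 32)(h, adj).shape)                  # torch.Size([6, 32])
```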
The feature space extracted from vibration signals with various faults is often nonlinear and of high dimension. Nonlinear dimensionality reduction methods such as manifold learning are currently available for extracting low-dimensional embeddings. However, these methods all rely on manual intervention, which brings shortcomings in stability and in suppressing disturbance noise. To extract features automatically, a manifold learning method with self-organizing mapping is introduced for the first time. Under the non-uniform sample distribution reconstructed in the phase space, the expectation-maximization (EM) iteration algorithm is used to divide the local neighborhoods adaptively without manual intervention. After that, the local tangent space alignment (LTSA) algorithm is adopted to compress the high-dimensional phase space into a more truthful low-dimensional representation. Finally, the signal is reconstructed by kernel regression. Several typical cases, including the Lorenz system, an engine fault with a piston pin defect, and a bearing fault with an outer-race defect, are analyzed. Compared with LTSA and the continuous wavelet transform, the results show that the background noise can be fully restrained and the entire periodic repetition of impact components is well separated and identified. A new way to automatically and precisely extract the impulsive components from mechanical signals is proposed.
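A sketch of the core reduction step described above, under simplifying assumptions: the phase space is reconstructed by time-delay embedding and then compressed with LTSA as provided by scikit-learn. The adaptive EM neighborhood division and the kernel-regression reconstruction from the paper are not included, and the embedding dimension, delay, and neighbor count are placeholders.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

rng = np.random.default_rng(0)
t = np.arange(4000)
signal = (np.sin(2 * np.pi * t / 50) + 0.2 * np.sin(2 * np.pi * t / 7)
          + 0.1 * rng.standard_normal(t.size))              # toy vibration-like signal

def delay_embed(x, dim=8, tau=5):
    """Time-delay (phase space) embedding: rows are [x(i), x(i+tau), ..., x(i+(dim-1)tau)]."""
    n = x.size - (dim - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(dim)], axis=1)

phase_space = delay_embed(signal)                            # shape (n_points, 8)
ltsa = LocallyLinearEmbedding(n_neighbors=15, n_components=2, method='ltsa')
low_dim = ltsa.fit_transform(phase_space[:1500])              # subsample for speed
print(low_dim.shape)                                          # (1500, 2)
```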
In this paper, we study the problem of domain adaptation, which is a crucial ingredient in transfer learning with two domains: the source domain with labeled data and the target domain with none or few labels. Domain adaptation aims to extract knowledge from the source domain to improve the performance of the learning task in the target domain. A popular approach to this problem is adversarial training, which is explained by the HΔH-distance theory. However, traditional adversarial network architectures only align the marginal feature distribution in the feature space; the alignment of the class-conditional distribution is not guaranteed. Therefore, we propose a novel method based on pseudo labels and the cluster assumption to avoid incorrect class alignment in the feature space. The experiments demonstrate that our framework improves the accuracy on typical transfer learning tasks.
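A hedged sketch of the target-domain losses implied above, not the paper's full adversarial framework: confident predictions become pseudo-labels, and a conditional-entropy term enforces the cluster assumption that decision boundaries should avoid dense regions of the target data. The confidence threshold and weighting are illustrative.

```python
import torch
import torch.nn.functional as F

def target_domain_loss(target_logits, confidence=0.9, entropy_weight=0.1):
    probs = F.softmax(target_logits, dim=-1)
    max_prob, pseudo_labels = probs.max(dim=-1)

    # pseudo-label cross-entropy on confidently predicted target samples only
    mask = max_prob > confidence
    pl_loss = (F.cross_entropy(target_logits[mask], pseudo_labels[mask])
               if mask.any() else target_logits.new_zeros(()))

    # conditional entropy over all target samples (cluster assumption)
    entropy = -(probs * torch.log(probs.clamp_min(1e-8))).sum(dim=-1).mean()
    return pl_loss + entropy_weight * entropy

logits = torch.randn(32, 10)              # classifier outputs on unlabeled target samples
print(target_domain_loss(logits).item())
```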
One aspect of cybersecurity is the study of the maleficence of Portable Executable (PE) files. Artificial Intelligence (AI) can be employed in such studies, since AI has the ability to discriminate benign from malicious files. In this study, an exclusive set of 29 features was collected from trusted implementations; this set was used as a baseline for the work presented in this research. Decision Tree (DT) and Neural Network Multi-Layer Perceptron (NN-MLPC) algorithms were utilized in this work, both chosen after testing a few diverse procedures. This work implements a method of subgrouping features to answer questions such as: Which feature has a positive impact on accuracy when added? Is it possible to determine a reliable feature set to distinguish a malicious PE file from a benign one? When combining features, would it have any effect on malware detection accuracy in a PE file? Results obtained using the proposed method were improved and yielded several observations. Generally, the obtained results had practical and numerical parts. For the practical part, the number of features and which features are included are the main factors impacting the calculated accuracy; the combination of features is equally crucial in these calculations. The numerical results included accuracies with enhanced values: for example, NN-MLPC attained 0.979 and 0.98, while DT attained accuracies of 0.9825 and 0.986.
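The snippet below illustrates one plausible form of the feature-subgrouping idea above, as an assumption rather than the authors' exact procedure: features are added one at a time and kept only if cross-validated accuracy improves, evaluated for both a Decision Tree and an MLP. Synthetic data stands in for the 29 collected PE features.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic binary malware/benign data with 29 candidate features
X, y = make_classification(n_samples=1500, n_features=29, n_informative=10, random_state=0)

def greedy_feature_subgroup(clf, X, y):
    selected, best = [], 0.0
    for j in range(X.shape[1]):
        candidate = selected + [j]
        score = cross_val_score(clf, X[:, candidate], y, cv=5).mean()
        if score > best:                       # keep feature j only if it helps accuracy
            selected, best = candidate, score
    return selected, best

for name, clf in [("DT", DecisionTreeClassifier(random_state=0)),
                  ("NN-MLPC", MLPClassifier(hidden_layer_sizes=(64,),
                                            max_iter=500, random_state=0))]:
    feats, acc = greedy_feature_subgroup(clf, X, y)
    print(f"{name}: {len(feats)} features kept, CV accuracy {acc:.3f}")
```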
Deep Learning is a powerful technique that is widely applied to Image Recognition and Natural Language Processing tasks, amongst many others. In this work, we propose an efficient technique to utilize pre-trained Convolutional Neural Network (CNN) architectures to extract powerful features from images for object recognition purposes. We have built on the existing concept of extending the learning from pre-trained CNNs to new databases through activations by proposing to consider multiple deep layers. We have exploited the progressive learning that happens at the various intermediate layers of the CNNs to construct Deep Multi-Layer (DM-L) based feature extraction vectors to achieve excellent object recognition performance. Two popular pre-trained CNN architecture models, i.e., VGG_16 and VGG_19, have been used in this work to extract feature sets from three deep fully connected layers, namely “fc6”, “fc7” and “fc8”, from inside the models for object recognition purposes. Using the Principal Component Analysis (PCA) technique, the dimensionality of the DM-L feature vectors has been reduced to form powerful feature vectors that have been fed to an external classifier ensemble for classification, instead of the Softmax-based classification layers of the two original pre-trained CNN models. The proposed DM-L technique has been applied to the benchmark Caltech-101 object recognition database. Conventional wisdom may suggest that feature extraction based on the deepest layer, i.e., “fc8”, compared to “fc6”, will result in the best recognition performance, but our results have proved otherwise for the two considered models. Our experiments have revealed that, for the two models under consideration, the “fc6”-based feature vectors have achieved the best recognition performance. State-of-the-art recognition performances of 91.17% and 91.35% have been achieved by utilizing the “fc6”-based feature vectors for the VGG_16 and VGG_19 models, respectively. The recognition performance has been achieved by considering 30 sample images per class, whereas the proposed system is capable of achieving improved performance by considering all sample images per class. Our research shows that for feature extraction based on CNNs, multiple layers should be considered, and then the best layer can be selected to maximize the recognition performance.
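A hedged sketch of the DM-L pipeline above using torchvision's VGG-16: take the 4096-dimensional "fc6" activations (the first fully connected layer), reduce them with PCA, and train an external classifier. The classifier ensemble and the Caltech-101 loading/preprocessing are omitted; random tensors stand in for preprocessed images, and a single linear SVM replaces the ensemble.

```python
import torch
from torchvision.models import vgg16, VGG16_Weights
from sklearn.decomposition import PCA
from sklearn.svm import SVC

model = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).eval()   # downloads pre-trained weights

def fc6_features(images):
    """images: (N, 3, 224, 224) tensors already preprocessed for VGG-16."""
    with torch.no_grad():
        x = model.features(images)
        x = torch.flatten(model.avgpool(x), 1)
        return model.classifier[0](x)            # first Linear layer ~ "fc6" (4096-d)

images = torch.randn(40, 3, 224, 224)             # placeholder batch of images
labels = torch.randint(0, 4, (40,)).numpy()       # placeholder class labels
feats = fc6_features(images).numpy()

reduced = PCA(n_components=20).fit_transform(feats)          # dimensionality reduction
clf = SVC(kernel='linear').fit(reduced, labels)               # external classifier
print("train accuracy:", clf.score(reduced, labels))
```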