Journal Articles
6 articles found
1. Electricity Theft Detection Method Based on Ensemble Learning and Prototype Learning
Authors: Xinwu Sun, Jiaxiang Hu, Zhenyuan Zhang, Di Cao, Qi Huang, Zhe Chen, Weihao Hu. Journal of Modern Power Systems and Clean Energy (SCIE, EI, CSCD), 2024, Issue 1, pp. 213-224.
With the development of advanced metering infrastructure (AMI), large amounts of electricity consumption data can be collected for electricity theft detection. However, the electricity consumption data are severely imbalanced, which makes training a detection model challenging. To address this, this paper proposes an electricity theft detection method based on ensemble learning and prototype learning, which performs well on imbalanced datasets and on abnormal data with different abnormality levels. In this paper, a convolutional neural network (CNN) and long short-term memory (LSTM) are employed to extract abstract features from electricity consumption data. A prototype per class is obtained by averaging the abstract features of that class and is used to predict the labels of unknown samples. Meanwhile, training the network on different balanced subsets of the training set makes the prototypes representative. Compared with mainstream methods including CNN and random forest (RF), the proposed method is shown to handle electricity theft detection effectively even when abnormal data account for only 2.5% and 1.25% of the normal data. The results show that the proposed method outperforms other state-of-the-art methods.
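The prototype step described above (per-class prototypes as means of learned features, prediction by nearest prototype) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: it assumes feature vectors have already been extracted (the paper uses a CNN-LSTM backbone) and omits the ensemble over balanced subsets.

```python
import numpy as np

def class_prototypes(features, labels):
    """One prototype per class: the mean of that class's feature vectors."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def predict_by_prototype(queries, classes, protos):
    """Label each query with the class of its nearest prototype (Euclidean)."""
    dists = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Toy example: two well-separated classes in a 2-D feature space.
feats = np.array([[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]])
labs = np.array([0, 0, 1, 1])
classes, protos = class_prototypes(feats, labs)
preds = predict_by_prototype(np.array([[0.1, 0.0], [4.8, 5.2]]), classes, protos)
print(preds)  # → [0 1]
```

In the paper's setting the features would come from the trained CNN-LSTM, and one such classifier would be trained per balanced subset before ensembling.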
Keywords: Electricity theft detection, ensemble learning, prototype learning, imbalanced dataset, deep learning, abnormal level
2. Imputation with Inter-Series Information from Prototypes for Healthcare Time Series
Authors: Zhi-Hao Yu, Lian-Tao Ma, Ya-Sha Wang, Xu Chu. Journal of Computer Science & Technology, 2025, Issue 6, pp. 1499-1511.
Time series with missing values are ubiquitous in healthcare scenarios, presenting significant challenges for analysis. Despite existing methods addressing imputation, they predominantly focus on leveraging intra-series information, neglecting the potential benefits that inter-series information could provide, such as reducing uncertainty and the memorization effect. To bridge this gap, we propose PRIME, the Prototype Recurrent Imputation ModEl, which integrates both intra-series and inter-series information for imputing missing values in irregularly sampled time series. PRIME comprises a prototype memory module for learning inter-series information, a bidirectional gated recurrent unit utilizing prototype information for imputation, and an attentive prototypical refinement module for adjusting imputations. We conduct extensive experiments on four datasets, and the results underscore PRIME's superiority over state-of-the-art models, with up to 26% relative improvement in mean square error. Our code is available at https://jcst.ict.ac.cn/news/382.
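The inter-series idea above — borrowing from "typical" series when a value is missing — can be sketched as attention over a prototype bank. This toy is not PRIME itself (which combines a bidirectional GRU with attentive refinement); the `prototype_impute` function and its distance-based weighting are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def prototype_impute(series, mask, prototypes):
    """Fill missing entries of one series from a bank of prototype series.

    series, mask: (T,) arrays; mask is nonzero where the value is observed.
    prototypes: (K, T) learned "typical" series (the inter-series information).
    """
    obs = mask.astype(bool)
    # Weight each prototype by how well it matches the observed part.
    scores = np.array([-np.sum((series[obs] - p[obs]) ** 2) for p in prototypes])
    w = softmax(scores)                       # attention over prototypes
    blend = (w[:, None] * prototypes).sum(0)  # prototype-weighted series
    out = series.copy()
    out[~obs] = blend[~obs]                   # only missing slots are filled
    return out

protos = np.array([[1.0, 1.0, 1.0, 1.0],
                   [5.0, 5.0, 5.0, 5.0]])
series = np.array([1.1, np.nan, 0.9, 1.0])
mask = np.array([1, 0, 1, 1])
filled = prototype_impute(series, mask, protos)
```

Here the observed values sit near the first prototype, so the missing entry is filled with a value close to 1.0 rather than 5.0.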
Keywords: time series imputation, healthcare, time series analysis, prototype learning
3. CMSL: Cross-modal Style Learning for Few-shot Image Generation
Authors: Yue Jiang, Yueming Lyu, Bo Peng, Wei Wang, Jing Dong. Machine Intelligence Research, 2025, Issue 4, pp. 752-768.
Training generative adversarial networks is data-demanding, which limits the development of these models on target domains with inadequate training data. Recently, researchers have leveraged generative models pretrained on sufficient data and fine-tuned them using small training samples, thus reducing data requirements. However, because these methods lack an explicit focus on target styles and concentrate disproportionately on generative consistency, they perform poorly at diversity preservation, which reflects the adaptation ability of few-shot generative models. To mitigate this diversity degradation, we propose a framework with two key strategies: 1) To obtain more diverse styles from limited training data effectively, we propose a cross-modal module that explicitly obtains the target styles with a style prototype space and text-guided style instructions. 2) To inherit the generation capability from the pretrained model, we constrain the similarity between the generated and source images with a structural discrepancy alignment module by maintaining the structure correlation in multiscale areas. Extensive experiments and analyses demonstrate that our method outperforms state-of-the-art methods in mitigating diversity degradation.
Keywords: Few-shot image generation, cross-modal learning, prototype learning, contrastive learning, computer vision
4. Multi-Label Prototype-Aware Structured Contrastive Distillation
Authors: Yuelong Xia, Yihang Tong, Jing Yang, Xiaodi Sun, Yungang Zhang, Huihua Wang, Lijun Yun. Tsinghua Science and Technology, 2025, Issue 4, pp. 1808-1830.
Knowledge distillation has demonstrated considerable success in multi-class single-label learning. However, its direct application to multi-label learning proves challenging due to the complex correlations in multi-label structures, causing student models to overlook the more finely structured semantic relations present in the teacher model. In this paper, we present a solution called multi-label prototype-aware structured contrastive distillation, comprising two modules: Prototype-aware Contrastive Representation Distillation (PCRD) and Prototype-aware Cross-image Structure Distillation (PCSD). The PCRD module maximizes the mutual information of the prototype-aware representation between the student and teacher, ensuring semantic representation structure consistency to improve intra-class compactness and inter-class dispersion of representations. In the PCSD module, we introduce sample-to-sample and sample-to-prototype structured contrastive distillation to model prototype-aware cross-image structure consistency, guiding the student model to maintain a coherent label semantic structure with the teacher across multiple instances. To enhance the stability of prototype guidance, we introduce batch-wise dynamic prototype correction for updating class prototypes. Experimental results on three public benchmark datasets validate the effectiveness of our proposed method, demonstrating its superiority over state-of-the-art methods.
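The sample-to-prototype contrastive term mentioned above can be sketched as an InfoNCE-style loss between student embeddings and teacher class prototypes. This is a hedged single-label simplification for illustration only: the paper's multi-label formulation, sample-to-sample terms, and dynamic prototype correction are omitted.

```python
import numpy as np

def l2norm(x):
    """Normalize rows to unit length so dot products are cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def sample_to_prototype_loss(student_emb, teacher_protos, labels, tau=0.1):
    """InfoNCE-style loss: pull each student embedding toward the teacher's
    prototype of its class, push it away from other classes' prototypes."""
    s = l2norm(student_emb)      # (N, D) student embeddings
    p = l2norm(teacher_protos)   # (C, D) teacher class prototypes
    logits = s @ p.T / tau       # (N, C) similarities / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(labels)), labels].mean()

protos = np.eye(2)  # toy teacher prototypes for 2 classes
aligned = sample_to_prototype_loss(np.eye(2), protos, np.array([0, 1]))
flipped = sample_to_prototype_loss(np.eye(2), protos, np.array([1, 0]))
```

When the student embeddings already match their class prototypes the loss is near zero; with labels flipped it is large, which is the gradient signal that pulls student representations toward the teacher's semantic structure.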
Keywords: multi-label knowledge distillation, Prototype-aware Contrastive Representation Distillation (PCRD), Prototype-aware Cross-image Structure Distillation (PCSD), multi-label prototype learning
5. Prototype-guided cross-task knowledge distillation
Authors: Deng LI, Peng LI, Aming WU, Yahong HAN. Frontiers of Information Technology & Electronic Engineering, 2025, Issue 6, pp. 912-929.
Recently, large-scale pretrained models have revealed their benefits in various tasks. However, due to their enormous computational complexity and storage demands, it is challenging to apply large-scale models to real scenarios. Existing knowledge distillation methods mainly require the teacher model and the student model to share the same label space, which restricts their application in real scenarios. To alleviate the constraint of different label spaces, we propose a prototype-guided cross-task knowledge distillation (ProC-KD) method to migrate the intrinsic local-level object knowledge of the teacher network to various task scenarios. First, to better learn generalized knowledge in cross-task scenarios, we present a prototype learning module that learns the invariant intrinsic local representation of objects from the teacher network. Second, for diverse downstream tasks, a task-adaptive feature augmentation module is proposed to enhance the student network features with the learned generalization prototype representations and to guide the learning of the student network, improving its generalization ability. Experimental results on various visual tasks demonstrate the effectiveness of our approach in cross-task knowledge distillation scenarios.
Keywords: Knowledge distillation, cross-task, prototype learning
6. Prototypical clustered federated learning for heart rate prediction
Authors: Yongjie YIN, Hui RUAN, Yang CHEN, Jiong CHEN, Ziyue LI, Xiang SU, Yipeng ZHOU, Qingyuan GONG. Frontiers of Information Technology & Electronic Engineering, 2025, Issue 10, pp. 1896-1912.
Predicting future heart rate (HR) not only helps in detecting abnormal heart rhythms but also provides timely support for downstream health monitoring services. Existing methods for HR prediction encounter challenges, especially concerning privacy protection and data heterogeneity. To address these challenges, this paper proposes a novel HR prediction framework, PCFedH, which leverages personalized federated learning and prototypical contrastive learning to achieve stable clustering results and more accurate predictions. PCFedH contains two core modules: a prototypical contrastive learning-based federated clustering module, which characterizes data heterogeneity and enhances HR representation to facilitate more effective clustering, and a two-phase soft clustered federated learning module, which enables personalized performance improvements for each local model based on stable clustering results. Experimental results on two real-world datasets demonstrate the superiority of our approach over state-of-the-art methods, achieving an average reduction of 3.1% in mean squared error across both datasets. Additionally, we conduct comprehensive experiments to empirically validate the effectiveness of the key components of the proposed method. Among these, the personalization component is identified as the most crucial aspect of our design, indicating its substantial impact on overall performance.
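The soft clustering step referred to above can be sketched as a softmax assignment of clients to cluster centers based on prototype representations. This is an illustrative sketch only; the client prototype vectors, the temperature `tau`, and the distance-based rule are assumptions, not PCFedH's exact procedure (which derives representations via prototypical contrastive learning and a two-phase scheme).

```python
import numpy as np

def soft_cluster_assignment(client_protos, centers, tau=1.0):
    """Softly assign each client to clusters: softmax over negative squared
    distances between the client's prototype and each cluster center."""
    d2 = ((client_protos[:, None, :] - centers[None, :, :]) ** 2).sum(-1)  # (N, K)
    logits = -d2 / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)      # rows sum to 1

centers = np.array([[0.0, 0.0], [10.0, 0.0]])    # two cluster centers
clients = np.array([[0.2, 0.1], [9.8, -0.3]])    # per-client prototype vectors
w = soft_cluster_assignment(clients, centers)
```

Each row of `w` is a probability vector over clusters; a personalized model can then mix the cluster models with these weights, which is the spirit of the "soft" in soft clustered federated learning.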
Keywords: Federated learning, Heart rate prediction, Prototypical contrastive learning