
DoH traffic classification based on knowledge distillation
Abstract: To address the challenges faced by traditional deep learning methods in DoH traffic classification, such as dependence on large amounts of annotated data, a high risk of overfitting, and poor model interpretability, a DoH traffic classification method based on knowledge distillation was proposed. First, a convolutional neural network (CNN) with two convolutional layers and two fully connected layers was designed for training the student and teacher models. Second, the student and teacher models were initialized, with the teacher model being a deep copy of the student model with fixed parameters. Finally, training was performed using a weighted sum of the classification loss and the consistency loss, and the teacher model's parameters were updated by an exponential moving average (EMA) to provide more stable guidance. Experimental results on the CIRA-CIC-DoHBrw-2020 dataset show that, compared with a traditional 1D-CNN model, the proposed method improves precision, recall, and F1 score by 0.13, 0.63, and 0.40 percentage points respectively, demonstrating the effectiveness of knowledge distillation in improving model performance.
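The abstract describes three concrete steps: a small CNN shared by student and teacher, a teacher initialized as a frozen deep copy of the student, and training with a weighted classification-plus-consistency loss combined with EMA updates of the teacher. The sketch below illustrates that mean-teacher loop, assuming PyTorch as the framework; the layer widths, input length (in_features), EMA decay (alpha), and consistency weight (lam) are illustrative assumptions, as the abstract does not specify them.

```python
# Minimal mean-teacher sketch of the method described in the abstract.
# Framework (PyTorch), layer sizes, alpha, and lam are assumptions.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class DoHCNN(nn.Module):
    """1D CNN with two convolutional and two fully connected layers."""
    def __init__(self, in_features=28, num_classes=2):
        super().__init__()
        self.conv1 = nn.Conv1d(1, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool1d(2)
        self.fc1 = nn.Linear(32 * (in_features // 4), 64)
        self.fc2 = nn.Linear(64, num_classes)

    def forward(self, x):  # x: (batch, 1, in_features)
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.flatten(1)
        x = F.relu(self.fc1(x))
        return self.fc2(x)

# Teacher starts as a deep copy of the student with fixed parameters;
# it is never touched by the optimizer, only by the EMA rule below.
student = DoHCNN()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
alpha = 0.99   # assumed EMA decay
lam = 0.5      # assumed consistency-loss weight

def train_step(x, y):
    student.train()
    s_logits = student(x)
    with torch.no_grad():
        t_logits = teacher(x)
    # Weighted sum of classification loss and consistency loss.
    cls_loss = F.cross_entropy(s_logits, y)
    cons_loss = F.mse_loss(F.softmax(s_logits, dim=1),
                           F.softmax(t_logits, dim=1))
    loss = cls_loss + lam * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # EMA update: teacher <- alpha * teacher + (1 - alpha) * student.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1 - alpha)
    return loss.item()

# Usage on a dummy batch of 32 flows with 28 features each:
x = torch.randn(32, 1, 28)
y = torch.randint(0, 2, (32,))
print(train_step(x, y))
```

Because the consistency term pulls the student's predictions toward the EMA teacher's, the teacher acts as a temporally smoothed ensemble of past students, which is what provides the "more stable guidance" the abstract refers to.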
Authors: XIE Yanli; SUN Xuan (Computer School, Beijing Information Science & Technology University, Beijing 100192, China)
Source: Journal of Beijing Information Science and Technology University (Science and Technology Edition), 2024, No. 5, pp. 95-102 (8 pages)
Keywords: knowledge distillation; DoH traffic classification; convolutional neural network (CNN); mean teacher model