Journal Articles
3 articles found
1. Enhancing the generalization capability of 2D array pointer networks through multiple teacher-forcing knowledge distillation
Authors: Qidong Liu, Xin Shen, Chaoyue Liu, Dong Chen, Xin Zhou, Mingliang Xu. Journal of Automation and Intelligence, 2025, Issue 1, pp. 29-38 (10 pages)
The Heterogeneous Capacitated Vehicle Routing Problem (HCVRP), which involves efficiently routing vehicles with diverse capacities to fulfill various customer demands at minimal cost, poses an NP-hard challenge in combinatorial optimization. Recently, reinforcement learning approaches such as 2D Array Pointer Networks (2D-Ptr) have demonstrated remarkable speed in decision-making by modeling multiple agents' concurrent choices as a sequence of consecutive actions. However, these learning-based models often struggle with generalization, meaning they cannot seamlessly adapt to new scenarios with varying numbers of vehicles or customers without retraining. Inspired by the potential of multi-teacher knowledge distillation to harness diverse knowledge from multiple sources and craft a comprehensive student model, we propose to enhance the generalization capability of 2D-Ptr through Multiple Teacher-forcing Knowledge Distillation (MTKD). We initially train 12 unique 2D-Ptr models under various settings to serve as teacher models. Subsequently, we randomly sample a teacher model and a batch of problem instances, focusing on those where the chosen teacher performed best. This teacher model then solves these instances, generating high-reward action sequences to guide knowledge transfer to the student model. We conduct rigorous evaluations across four distinct datasets, each comprising four HCVRP instances of varying scales. Our empirical findings underscore the proposed method's superiority over existing learning-based methods in terms of both computational efficiency and solution quality.
Keywords: vehicle routing problem, multi-teacher knowledge distillation, teacher forcing, pointer network
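The distillation loop the abstract describes (pick the teacher that performs best on a batch, then teacher-force its high-reward action sequence into the student) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: the "teachers" are fixed logit tables, the reward is a stand-in for negative route cost, and all sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
N_ACTIONS, SEQ_LEN, N_TEACHERS = 5, 6, 3  # hypothetical toy sizes

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def greedy_rollout(logits):
    # One action per decoding step (a stand-in for a 2D-Ptr route).
    return logits.argmax(axis=-1)

def reward(actions):
    # Toy reward standing in for negative route cost.
    return -float(np.sum(actions))

# Hypothetical teachers: fixed per-step action logits.
teachers = [rng.normal(size=(SEQ_LEN, N_ACTIONS)) for _ in range(N_TEACHERS)]

# Keep the teacher whose rollout earns the highest reward on this instance.
rollouts = [greedy_rollout(t) for t in teachers]
best = max(range(N_TEACHERS), key=lambda i: reward(rollouts[i]))
target = rollouts[best]  # high-reward action sequence used as the target

# Teacher-forcing distillation: cross-entropy gradient steps push the
# student's per-step logits toward the best teacher's actions.
student = np.zeros((SEQ_LEN, N_ACTIONS))
for _ in range(200):
    p = softmax(student)
    grad = p.copy()
    grad[np.arange(SEQ_LEN), target] -= 1.0  # d(cross-entropy)/d(logits)
    student -= 0.5 * grad

assert np.array_equal(greedy_rollout(student), target)
```

After the gradient steps the student's greedy rollout reproduces the best teacher's action sequence, which is the essence of the teacher-forcing transfer; the real method does this with neural encoders over HCVRP instances rather than fixed tables.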
2. DSBP: Data-Free and Swift Backdoor Purification for Trustworthy Federated Learning via Multi-Teacher Adversarial Distillation
Authors: Gao-Lei Li, Jun Wu, Jian-Hua Li, Yuan-Yuan Zhao, Hui-Juan Lian, Long-Fei Zheng. Journal of Computer Science & Technology, 2025, Issue 6, pp. 1563-1576 (14 pages)
Federated learning (FL) faces severe backdoor threats. Due to the inaccessibility of clean samples, the parameter server cannot clean them up in real time even if poisoning features are discovered. Meanwhile, existing backdoor defense methods always require sacrificing model accuracy or increasing communication delay in exchange for better FL trustworthiness. To address these challenges, we propose a novel data-free and swift backdoor purification (DSBP) scheme based on multi-teacher adversarial distillation to effectively erase various backdoor variants in FL. DSBP treats the purification task as an adversarial game between knowledge inheritance and backdoor inhibition by enforcing the student model to learn the ensemble results of multiple teacher models on reconstructed clean samples, while remaining insensitive to synthetic poisoned samples. In DSBP, we utilize the self-similarity of poisoned features to optimize the trigger generator and accelerate the convergence of DSBP during the adversarial distillation process. We validate the effectiveness of DSBP by comparing it with four state-of-the-art defense methods against three backdoor variants on three datasets. The average attack success rate can be reduced from 96.6% to 2.3% with only 300 rounds.
Keywords: federated learning (FL), multi-teacher distillation, backdoor attack, data-free backdoor purification
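The two-term objective the abstract describes (knowledge inheritance from the teacher ensemble on clean samples, plus backdoor inhibition on triggered samples) can be illustrated with linear toy models. Everything here is a hypothetical stand-in: the teachers are random linear classifiers, the "reconstructed clean samples" are Gaussian noise, and the trigger is fixed rather than adversarially learned by a generator as in the actual DSBP scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D, C = 64, 8, 3  # samples, feature dim, classes (toy sizes)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical teacher models (linear classifiers) and reconstructed
# "clean" samples; the real scheme synthesizes these data-free.
teachers = [rng.normal(size=(D, C)) for _ in range(4)]
X = rng.normal(size=(N, D))
ensemble = np.mean([softmax(X @ W) for W in teachers], axis=0)

# Fixed synthetic trigger pattern (DSBP optimizes this with a generator).
trigger = 0.5 * rng.normal(size=D)
Xt = X + trigger

def loss(W):
    # (1) Knowledge inheritance: match the ensemble soft labels on clean data.
    inherit = -np.mean(np.sum(ensemble * np.log(softmax(X @ W) + 1e-12), axis=-1))
    # (2) Backdoor inhibition: push triggered predictions toward uniform.
    inhibit = -np.mean(np.sum((1.0 / C) * np.log(softmax(Xt @ W) + 1e-12), axis=-1))
    return inherit + 0.1 * inhibit

student = np.zeros((D, C))
before = loss(student)
for _ in range(300):
    grad = X.T @ (softmax(X @ student) - ensemble) / N
    grad += 0.1 * Xt.T @ (softmax(Xt @ student) - 1.0 / C) / N
    student -= 0.5 * grad
after = loss(student)
assert after < before  # the combined purification loss decreases
```

The design point is that the student never sees real training data: both terms are computed from synthetic inputs, which is what makes the purification "data-free".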
3. A Lightweight IoT Malware Detection and Family Classification Method
Authors: Changguang Wang, Ziqi Ma, Qingru Li, Dongmei Zhao, Fangwei Wang. Journal of Computer and Communications, 2024, Issue 4, pp. 201-227 (27 pages)
A lightweight malware detection and family classification system for the Internet of Things (IoT) was designed to solve the difficulty of deploying defense models caused by the limited computing and storage resources of IoT devices. By training complex models with IoT software gray-scale images and utilizing the gradient-weighted class activation mapping technique, the system can identify key code regions that influence model decisions. This allows for the reconstruction of gray-scale images to train a lightweight model called LMDNet for malware detection. Additionally, the multi-teacher knowledge distillation method is employed to train KD-LMDNet, which focuses on classifying malware families. The results indicate that the model's identification speed surpasses that of traditional methods by 23.68%. Moreover, the accuracy achieved on the Malimg dataset for family classification is an impressive 99.07%. Furthermore, with a model size of only 0.45M, it appears well-suited for the IoT environment. Thus, the presented approach can address the challenges associated with malware detection and family classification in IoT devices.
Keywords: IoT security, visual explanations, multi-teacher knowledge distillation, lightweight CNN
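The multi-teacher distillation step used to train the family classifier typically averages temperature-softened teacher predictions into one soft target and trains the student against it with a soft cross-entropy scaled by T^2 (Hinton-style KD). The sketch below illustrates that recipe with invented sizes and random stand-in logits; the temperature choice and teacher count are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
K, N, F = 3, 16, 10  # teachers, samples, malware families (toy sizes)
T = 4.0              # distillation temperature (hypothetical choice)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical pre-computed teacher logits for a batch of gray-scale images.
teacher_logits = rng.normal(size=(K, N, F))

# Averaged temperature-softened teacher predictions form the soft targets.
soft_targets = softmax(teacher_logits / T).mean(axis=0)

def kd_loss(student_logits):
    # Soft cross-entropy against the ensemble targets, scaled by T^2 so the
    # gradient magnitude is comparable to a hard-label loss.
    p = softmax(student_logits / T)
    return -(T * T) * np.mean(np.sum(soft_targets * np.log(p + 1e-12), axis=-1))

# Sanity check: student logits that reproduce the soft targets score no
# worse than uninformative (all-zero) logits.
ideal = T * np.log(soft_targets)
assert kd_loss(ideal) <= kd_loss(np.zeros((N, F)))
```

Averaging softened teachers is what lets a 0.45M-scale student absorb the family-discriminating knowledge of several larger models, which is the lightweight-deployment argument the abstract makes.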