Abstract: To address the resource-scheduling and data-transfer bottlenecks that deep neural network (DNN) models face under conventional slicing and mapping methods, an efficient dynamic DNN slicing and intelligent mapping optimization algorithm based on a network-on-chip (NoC) accelerator is proposed. The algorithm flexibly partitions the computational tasks of a DNN model through dynamic slicing, and combines an intelligent mapping strategy to optimize task allocation and data-flow management within the NoC architecture. Experimental results show that, compared with conventional methods, the algorithm achieves significant improvements in computational throughput, NoC transmission latency, external memory access count, and computational energy efficiency, with particularly strong gains on complex models.
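The slicing-and-mapping idea in the abstract can be illustrated with a minimal sketch: a layer's work is cut into tiles, and each tile is greedily assigned to the least-loaded NoC processing element (PE). The function names, the channel-wise tiling granularity, and the greedy load-balancing policy are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of dynamic slicing + load-balanced NoC mapping.
# Tiling by output channels and greedy least-loaded assignment are assumptions.

def slice_layer(out_channels, tile_size):
    """Split a layer's output channels into contiguous tiles (dynamic slicing)."""
    return [(s, min(s + tile_size, out_channels))
            for s in range(0, out_channels, tile_size)]

def map_tiles(tiles, num_pes):
    """Greedily map each tile to the currently least-loaded processing element."""
    load = [0] * num_pes
    mapping = {}
    for tile in tiles:
        pe = load.index(min(load))   # pick the least-loaded PE
        load[pe] += tile[1] - tile[0]  # load = number of channels in the tile
        mapping[tile] = pe
    return mapping

tiles = slice_layer(64, 20)  # -> [(0, 20), (20, 40), (40, 60), (60, 64)]
mapping = map_tiles(tiles, 3)
```

A real implementation would weight the load by per-tile compute and traffic cost and account for NoC hop distance, which is where the paper's latency and energy gains would come from.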
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 62172280, in part by the Key Scientific Research Projects of Colleges and Universities in Henan Province, China under Grant No. 23A520006, and in part by the Henan Provincial Science and Technology Research Project under Grant No. 222102210199.
Abstract: Deep neural network (DNN) models have achieved remarkable performance across diverse tasks, leading to widespread commercial adoption. However, training high-accuracy models demands extensive data, substantial computational resources, and significant time investment, making them valuable assets vulnerable to unauthorized exploitation. To address this issue, this paper proposes an intellectual property (IP) protection framework for DNN models based on feature layer selection and hyper-chaotic mapping. Firstly, a sensitivity-based importance evaluation algorithm is used to identify the key feature layers for encryption, effectively protecting the core components of the model. Next, the L1 regularization criterion is applied to further select high-weight features that significantly impact the model's performance, ensuring that the encryption process minimizes performance loss. Finally, a dual-layer encryption mechanism is designed, introducing perturbations into the weight values and utilizing hyper-chaotic mapping to disrupt channel information, further enhancing the model's security. Experimental results demonstrate that encrypting only a small subset of parameters effectively reduces model accuracy to random-guessing levels while ensuring full recoverability. The scheme exhibits strong robustness against model pruning and fine-tuning attacks and maintains consistent performance across multiple datasets, providing an efficient and practical solution for authorization-based DNN IP protection.
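The dual-layer encryption described above (weight perturbation plus chaotic channel shuffling) can be sketched as follows. For simplicity this uses the classic logistic map as a stand-in for the paper's hyper-chaotic map; the function names and the additive-perturbation/row-permutation scheme are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np

def logistic_sequence(n, x0=0.3, r=3.99):
    """Chaotic keystream from the logistic map (stand-in for a hyper-chaotic map)."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        xs[i] = x
    return xs

def encrypt_weights(w, key=0.3):
    """Layer 1: additive keyed perturbation; layer 2: keyed channel (row) shuffle."""
    perturb = logistic_sequence(w.size, x0=key).reshape(w.shape)
    perm = np.argsort(logistic_sequence(w.shape[0], x0=key + 0.1))
    return (w + perturb)[perm], perm

def decrypt_weights(enc, perm, key=0.3):
    """Invert the shuffle (argsort of perm), then subtract the same keystream."""
    w = enc[np.argsort(perm)]
    perturb = logistic_sequence(w.size, x0=key).reshape(w.shape)
    return w - perturb

w = np.random.randn(4, 3)          # toy weight matrix: 4 channels x 3 inputs
enc, perm = encrypt_weights(w)
restored = decrypt_weights(enc, perm)
```

Both layers are exactly invertible given the key, which matches the abstract's claim that the model is fully recoverable after authorization, while the perturbed and shuffled weights leave the unauthorized model at chance accuracy.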
Abstract: To address the excessively long training time of DBN-DNNs in speech recognition, a fast training method for DBN-DNN networks is proposed. Aiming to reduce the computational cost of error backpropagation, the method accelerates training by alternating the number of network layers updated at each parameter update; two implementation strategies are also designed: gradually reducing the frequency of global network updates, and gradually reducing the number of layers updated. This training method can be combined with a variety of DNN training-acceleration algorithms. Experimental results show that, without degrading recognition accuracy, the method achieves satisfactory speedups both when used on its own and when combined with DNN acceleration algorithms such as stochastic data sweeping (SDS) and ASGD.
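The alternating-depth update schedule can be sketched with a minimal scheduling function: most steps update only the top layers (cheap backpropagation), while a full-depth update happens periodically. The function name, the `top_k`/`full_every` parameters, and the exact schedule are illustrative assumptions about the idea in the abstract, not the paper's strategy.

```python
# Hedged sketch: decide which layers receive gradient updates at each step.
# Restricting most steps to the top layers shortens backpropagation, while
# periodic full updates keep the lower layers training.

def layers_to_update(step, num_layers, top_k=2, full_every=4):
    """Return indices of layers whose parameters are updated this step."""
    if step % full_every == 0:
        return list(range(num_layers))                   # full global update
    return list(range(num_layers - top_k, num_layers))   # top layers only

# Example schedule for a 5-layer network over the first 6 steps.
schedule = [layers_to_update(s, num_layers=5) for s in range(6)]
```

The two strategies in the abstract correspond to increasing `full_every` over training (fewer global updates) and shrinking `top_k` over training (fewer updated layers); because the schedule only gates which gradients are applied, it composes naturally with data-level accelerators such as SDS or ASGD.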