Journal Articles
3 articles found
1. P4DE-CI: Privacy and Delay Dual-Driven Device-Edge Collaborative Inference for Intelligent Internet of Things
Authors: Han Shujun, Yan Kaiwen, Zhang Wenzhao, Xu Xiaodong, Wang Bizhu, Tao Xiaofeng. China Communications, 2025, No. 12, pp. 224-239 (16 pages).
In 6G, artificial intelligence represented by deep neural networks (DNN) will unleash its potential and empower IoT applications to transform into intelligent IoT applications. However, full DNN-based inference is difficult to carry out on resource-constrained intelligent IoT devices and suffers privacy leakage when offloaded to the cloud or to mobile edge computation servers (MECs). In this paper, we formulate a privacy and delay dual-driven device-edge collaborative inference (P4DE-CI) system to preserve the privacy of raw data while accelerating the intelligent inference process, where the intelligent IoT devices run the front-end part of the DNN model and the MECs execute the back-end part. Considering three typical privacy leakage models and the end-to-end delay of collaborative DNN-based inference, we define a novel intelligent inference quality-of-service (I2-QoS) metric as the weighted summation of the inference latency and the privacy preservation level. Moreover, we propose a DDPG-based joint DNN model optimization and resource allocation algorithm to maximize I2-QoS by optimizing the association between intelligent IoT devices and MECs, the DNN model placement decision, and the DNN model partition decision. Experiments carried out on the AlexNet model show that the proposed algorithm performs better in both privacy preservation and inference acceleration.
Keywords: device-edge collaborative inference, DNN model placement and partition, inference delay, privacy-preserving, resource allocation
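The abstract above defines I2-QoS as a weighted summation of inference latency and privacy preservation level. A minimal sketch of such a metric is below; the weight names, the normalization against a latency budget, and the sign convention (higher is better) are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a weighted latency/privacy quality-of-service
# metric in the spirit of I2-QoS. All parameter names are assumptions.

def i2_qos(latency_s: float, privacy_level: float,
           w_delay: float = 0.5, w_privacy: float = 0.5,
           latency_budget_s: float = 1.0) -> float:
    """Higher is better: rewards privacy preservation, penalizes delay."""
    # Normalize latency into [0, 1] against an assumed delay budget.
    norm_delay = min(latency_s / latency_budget_s, 1.0)
    return w_privacy * privacy_level - w_delay * norm_delay

# A partition point that halves latency at the same privacy level
# scores strictly higher under this metric.
fast = i2_qos(latency_s=0.2, privacy_level=0.8)
slow = i2_qos(latency_s=0.4, privacy_level=0.8)
assert fast > slow
```

An optimizer such as the DDPG agent described in the abstract would then search over placement and partition decisions to maximize this scalar objective.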
2. Device-edge collaborative occluded face recognition method based on cross-domain feature fusion
Authors: Puning Zhang, Lei Tan, Zhigang Yang, Fengyi Huang, Lijun Sun, Haiying Peng. Digital Communications and Networks, 2025, No. 2, pp. 482-492 (11 pages).
The lack of facial features caused by wearing masks degrades the performance of facial recognition systems. Traditional occluded face recognition methods cannot integrate the computational resources of the edge layer and the device layer. Besides, previous research fails to consider facial characteristics of both occluded and unoccluded parts. To solve these problems, we put forward a device-edge collaborative occluded face recognition method based on cross-domain feature fusion. Specifically, the device-edge collaborative face recognition architecture maximizes the use of device and edge resources for real-time occluded face recognition. Then, a cross-domain facial feature fusion method is presented which combines facial features from both the explicit domain and the implicit domain. Furthermore, a delay-optimized edge recognition task scheduling method is developed that comprehensively considers the task load, computational power, bandwidth, and delay-tolerance constraints of the edge. This method dynamically schedules face recognition tasks and minimizes recognition delay while ensuring recognition accuracy. The experimental results show that the proposed method achieves an average gain of about 21% in recognition latency, while the accuracy of the face recognition task is basically the same as that of the baseline method.
Keywords: occluded face recognition, cross-domain feature fusion, device-edge collaboration
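The delay-optimized scheduling described above weighs task load, compute power, bandwidth, and delay tolerance when placing recognition tasks. A minimal sketch of one plausible greedy policy is below; the node fields, the delay model (transfer time plus queued compute time), and the on-device fallback are all assumptions for illustration, not the paper's actual algorithm.

```python
# Hypothetical sketch of delay-aware edge task scheduling: assign each
# recognition task to the edge node with the lowest estimated completion
# time, subject to a delay-tolerance constraint. Field names are assumed.

from dataclasses import dataclass

@dataclass
class EdgeNode:
    name: str
    queued_work: float   # pending compute already assigned, in operations
    compute_rate: float  # operations per second
    bandwidth: float     # bytes per second on the link to this node

def estimated_delay(node: EdgeNode, task_ops: float, task_bytes: float) -> float:
    transfer = task_bytes / node.bandwidth              # upload time
    compute = (node.queued_work + task_ops) / node.compute_rate
    return transfer + compute

def schedule(nodes, task_ops, task_bytes, delay_tolerance):
    feasible = [n for n in nodes
                if estimated_delay(n, task_ops, task_bytes) <= delay_tolerance]
    if not feasible:
        return None  # assumed fallback: run recognition on-device
    return min(feasible, key=lambda n: estimated_delay(n, task_ops, task_bytes))
```

Under this sketch, a lightly loaded node wins over a congested one even when both satisfy the delay-tolerance constraint.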
3. Resource-Constrained Edge AI with Early Exit Prediction
Authors: Rongkang Dong, Yuyi Mao, Jun Zhang. Journal of Communications and Information Networks (EI, CSCD), 2022, No. 2, pp. 122-134 (13 pages).
By leveraging data sample diversity, the early-exit network has recently emerged as a prominent neural network architecture for accelerating the deep learning inference process. However, the intermediate classifiers of the early exits introduce additional computation overhead, which is unfavorable for resource-constrained edge artificial intelligence (AI). In this paper, we propose an early exit prediction mechanism to reduce the on-device computation overhead in a device-edge co-inference system supported by early-exit networks. Specifically, we design a low-complexity module, namely the exit predictor, to guide distinctly "hard" samples to bypass the computation of the early exits. Besides, considering the varying communication bandwidth, we extend the early exit prediction mechanism for latency-aware edge inference, which adapts the prediction thresholds of the exit predictor and the confidence thresholds of the early-exit network via a few simple regression models. Extensive experimental results demonstrate the effectiveness of the exit predictor in achieving a better tradeoff between accuracy and on-device computation overhead for early-exit networks. Moreover, compared with the baseline methods, the proposed method for latency-aware edge inference attains higher inference accuracy under different bandwidth conditions.
Keywords: artificial intelligence (AI), edge AI, device-edge cooperative inference, early-exit network, early exit prediction
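The control flow described in this abstract, a cheap exit predictor routing "hard" samples past the early-exit classifier, can be sketched as follows. The threshold names and the callable interfaces (`exit_predictor`, `early_exit`, `full_model`) are illustrative assumptions, not the paper's actual API.

```python
# Hypothetical sketch of early-exit inference with an exit predictor.
# A low-complexity predictor first estimates whether a sample is likely
# to exit early; predicted-hard samples skip the early-exit classifier
# entirely, saving its extra on-device computation.

def infer(x, exit_predictor, early_exit, full_model,
          predict_threshold=0.5, confidence_threshold=0.9):
    # Step 1: cheap hardness estimate; hard samples bypass the early exit.
    if exit_predictor(x) < predict_threshold:
        return full_model(x), "full (predicted hard)"
    # Step 2: run the early-exit classifier; exit only if confident enough.
    label, confidence = early_exit(x)
    if confidence >= confidence_threshold:
        return label, "early exit"
    # Step 3: low confidence at the early exit; fall through to the backbone.
    return full_model(x), "full (low confidence)"
```

In the latency-aware extension described in the abstract, `predict_threshold` and `confidence_threshold` would not be fixed constants but would be adapted to the available bandwidth via regression models.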