Journal Articles
2 articles found
DeAOff: Dependence-Aware Offloading of Decoder-Based Generative Models for Edge Computing
1
Authors: Ning Jiahong, Yang Tingting, Zheng Ce, Wang Xinghan, Feng Ping, Zhang Xiufeng. China Communications, 2025, No. 7, pp. 14-29 (16 pages).
This paper presents an algorithm named the dependency-aware offloading framework (DeAOff), which is designed to optimize the deployment of Gen-AI decoder models in mobile edge computing (MEC) environments. These models, such as decoders, pose significant challenges due to their inter-layer dependencies and high computational demands, especially under edge resource constraints. To address these challenges, we propose a two-phase optimization algorithm that first handles dependency-aware task allocation and subsequently optimizes energy consumption. By modeling the inference process as directed acyclic graphs (DAGs) and applying constraint relaxation techniques, our approach effectively reduces execution latency and energy usage. Experimental results demonstrate that our method achieves a reduction of up to 20% in task completion time and approximately 30% savings in energy consumption compared to traditional methods. These outcomes underscore our solution's robustness in managing complex sequential dependencies and dynamic MEC conditions, enhancing quality of service. Thus, our work presents a practical and efficient resource optimization strategy for deploying models in resource-constrained MEC scenarios.
Keywords: dependency-aware offloading (DeAOff); directed acyclic graph (DAG); generative AI (Gen-AI); mobile edge computing (MEC)
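The first phase of the two-phase scheme described in the abstract (dependency-aware task allocation over a DAG of decoder layers) can be sketched in miniature. The code below is our own illustration, not the paper's algorithm: it topologically orders the layers and greedily places each one locally or on the edge, respecting predecessor finish times. All function names, the cost model, and the numbers are hypothetical.

```python
# Toy sketch of dependency-aware allocation over a layer DAG.
# dag maps each layer to the set of layers it depends on.
from graphlib import TopologicalSorter

def allocate(dag, local_cost, edge_cost, tx_cost):
    """Greedily place each layer on 'local' or 'edge'.

    A layer cannot start before all of its predecessors finish;
    offloading a layer pays a one-off transmission cost for its inputs.
    Returns (placement dict, estimated makespan).
    """
    finish = {}       # earliest finish time per layer
    placement = {}
    for node in TopologicalSorter(dag).static_order():
        ready = max((finish[p] for p in dag[node]), default=0.0)
        t_local = ready + local_cost[node]
        t_edge = ready + tx_cost[node] + edge_cost[node]
        placement[node] = 'local' if t_local <= t_edge else 'edge'
        finish[node] = min(t_local, t_edge)
    return placement, max(finish.values())

# Four decoder layers: l1 feeds l2 and l3, which both feed l4.
dag = {'l1': set(), 'l2': {'l1'}, 'l3': {'l1'}, 'l4': {'l2', 'l3'}}
local = {'l1': 4, 'l2': 4, 'l3': 4, 'l4': 4}
edge = {'l1': 1, 'l2': 1, 'l3': 1, 'l4': 1}
tx = {'l1': 1, 'l2': 1, 'l3': 1, 'l4': 1}
placement, makespan = allocate(dag, local, edge, tx)
print(placement, makespan)
```

In this toy instance the fast edge wins for every layer even after paying the transmission cost; the paper's second phase, which optimizes energy under the resulting schedule, is not modeled here.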
Improving Cache Management with Redundant RDDs Eviction in Spark (cited by 2)
2
Authors: Yao Zhao, Jian Dong, Hongwei Liu, Jin Wu, Yanxin Liu. Computers, Materials & Continua (SCIE, EI), 2021, No. 7, pp. 727-741 (15 pages).
Efficient cache management plays a vital role in in-memory data-parallel systems such as Spark, Tez, Storm and HANA. Recent research, notably on the Least Reference Count (LRC) and Most Reference Distance (MRD) policies, has shown that dependency-aware cache management that considers the application's directed acyclic graph (DAG) performs well in Spark. However, these policies ignore deeper relationships between RDDs and cache redundant RDDs that share the same child RDDs, which degrades memory performance. Hence, in memory-constrained situations, systems may encounter a performance bottleneck due to frequent data block replacement. In addition, the prefetch mechanisms in some cache management policies, such as MRD, are hard to trigger. In this paper, we propose a new cache management method called RDE (Redundant Data Eviction) that fully utilizes an application's DAG information to optimize cache management. By considering both RDDs' dependencies and the reference sequence, we effectively evict RDDs with redundant features and free memory for incoming data blocks. Experiments show that RDE improves performance by an average of 55% compared to LRU, and by up to 48% and 20% compared to LRC and MRD, respectively. RDE also shows less sensitivity to memory bottlenecks, which means better availability in memory-constrained environments.
Keywords: dependency-aware cache management; in-memory computing; Spark
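One simplified reading of the redundancy the abstract describes (multiple cached RDDs feeding exactly the same set of child RDDs) can be illustrated as follows. This is our own toy sketch, not the paper's RDE algorithm: the function name, data structures, and the keep-one-representative rule are hypothetical.

```python
# Toy sketch: among cached RDDs whose child sets are identical,
# keep one representative and mark the rest as evictable.
from collections import defaultdict

def redundant_evictions(children, cached):
    """children: {rdd: frozenset of child RDDs}; cached: set of cached RDDs.

    Returns the cached RDDs that are redundant under the
    same-children criterion and can be evicted.
    """
    groups = defaultdict(list)
    for rdd in sorted(cached):          # sort for a deterministic choice
        groups[children[rdd]].append(rdd)
    evict = set()
    for same_children in groups.values():
        evict.update(same_children[1:])  # keep the first as representative
    return evict

# 'a' and 'b' both feed only 'd', so caching both is redundant here.
children = {'a': frozenset({'d'}), 'b': frozenset({'d'}), 'c': frozenset({'e'})}
print(redundant_evictions(children, {'a', 'b', 'c'}))  # prints {'b'}
```

The paper's actual policy also weighs the reference sequence when choosing which RDD to keep; this sketch only shows the grouping step.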