Journal Articles
1,415 articles found
1. Age-Optimal Cached Distribution in the Satellite-Integrated Internet of Things via Cross-Slot Directed Graph
Authors: Hu Zhouyong, Li Yue, Zhang Hanxu, Yang Zhihua. 《China Communications》, 2025, No. 6, pp. 300-318 (19 pages).
In the Satellite-integrated Internet of Things (S-IoT), data freshness in time-sensitive scenarios cannot be guaranteed over the time-varying topology by current distribution strategies that aim only to reduce transmission delay. To address this problem, we propose an age-optimal caching distribution mechanism for high-timeliness data collection in the S-IoT. It adopts a freshness metric, the age of information (AoI), over caching-based single-source multi-destinations (SSMDs) transmission, termed Multi-AoI, built on a well-designed cross-slot directed graph (CSG). With the proposed CSG, we optimize the locations of cache nodes by solving a nonlinear integer programming problem that minimizes the Multi-AoI. In particular, we put forward three algorithms for improving the Multi-AoI: the minimum queuing delay algorithm (MQDA) based on node deviation from the average level, the minimum propagation delay algorithm (MPDA) based on reducing node propagation delay, and a delay-balanced algorithm (DBA) based on both node deviation from the average level and propagation delay reduction. Simulation results show that the proposed mechanism effectively improves information freshness compared with a random selection algorithm.
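The abstract builds on the age-of-information metric without restating its definition; the following is the standard textbook formulation of AoI and time-average AoI, included here for readability rather than taken from the paper (the symbols Δ, U, and T are assumptions, not the authors' notation).

```latex
% Instantaneous age of information at the destination at time t,
% where U(t) is the generation time of the freshest update received by time t:
\Delta(t) = t - U(t)
% Time-average AoI over an observation window of length T:
\bar{\Delta} = \frac{1}{T}\int_{0}^{T} \Delta(t)\,dt
```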
Keywords: age of information, cached distribution, satellite-integrated Internet of Things, time-varying graph
2. Application of the Memcached Distributed Cache System (Cited by: 2)
Author: 常广炎. 《电脑编程技巧与维护》, 2017, No. 7, pp. 24-25 (2 pages).
Memcached is a high-performance distributed in-memory object caching system commonly used in dynamic web applications to relieve database load. By caching data and objects in memory, it reduces the number of database reads, thereby lightening database access load, speeding up website response, and improving system query performance. As a result, the distributed system itself no longer needs to handle data caching and gains better scalability.
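As a concrete illustration of the cache-aside pattern the abstract describes, here is a minimal Python sketch using the pymemcache client; the key name, TTL, and the load_from_database stub are placeholders added for illustration, not part of the article.

```python
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))  # Memcached server assumed at the default port

def load_from_database(user_id):
    # Placeholder for the expensive relational-database query the cache is meant to avoid.
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl=300):
    key = f"user:{user_id}"
    value = cache.get(key)                      # 1) try the in-memory cache first
    if value is None:
        value = load_from_database(user_id)     # 2) fall back to the database on a miss
        cache.set(key, str(value), expire=ttl)  # 3) populate the cache for later reads
    return value
```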
Keywords: distributed cache, Cache technology, database
3. A Special-Purpose Accelerator Architecture Tailored to Particle Transport Simulation
Authors: 张建民, 刘津津, 许炜康, 黎铁军. 《国防科技大学学报》 (北大核心), 2025, No. 2, pp. 155-164 (10 pages).
Particle transport simulation is a major application of high-performance computers. As its computational scale keeps growing, general-purpose microprocessors, with their complex single-core structures, cannot adapt to the program's characteristics and struggle to achieve a high performance-to-power ratio. This work therefore extracts and analyzes the program characteristics of non-deterministic numerical simulation of particle transport. Based on these algorithmic characteristics, it customizes an open-source microprocessor core architecture, covering the accelerator pipeline, the branch prediction unit, the multi-level Cache hierarchy, and the main memory design, to build a special-purpose accelerator architecture matched to particle transport programs. Simulation of particle transport programs on an industry-standard architecture simulator shows that, compared with the ARM Cortex-A15, the proposed accelerator architecture achieves a 4.6x performance improvement at equal power and a 3.2x improvement at equal area.
Keywords: particle transport simulation, special-purpose accelerator, program characteristics, branch prediction, multi-level Cache
4. Design and Implementation of a High-Performance PCIe Interface
Authors: 张梅娟, 辛昆鹏, 周迁. 《现代电子技术》 (北大核心), 2025, No. 8, pp. 70-74 (5 pages).
On several processors, the transfer rate over PCIe 2.0 x4 reaches less than 20% of the theoretical bandwidth, peaking at only 380 MB/s, which cannot meet practical application needs. To solve the problem of low PCIe transfer rates on embedded processors, a high-performance PCIe interface is designed that effectively raises the interface's data transfer rate. After a systematic analysis of the performance bottleneck, hardware cache coherence between the PCIe DMA and the processor is added, which removes the heavy software cache-synchronization overhead after each DMA transfer and raises the rate 3.8x to 1,450 MB/s. In hardware, the DMA supports a linked-list mode: a descriptor chain gathers scattered memory regions so that one DMA kick-off can transfer data across multiple non-contiguous memory addresses. The scatter-gather DMA implementation in the software driver is also optimized to make full use of the hardware cache-coherence feature, raising the transfer rate by a further 10% and finally reaching 80% of the PCIe 2.0 x4 theoretical bandwidth. In addition, the interface adopts a multi-channel DMA design supporting up to 8 independent DMA read/write channels, suited to multi-core, multi-task parallel data transfer and further increasing overall transfer bandwidth. Verification shows that the interface is stable and efficient, supports up to 8 channels of parallel data transfer, and each channel reaches 80% of the theoretical rate.
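To make the linked-list (scatter-gather) DMA mode easier to picture, the sketch below models a descriptor chain in Python: each descriptor names one non-contiguous buffer, and a single "start" walks the whole chain. The field names and the software-only walk are illustrative assumptions; the article's hardware descriptor format is not specified here.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Descriptor:
    """One scatter-gather entry: a source buffer plus a link to the next entry."""
    src: bytes                      # stands in for a physical address + length pair
    next: Optional["Descriptor"] = None

def run_dma(head: Descriptor) -> bytes:
    """Model of one DMA kick-off: walk the descriptor chain and gather every segment."""
    gathered = bytearray()
    desc = head
    while desc is not None:
        gathered += desc.src        # hardware would copy this segment to the destination
        desc = desc.next
    return bytes(gathered)

# Three non-contiguous buffers transferred by a single DMA start.
d3 = Descriptor(b"segment-3")
d2 = Descriptor(b"segment-2", next=d3)
d1 = Descriptor(b"segment-1", next=d2)
assert run_dma(d1) == b"segment-1segment-2segment-3"
```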
Keywords: PCIe interface, DMA controller, high-speed data transfer, Cache coherence, multi-channel design, scatter-gather, linked-list mode
5. Design and Verification of a Configurable L1 Instruction Cache for High-Performance DSPs
Authors: 唐俊龙, 高睿禧. 《集成电路与嵌入式系统》, 2025, No. 5, pp. 24-34 (11 pages).
To address the problem that a cache cannot effectively anticipate non-local accesses during program execution, a highly secure, configurable L1 instruction cache design based on a two-level memory structure is proposed. The scheme provides memory-protection mechanisms at two granularities, pages and Cache lines, to ensure data security for users at different privilege levels; it implements internal control registers and a flexibly configurable Cache/SRAM structure that supports fast configuration and extension; and it uses a direct memory access module for efficient interaction with external memory. Module-level verification is carried out on a UVM platform, hit rates under different L1P size configurations are compared, and a 40 nm low-threshold library is used to evaluate the system's delay and power. Experimental results show that the designed cache can switch quickly among five L1P configurations from 32 KB down to 0 KB, meets the requirements of a 600 MHz high-performance DSP, and has a maximum path delay of 1.47 ns and a total power of 309.577 mW.
Keywords: L1 instruction cache, UVM verification, memory protection, DSP, Cache
6. An Instruction Cache Design Scheme for Embedded Microcontrollers
Authors: 王睿, 张艳花. 《电子制作》, 2025, No. 4, pp. 22-25 (4 pages).
In traditional instruction Cache designs, the size of the VTag memory is determined by the processor's bus width together with the instruction Cache's number of lines and line size. This paper proposes an optimized instruction Cache circuit design that shrinks the instruction-fetch address space covered by the Cache, further reducing the VTag memory size without changing the bus width of the Cache's external interface, the number of Cache lines, or the Cache size. The instruction Cache is synthesized with DC (Design Compiler); the results show that, compared with the traditional scheme, the proposed design reduces chip area while significantly raising the circuit frequency, which is of practical value for instruction Cache design in embedded microcontrollers.
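The size reduction claimed in the abstract follows from simple tag-width arithmetic; the sketch below works it out assuming a direct-mapped cache whose VTag array holds one valid bit plus one tag per line. The concrete numbers (a 32-bit bus, 64 lines of 32 bytes, a fetch window shrunk to 1 MB) are assumptions chosen only to illustrate the effect, not figures from the paper.

```python
from math import log2

def vtag_bits(fetch_addr_bits: int, num_lines: int, line_bytes: int) -> int:
    """VTag storage for a direct-mapped I-cache: per line, one valid bit plus the tag."""
    index_bits = int(log2(num_lines))
    offset_bits = int(log2(line_bytes))
    tag_bits = fetch_addr_bits - index_bits - offset_bits
    return num_lines * (1 + tag_bits)

full_bus = vtag_bits(fetch_addr_bits=32, num_lines=64, line_bytes=32)  # cache covers the whole 4 GB bus
reduced = vtag_bits(fetch_addr_bits=20, num_lines=64, line_bytes=32)   # cache covers only a 1 MB fetch window
print(full_bus, reduced)  # 1408 vs. 640 bits: shrinking the covered fetch space shrinks the VTag memory
```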
Keywords: embedded, microcontroller, processor, instruction Cache
7. Rubyphi: Automated Model Checking of Cache Coherence Protocols for gem5
Authors: 徐学政, 方健, 梁少杰, 王璐, 黄安文, 隋京高, 李琼. 《计算机工程与科学》 (北大核心), 2025, No. 7, pp. 1141-1151 (11 pages).
Cache coherence protocols guarantee data consistency in multi-core systems and directly affect memory-subsystem performance, so they have long been a focus of microprocessor design and verification. Protocol design and optimization are usually prototyped quickly in software simulators such as gem5. At the same time, because design errors in a protocol are hard to trigger, hard to locate, and hard to fix in simulation-based testing, formal verification with model checkers such as Murphi is also needed. However, simulator-based protocol design and model-checking-based verification differ greatly in programming language and abstraction level, so designers must build the simulator implementation and the model-checking model separately; this not only increases time cost but also threatens the equivalence of the two. This paper designs and implements Rubyphi, an automated model-checking approach for Cache coherence protocols targeting the gem5 simulator: it extracts the protocol implemented in gem5 and automatically builds the corresponding Murphi model, which is then used to verify the protocol formally. Experiments show that Rubyphi can effectively model and verify the coherence protocols in gem5 and has found 2 bugs in existing gem5 protocols; the issues and their fixes have been confirmed by the community.
Keywords: Cache coherence protocol, multi-core processor, model checking, formal verification
8. FlyCache: Recommendation-driven edge caching architecture for full life cycle of video streaming
Authors: Shaohua Cao, Quancheng Zheng, Zijun Zhan, Yansheng Yang, Huaqi Lv, Danyang Zheng, Weishan Zhang. 《Digital Communications and Networks》, 2025, No. 4, pp. 961-973 (13 pages).
With the rapid development of 5G technology, the proportion of video traffic on the Internet is increasing, putting pressure on the network infrastructure. Edge computing provides a feasible way to optimize video content distribution. However, limited edge-node cache capacity and dynamic user requests make edge caching more complex. We therefore propose FlyCache, a recommendation-driven edge caching network architecture for the full life cycle of video streaming, designed to improve users' Quality of Experience (QoE) and reduce backhaul traffic consumption. FlyCache implements intelligent cache management across three key stages: before playback, during playback, and after playback. Specifically, we introduce a cache placement policy for the before-playback stage, a dynamic prefetching and cache admission policy for the during-playback stage, and a progressive cache eviction policy for the after-playback stage. To validate the effectiveness of FlyCache, we developed a user-behavior-driven edge caching simulation framework incorporating recommendation mechanisms. Experiments conducted on the MovieLens and synthetic datasets demonstrate that FlyCache outperforms other caching strategies in terms of byte hit rate, backhaul traffic, and delayed startup rate.
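A minimal sketch of the three-stage idea, assuming a single edge cache with a byte-capacity limit; the class and method names, popularity scores, and watch-progress thresholds are invented for illustration and are not FlyCache's actual interfaces or policies.

```python
class LifecycleCache:
    """Toy edge cache managed per playback stage: place, prefetch/admit, then evict."""

    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.store = {}      # video_id -> cached size in bytes
        self.progress = {}   # video_id -> watched fraction in [0, 1]

    def used(self) -> int:
        return sum(self.store.values())

    def place_before_playback(self, recommended):
        """Before playback: pre-place recommended videos, most popular first."""
        for video_id, size, popularity in sorted(recommended, key=lambda v: -v[2]):
            if self.used() + size <= self.capacity:
                self.store[video_id] = size
                self.progress[video_id] = 0.0

    def admit_during_playback(self, video_id, size, watched_fraction):
        """During playback: admit (and keep prefetching) only videos the user keeps watching."""
        self.progress[video_id] = watched_fraction
        if video_id not in self.store and watched_fraction >= 0.2 and self.used() + size <= self.capacity:
            self.store[video_id] = size

    def evict_after_playback(self, video_id, fraction_to_drop=0.5):
        """After playback: progressively shrink the finished video instead of dropping it at once."""
        if video_id in self.store:
            self.store[video_id] = int(self.store[video_id] * (1 - fraction_to_drop))
            if self.store[video_id] == 0:
                del self.store[video_id]
```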
Keywords: edge caching, cache architecture, cache placement, cache admission, caching eviction
9. Resource Allocation of UAV-Assisted Mobile Edge Computing Systems with Caching
Authors: Pu Dan, Feng Wenjiang, Zhang Juntao. 《China Communications》, 2025, No. 10, pp. 269-279 (11 pages).
In this paper, an unmanned aerial vehicle (UAV) serves as an aerial base station (ABS) and mobile edge computing (MEC) platform for wireless communication systems. When Internet of Things devices (IoTDs) cannot cope with computation-intensive and/or time-sensitive tasks, part of the tasks is offloaded to the UAV, which processes them with its own computing and caching resources. The burden on the IoTDs is thus relieved while the quality-of-service (QoS) requirements are satisfied. However, owing to the limited resources of the UAV, the cost of the whole system, defined as the weighted sum of energy consumption and time delay with caching, should be further optimized even though the objective function and the constraints are non-convex. Therefore, we first jointly optimize the communication resources B, computing resources F, and offloading rates X with alternating iteration and convex optimization methods, and then determine the caching decision Y with a branch-and-bound (BB) algorithm. Numerical results show that UAV-assisted partial task offloading with content caching is superior to local computing and to a full-offloading mechanism without caching, and that the cost of the whole system is further reduced with our proposed scheme.
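One way to write down the weighted system cost the abstract refers to is sketched below; B, F, X, and Y are the variables named in the abstract, while the weights ω_E, ω_T and the constraint placeholders are assumptions added for illustration rather than the paper's exact formulation.

```latex
\min_{B,\,F,\,X,\,Y}\; C \;=\; \omega_E\, E(B,F,X,Y) \;+\; \omega_T\, T(B,F,X,Y)
\quad \text{s.t.} \quad \text{QoS, bandwidth, computing, and caching-capacity constraints},
\qquad \omega_E + \omega_T = 1 .
```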
Keywords: caching, MEC, resource allocation, UAV
10. Utility-Driven Edge Caching Optimization with Deep Reinforcement Learning under Uncertain Content Popularity
Authors: Mingoo Kwon, Kyeongmin Kim, Minseok Song. 《Computers, Materials & Continua》, 2025, No. 10, pp. 519-537 (19 pages).
Efficient edge caching is essential for maximizing utility in video streaming systems, especially under constraints such as limited storage capacity and dynamically fluctuating content popularity. Utility, defined as the benefit obtained per unit of cache bandwidth usage, degrades when static or greedy caching strategies fail to adapt to changing demand patterns. To address this, we propose a deep reinforcement learning (DRL)-based caching framework built upon the proximal policy optimization (PPO) algorithm. Our approach formulates edge caching as a sequential decision-making problem and introduces a reward model that balances cache hit performance and utility by prioritizing high-demand, high-quality content while penalizing degraded-quality delivery. We construct a realistic synthetic dataset that captures both temporal variations and shifting content popularity to validate our model. Experimental results demonstrate that the proposed method improves utility by up to 135.9% and achieves an average improvement of 22.6% compared with traditional greedy algorithms and long short-term memory (LSTM)-based prediction models. Moreover, our method performs consistently well across a variety of utility functions, workload distributions, and storage limitations, underscoring its adaptability and robustness in dynamic video caching environments.
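A hedged sketch of a reward of the kind the abstract describes, balancing hits against per-bandwidth utility while penalizing degraded-quality delivery; the weights and the exact functional form are assumptions, not the paper's reward model.

```python
def caching_reward(hit: bool, bytes_served: int, bandwidth_used: int,
                   quality_degraded: bool,
                   w_hit: float = 1.0, w_util: float = 0.5, w_penalty: float = 0.8) -> float:
    """Reward for one request: reward cache hits and per-bandwidth benefit, punish degraded quality."""
    utility = bytes_served / max(bandwidth_used, 1)  # benefit per unit of cache bandwidth
    reward = w_hit * (1.0 if hit else 0.0) + w_util * utility
    if quality_degraded:
        reward -= w_penalty
    return reward

# Example: a full-quality cache hit scores higher than a degraded miss.
print(caching_reward(hit=True, bytes_served=4_000_000, bandwidth_used=4_000_000, quality_degraded=False))
print(caching_reward(hit=False, bytes_served=1_000_000, bandwidth_used=4_000_000, quality_degraded=True))
```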
Keywords: edge caching, video-on-demand, reinforcement learning, utility optimization
11. An Efficient Content Caching Strategy for Fog-Enabled Road Side Units in Vehicular Networks
Authors: Faareh Ahmed, Babar Mansoor, Muhammad Awais Javed, Abdul Khader Jilani Saudagar. 《Computer Modeling in Engineering & Sciences》, 2025, No. 9, pp. 3783-3804 (22 pages).
Vehicular networks enable seamless connectivity for exchanging emergency and infotainment content. However, retrieving infotainment data from remote servers often introduces high delays, degrading the Quality of Service (QoS). To overcome this, caching frequently requested content at fog-enabled Road Side Units (RSUs) reduces communication latency. Yet the limited caching capacity of RSUs makes it impractical to store all contents with varying sizes and popularity. This research proposes an efficient content caching algorithm that adapts to dynamic vehicular demands on highways to maximize request satisfaction. The scheme is evaluated against Intelligent Content Caching (ICC) and Random Caching (RC). The results show that the proposed scheme serves more content-requesting vehicles than ICC and RC, downloading 33% and 41% more data in 28% and 35% less time than the ICC and RC schemes, respectively.
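For context, the sketch below fills an RSU cache greedily by popularity per byte under a capacity limit; this is a generic baseline included only to make the capacity/popularity trade-off concrete, not the caching algorithm proposed in the paper, and the catalog entries are hypothetical.

```python
def fill_rsu_cache(contents, capacity_bytes):
    """Greedy baseline: pick contents by popularity density (requests per byte) until the RSU is full.

    contents: iterable of (content_id, size_bytes, expected_requests)
    """
    chosen, used = [], 0
    for cid, size, requests in sorted(contents, key=lambda c: c[2] / c[1], reverse=True):
        if used + size <= capacity_bytes:
            chosen.append(cid)
            used += size
    return chosen

catalog = [("map_update", 200, 50), ("movie_a", 900, 60), ("news_clip", 100, 40), ("movie_b", 800, 30)]
print(fill_rsu_cache(catalog, capacity_bytes=1000))  # ['news_clip', 'map_update']
```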
Keywords: vehicular networks, fog computing, content caching, infotainment services
12. A knowledge graph-based reinforcement learning approach for cooperative caching in MEC-enabled heterogeneous networks
Authors: Dan Wang, Yalu Bai, Bin Song. 《Digital Communications and Networks》, 2025, No. 4, pp. 1236-1244 (9 pages).
Existing wireless networks are flooded with video data transmissions, and the demand for high-speed and low-latency video services continues to surge. This brings challenges to networks in the form of congestion as well as the need for more resources and more dedicated caching schemes. Recently, Multi-access Edge Computing (MEC)-enabled heterogeneous networks, which leverage edge caches for proximity delivery, have emerged as a promising solution to these problems. Designing an effective edge caching scheme is critical to their success, however, in the face of limited resources. We propose a novel Knowledge Graph (KG)-based Dueling Deep Q-Network (KG-DDQN) for cooperative caching in MEC-enabled heterogeneous networks. The KG-DDQN scheme leverages a KG to uncover video relations, providing valuable insights into user preferences for the caching scheme. Specifically, the KG guides the selection of related videos as caching candidates (i.e., actions in the DDQN), thus providing a rich reference for implementing a personalized caching scheme while also improving the decision efficiency of the DDQN. Extensive simulation results validate the convergence of KG-DDQN, which also outperforms the baselines in terms of cache hit rate and service delay.
Keywords: multi-access edge computing, cooperative caching, resource allocation, knowledge graph, reinforcement learning
13. A Hierarchical-Based Sequential Caching Scheme in Named Data Networking
Authors: Zhang Junmin, Jin Jihuan, Hou Rui, Dong Mianxiong, Kaoru Ota, Zeng Deze. 《China Communications》, 2025, No. 5, pp. 48-60 (13 pages).
Named data networking (NDN) is an idealized deployment of information-centric networking (ICN) that has attracted attention from scientists and scholars worldwide. A distributed in-network caching scheme can efficiently realize load balancing. However, such a ubiquitous caching approach may cause problems including duplicate caching and low data diversity, thus reducing the caching efficiency of NDN routers. To mitigate these caching problems and improve NDN caching efficiency, this paper proposes a hierarchical-based sequential caching (HSC) scheme. In this scheme, the NDN routers on the data transmission path are divided into levels, and data with different request frequencies are cached at distinct router levels. The aim is to cache data with high request frequencies in the router closest to the content requester, to increase the response probability of nearby data, improve the data caching efficiency of named data networks, shorten response time, and reduce cache redundancy. Simulation results show that this scheme can effectively improve the cache hit rate (CHR) and reduce the average request delay (ARD) and average route hops (ARH).
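To illustrate the level assignment the abstract describes, here is a small Python sketch that ranks contents by request frequency and caches the hottest ones at the router level closest to the requester; the per-level capacities and the simple rank-based split are assumptions made for illustration, not the HSC algorithm itself.

```python
def assign_to_levels(request_counts, level_capacities):
    """Map the most-requested content to level 0 (closest to the requester), then level 1, and so on.

    request_counts: dict of content_name -> request frequency
    level_capacities: list of how many contents each router level can hold
    """
    ranked = sorted(request_counts, key=request_counts.get, reverse=True)
    placement, cursor = {}, 0
    for level, capacity in enumerate(level_capacities):
        for name in ranked[cursor:cursor + capacity]:
            placement[name] = level
        cursor += capacity
    return placement

counts = {"a": 90, "b": 70, "c": 40, "d": 10, "e": 5}
print(assign_to_levels(counts, level_capacities=[2, 2, 1]))
# {'a': 0, 'b': 0, 'c': 1, 'd': 1, 'e': 2}
```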
Keywords: hierarchical router, named data networking, sequential caching
14. Mobility-Aware Edge Caching with Transformer-DQN in D2D-Enabled Heterogeneous Networks
Authors: Yiming Guo, Hongyu Ma. 《Computers, Materials & Continua》, 2025, No. 11, pp. 3485-3505 (21 pages).
In dynamic 5G network environments, user mobility and heterogeneous network topologies pose dual challenges to improving the performance of mobile edge caching. Existing studies often overlook the dynamic nature of user locations and the potential of device-to-device (D2D) cooperative caching, limiting the reduction of transmission latency. To address this issue, this paper proposes a joint optimization scheme for edge caching that integrates user mobility prediction with deep reinforcement learning. First, a Transformer-based geolocation prediction model is designed, leveraging multi-head attention to capture correlations in historical user trajectories for accurate future location prediction. Then, within a three-tier heterogeneous network, we formulate a latency minimization problem under a D2D cooperative caching architecture and develop a mobility-aware Deep Q-Network (DQN) caching strategy. This strategy takes predicted location information as state input and dynamically adjusts the content distribution across small base stations (SBSs) and mobile users (MUs) to reduce end-to-end delay in multi-hop content retrieval. Simulation results show that the proposed DQN-based method outperforms baseline strategies across various metrics, achieving a 17.2% reduction in transmission delay compared with DQN methods that do not incorporate mobility, validating the effectiveness of jointly optimizing location prediction and caching decisions.
Keywords: mobile edge caching, D2D heterogeneous networks, deep reinforcement learning, Transformer model, transmission delay optimization
15. R-Memcached: A Reliable In-Memory Cache for Big Key-Value Stores
Authors: Chengjian Liu, Kai Ouyang, Xiaowen Chu, Hai Liu, Yiu-Wing Leung. 《Tsinghua Science and Technology》 (SCIE, EI, CAS, CSCD), 2015, No. 6, pp. 560-573 (14 pages).
Large-scale key-value stores are widely used in many Web-based systems to store huge amounts of data as (key, value) pairs. To reduce the latency of accessing such (key, value) pairs, an in-memory cache system is usually deployed between the front-end Web system and the back-end database system. In practice, a cache system may consist of a number of server nodes, and fault tolerance is a critical feature for maintaining latency Service-Level Agreements (SLAs). In this paper, we present the design, implementation, analysis, and evaluation of R-Memcached, a reliable in-memory key-value cache system built on top of the popular Memcached software. R-Memcached exploits coding techniques to achieve reliability and can tolerate up to two node failures. Our experimental results show that R-Memcached maintains very good latency and throughput performance even during node failures.
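R-Memcached tolerates up to two node failures through coding; a full two-failure code (e.g., Reed-Solomon) is beyond a short sketch, so the Python below shows only the simpler single-parity idea behind coded redundancy: one XOR parity block lets any single lost data block be rebuilt. The node layout and block names are illustrative assumptions, not R-Memcached's actual scheme.

```python
def xor_blocks(*blocks: bytes) -> bytes:
    """Byte-wise XOR of equal-length blocks; used both to build parity and to rebuild a lost block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Values stored on two cache nodes, plus one parity block kept on a third node.
node_a, node_b = b"value-A1", b"value-B2"
parity = xor_blocks(node_a, node_b)

# If node A fails, its value is recovered from the surviving node and the parity.
recovered_a = xor_blocks(parity, node_b)
assert recovered_a == node_a
```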
Keywords: in-memory cache, fault tolerance, key-value store
16. Data Storage and Query Optimization in the Caché Database (Cited by: 2)
Authors: 牛彩云, 王建林, 光奇, 樊睿. 《信息技术与信息化》, 2024, No. 1, pp. 17-21 (5 pages).
The multidimensional data model of the Caché database can store rich data and, when handling complex medical data, eliminates processing such as table joins, allowing multidimensional arrays to access data faster. Compared with mainstream relational databases such as Oracle and SQL Server, Caché differs mainly in its storage structure: data is stored primarily in the form of Globals, and applications are developed in the M language. This paper first introduces how data is stored in Caché; it then shows several ways of querying data in Caché and their use cases within a hospital HIS system; finally, it summarizes several approaches to SQL optimization in Caché. The results show that Caché offers higher flexibility, suits many application scenarios, and that query efficiency improves many times over once optimized query schemes are adopted.
Keywords: Caché database, multidimensional data model, query optimization, SQL statements, data storage
17. Cloning and Functional Analysis of the Pepper Chalcone Synthase Gene CaCHS02 (Cited by: 1)
Authors: 王小迪, 李宁, 高升华, 尹延旭, 徐凯, 詹晓慧, 姚明华, 王飞. 《辣椒杂志》, 2024, No. 3, pp. 1-10 (10 pages).
Chalcone synthase (CHS) genes play an important role in flavonoid metabolism in plants. To study the function of the chalcone synthase gene CaCHS02 in flavonoid metabolism in pepper (Capsicum annuum L.), this study analyzed the gene features, protein characteristics, and expression patterns of CaCHS02, constructed a CaCHS02 overexpression vector, and obtained pepper plants transiently overexpressing CaCHS02 (35S:CaCHS02) via Agrobacterium-mediated transformation. The results showed that CaCHS02 was significantly overexpressed in the leaves of the transient-overexpression plants, and the expression levels of key enzyme genes of the flavonoid metabolic pathway (CaCHS02, CaPAL, CaC4H, Ca4CL, CaCHI, CaFLS, and CaF3H) were significantly up-regulated in parallel; overexpression of CaCHS02 increased chalcone synthase activity, total flavonoid content, and the α-glucosidase inhibitory activity of pepper leaves. The study indicates that CaCHS02 plays a positive regulatory role in pepper flavonoid metabolism, and that overexpressing CaCHS02 enhances the α-glucosidase inhibitory activity of pepper leaves. This work lays a foundation for elucidating the biosynthesis mechanism of α-glucosidase inhibitors in pepper and provides theoretical support for breeding functional pepper varieties with high α-glucosidase inhibitory activity.
Keywords: pepper, chalcone synthase, CaCHS02, flavonoids, α-glucosidase inhibitory activity
18. A Partitioning Algorithm for the Shared Cache of Multi-Core Processors (Cited by: 1)
Authors: 吕海玉, 罗广, 朱嘉炜, 张凤登. 《电子科技》, 2024, No. 9, pp. 27-33 (7 pages).
To optimize the performance of multi-core processors, this paper studies management strategies for the shared Cache on multi-core processors and proposes MT-FTP (Memory Time based Fair and Throughput Partitioning), a shared-Cache partitioning algorithm based on cache-time fairness and throughput. A mathematical model is built around the two evaluation metrics, fairness and throughput, and the partitioning flow of the algorithm is analyzed. Simulation results show that MT-FTP performs well in system throughput: its average IPC (instructions per cycle) is 1.3% higher than that of the UCP (utility-based cache partitioning) algorithm and 11.6% higher than that of LRU (least recently used). The average system fairness of MT-FTP is 17% higher than that of LRU and 16.5% higher than that of UCP. The algorithm achieves fair partitioning of the shared Cache while also maintaining system throughput.
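The trade-off MT-FTP targets can be made concrete with a tiny brute-force search over way allocations for two cores, scoring each split by a weighted sum of throughput (total IPC) and a fairness term; the IPC-versus-ways curves, the fairness definition, and the weights below are invented for illustration and are not the MT-FTP procedure itself.

```python
def best_partition(ipc_curve_a, ipc_curve_b, total_ways=8, w_throughput=0.5, w_fairness=0.5):
    """Enumerate way splits between two cores and pick the one scoring best on throughput + fairness.

    ipc_curve_x[k] = estimated IPC of core x when it owns k of the shared cache's ways.
    """
    best = None
    for ways_a in range(1, total_ways):
        ipc_a, ipc_b = ipc_curve_a[ways_a], ipc_curve_b[total_ways - ways_a]
        throughput = ipc_a + ipc_b
        fairness = min(ipc_a, ipc_b) / max(ipc_a, ipc_b)  # 1.0 means perfectly balanced
        score = w_throughput * throughput + w_fairness * fairness
        if best is None or score > best[0]:
            best = (score, ways_a, total_ways - ways_a)
    return best

# Hypothetical IPC curves: core A is cache-hungry, core B saturates early.
curve_a = [0.0, 0.4, 0.7, 0.9, 1.1, 1.2, 1.3, 1.35, 1.4]
curve_b = [0.0, 0.8, 1.0, 1.05, 1.1, 1.1, 1.1, 1.1, 1.1]
print(best_partition(curve_a, curve_b))  # (score, ways for core A, ways for core B)
```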
Keywords: chip multiprocessor, memory wall, partitioning, fairness, throughput, shared Cache, cache time, integrated computer
19. Multilayer Satellite Network Collaborative Mobile Edge Caching: A GCN-Based Multi-Agent Approach (Cited by: 1)
Authors: Yang Jie, He Jingchao, Cheng Nan, Yin Zhisheng, Han Dairu, Zhou Conghao, Sun Ruijin. 《China Communications》 (SCIE, CSCD), 2024, No. 11, pp. 56-74 (19 pages).
With the explosive growth of high-definition video streaming data, a substantial increase in network traffic has ensued. The emergence of mobile edge caching (MEC) can not only alleviate the burden on the core network but also significantly improve user experience. By integrating MEC with satellite networks, the network can deliver popular content ubiquitously and seamlessly. Addressing the research gap between multilayer satellite networks and MEC, we study the caching placement problem in this paper. Initially, we introduce a three-layer distributed network caching management architecture designed for efficient and flexible handling of large-scale networks. Considering the constraints on satellite capacity and content propagation delay, the cache placement problem is then formulated and transformed into a Markov decision process (MDP), where a content coded-caching mechanism is utilized to promote the efficiency of content delivery. Furthermore, a new generic metric, content delivery cost, is proposed to characterize the performance of caching decisions in large-scale networks. Then, we introduce a graph convolutional network (GCN)-based multi-agent advantage actor-critic (A2C) algorithm to optimize the caching decision. Finally, extensive simulations are conducted to evaluate the proposed algorithm in terms of content delivery cost and transferability.
Keywords: cache placement, coded caching, graph convolutional network (GCN), mobile edge caching (MEC), multilayer satellite network
20. Deep Reinforcement Learning-Based Task Offloading and Service Migrating Policies in Service Caching-Assisted Mobile Edge Computing (Cited by: 1)
Authors: Ke Hongchang, Wang Hui, Sun Hongbin, Halvin Yang. 《China Communications》 (SCIE, CSCD), 2024, No. 4, pp. 88-103 (16 pages).
Emerging mobile edge computing (MEC) is considered a feasible solution for offloading the computation-intensive request tasks generated by mobile wireless equipment (MWE) with limited computational resources and energy. Because the request tasks from one MWE are homogeneous over a long-term period, it is vital to pre-deploy the particular service cachings required by those tasks at the MEC server. In this paper, we model a service caching-assisted MEC framework that takes into account the constraint on the number of service cachings hosted by each edge server and the migration of request tasks from the current edge server to another edge server hosting the service caching required by the tasks. Furthermore, we propose a multi-agent deep reinforcement learning-based computation offloading and task migrating decision-making scheme (MBOMS) to minimize the long-term average weighted cost. The proposed MBOMS learns a near-optimal offloading and migrating policy through centralized training and decentralized execution. Systematic and comprehensive simulation results reveal that MBOMS converges well after training and outperforms the other five baseline algorithms.
Keywords: deep reinforcement learning, mobile edge computing, service caching, service migrating