Journal Articles
1,417 articles found
1. Age-Optimal Cached Distribution in the Satellite-Integrated Internet of Things via Cross-Slot Directed Graph
Authors: Hu Zhouyong, Li Yue, Zhang Hanxu, Yang Zhihua. China Communications, 2025, No. 6, pp. 300-318 (19 pages).
In the Satellite-Integrated Internet of Things (S-IoT), data freshness in time-sensitive scenarios cannot be guaranteed over the time-varying topology with current distribution strategies that only aim to reduce transmission delay. To address this problem, this paper proposes an age-optimal caching distribution mechanism for high-timeliness data collection in S-IoT. It adopts a freshness metric, the age of information (AoI), applied to caching-based single-source multi-destination (SSMD) transmission, namely Multi-AoI, over a well-designed cross-slot directed graph (CSG). With the proposed CSG, the locations of cache nodes are optimized by solving a nonlinear integer programming problem that minimizes Multi-AoI. In particular, three algorithms are put forward to improve Multi-AoI: the minimum queuing delay algorithm (MQDA) based on node deviation from the average level, the minimum propagation delay algorithm (MPDA) based on node propagation delay reduction, and a delay-balanced algorithm (DBA) based on both node deviation from the average level and propagation delay reduction. Simulation results show that the proposed mechanism effectively improves information freshness compared with a random selection algorithm.
Keywords: age of information; cached distribution; satellite-integrated internet of things; time-varying graph
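The age-of-information metric referenced above follows a standard sawtooth pattern: the age of cached data grows linearly with time and drops to the age of the newest delivered update when a fresh packet arrives. The following is a minimal bookkeeping sketch of that behavior only; it is not the paper's Multi-AoI formulation or its cross-slot directed graph.

```python
# Minimal age-of-information (AoI) bookkeeping: age grows with time and
# resets when a fresher update is delivered. Illustrative only.
def simulate_aoi(delivery_times, generation_times, horizon, dt=1.0):
    """Track AoI over time for matched (generation, delivery) update pairs."""
    events = sorted(zip(delivery_times, generation_times))
    aoi, t, i, trace = 0.0, 0.0, 0, []
    while t <= horizon:
        # On delivery, age drops to (current time - generation time of the update).
        while i < len(events) and events[i][0] <= t:
            aoi = min(aoi, t - events[i][1])
            i += 1
        trace.append((t, round(aoi, 2)))
        aoi += dt          # age keeps growing between deliveries
        t += dt
    return trace

# Updates generated at t = 0, 4, 9 and delivered at t = 2, 6, 12.
print(simulate_aoi(delivery_times=[2, 6, 12], generation_times=[0, 4, 9], horizon=15))
```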
2. Application of the Memcached Distributed Cache System (Cited: 2)
Author: Chang Guangyan. 电脑编程技巧与维护, 2017, No. 7, pp. 24-25 (2 pages).
Memcached is a high-performance distributed in-memory object caching system commonly used in dynamic Web applications to reduce database load. By caching data and objects in memory, it reduces the number of database reads, thereby lightening the database access load, speeding up website response, and improving system query performance, so that the distributed system no longer has to manage data caching itself and gains better scalability.
Keywords: distributed cache; Cache technology; database
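The usage pattern described above is the common cache-aside (read-through) flow: check the cache, fall back to the database on a miss, then populate the cache. A minimal sketch using the third-party `pymemcache` client, assuming a local memcached server on the default port; `load_from_database` is a hypothetical placeholder.

```python
# Cache-aside lookup against memcached; assumes memcached on localhost:11211
# and the third-party pymemcache client.
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def load_from_database(key: str) -> bytes:
    # Hypothetical placeholder for the real database query.
    return f"value-for-{key}".encode()

def get_with_cache(key: str, ttl: int = 300) -> bytes:
    value = cache.get(key)                 # 1. try the in-memory cache
    if value is None:                      # 2. miss: query the database
        value = load_from_database(key)
        cache.set(key, value, expire=ttl)  # 3. populate the cache for later reads
    return value

print(get_with_cache("user:42"))
```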
3. A Multi-Objective Deep Reinforcement Learning Algorithm for Computation Offloading in Internet of Vehicles
Authors: Junjun Ren, Guoqiang Chen, Zheng-Yi Chai, Dong Yuan. Computers, Materials & Continua, 2026, No. 1, pp. 2111-2136 (26 pages).
Vehicle Edge Computing (VEC) and Cloud Computing (CC) significantly enhance the processing efficiency of delay-sensitive and computation-intensive applications by offloading compute-intensive tasks from resource-constrained onboard devices to nearby Roadside Units (RSUs), thereby achieving lower delay and energy consumption. However, the limited storage capacity and energy budget of RSUs make it challenging to meet the demands of the highly dynamic Internet of Vehicles (IoV) environment, so determining reasonable service caching and computation offloading strategies is crucial. To address this, this paper proposes a joint service caching scheme for cloud-edge collaborative IoV computation offloading. By modeling the dynamic optimization problem as a Markov Decision Process (MDP), the scheme jointly optimizes task delay, energy consumption, load balancing, and privacy entropy to achieve better quality of service. Additionally, a dynamic adaptive multi-objective deep reinforcement learning algorithm is proposed: each Double Deep Q-Network (DDQN) agent obtains rewards for different objectives based on distinct reward functions and dynamically updates the objective weights by learning the value changes between objectives with Radial Basis Function Networks (RBFNs), thereby efficiently approximating the Pareto-optimal decisions for multiple objectives. Extensive experiments demonstrate that the proposed algorithm better coordinates the three-tier computing resources of cloud, edge, and vehicles; compared with existing algorithms, it reduces task delay and energy consumption by 10.64% and 5.1%, respectively.
Keywords: Deep reinforcement learning; internet of vehicles; multi-objective optimization; cloud-edge computing; computation offloading; service caching
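The dynamic objective weighting in the abstract can be pictured as a scalarization step: per-objective rewards (delay, energy, load balance, privacy entropy) are combined with weights that the learner adjusts over time. A toy sketch of that combination only; the RBFN-based weight update and the DDQN agents themselves are not reproduced, and the numeric values are illustrative.

```python
# Toy scalarization of per-objective rewards into one training signal.
# Weight values are illustrative, not the RBFN-learned weights of the paper.
def scalarize(rewards: dict, weights: dict) -> float:
    """Weighted combination of per-objective rewards, normalized by total weight."""
    total = sum(weights.values())
    return sum(weights[k] * rewards[k] for k in rewards) / total

step_rewards = {"delay": -0.8, "energy": -0.3, "load_balance": 0.5, "privacy": 0.2}
weights      = {"delay": 0.4, "energy": 0.2, "load_balance": 0.2, "privacy": 0.2}
print(scalarize(step_rewards, weights))
```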
4. R-Memcached: A Reliable In-Memory Cache for Big Key-Value Stores
Authors: Chengjian Liu, Kai Ouyang, Xiaowen Chu, Hai Liu, Yiu-Wing Leung. Tsinghua Science and Technology (SCIE, EI, CAS, CSCD), 2015, No. 6, pp. 560-573 (14 pages).
Large-scale key-value stores are widely used in many Web-based systems to store huge amounts of data as (key, value) pairs. To reduce the latency of accessing such (key, value) pairs, an in-memory cache system is usually deployed between the front-end Web system and the back-end database system. In practice, a cache system may consist of a number of server nodes, and fault tolerance is a critical feature for maintaining the latency Service-Level Agreements (SLAs). This paper presents the design, implementation, analysis, and evaluation of R-Memcached, a reliable in-memory key-value cache system built on top of the popular Memcached software. R-Memcached exploits coding techniques to achieve reliability and can tolerate up to two node failures. Experimental results show that R-Memcached maintains very good latency and throughput performance even during node failures.
Keywords: in-memory cache; fault tolerance; key-value store
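The abstract only states that coding techniques provide tolerance of up to two node failures without giving the code construction. As a generic illustration of the underlying idea (not R-Memcached's actual scheme), a single XOR parity block over a group of cached values lets any one lost value be rebuilt:

```python
# Generic single-parity illustration: the XOR of the values held by a group
# of cache nodes can rebuild any one lost value. This is NOT R-Memcached's
# actual code, which tolerates up to two failures.
from functools import reduce

def xor_parity(blocks):
    length = max(len(b) for b in blocks)
    padded = [b.ljust(length, b"\x00") for b in blocks]
    return bytes(reduce(lambda x, y: x ^ y, col) for col in zip(*padded))

values = [b"alpha", b"bravo", b"charlie"]   # values on three cache nodes
parity = xor_parity(values)                 # stored on a parity node

lost = 1                                    # node 1 fails
survivors = [v for i, v in enumerate(values) if i != lost] + [parity]
recovered = xor_parity(survivors).rstrip(b"\x00")  # zero-padding stripped for display
print(recovered)                            # b'bravo'
```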
5. A Dedicated Accelerator Architecture Tailored to Particle Transport Simulation (Cited: 1)
Authors: Zhang Jianmin, Liu Jinjin, Xu Weikang, Li Tiejun. 国防科技大学学报 (PKU Core), 2025, No. 2, pp. 155-164 (10 pages).
Particle transport simulation is a major application of high-performance computers. Facing its ever-growing computational scale, general-purpose microprocessors, with their complex single-core structures, cannot adapt to the program characteristics and struggle to achieve a high performance-to-power ratio. Therefore, the program characteristics of non-deterministic numerical simulation for particle transport are extracted and analyzed. Based on these algorithmic characteristics, an open-source microprocessor core architecture is customized, covering the accelerator pipeline structure, the branch prediction unit, the multi-level cache hierarchy, and the main memory design, to build a dedicated accelerator architecture that matches the characteristics of particle transport programs. Simulation of the particle transport program on a widely used architecture simulator shows that, compared with the ARM Cortex-A15, the proposed dedicated accelerator architecture achieves a 4.6x performance improvement at the same power consumption and a 3.2x improvement at the same area.
Keywords: particle transport simulation; dedicated accelerator; program characteristics; branch prediction; multi-level Cache
6. Design and Implementation of a High-Performance PCIe Interface
Authors: Zhang Meijuan, Xin Kunpeng, Zhou Qian. 现代电子技术 (PKU Core), 2025, No. 8, pp. 70-74 (5 pages).
Several processors achieve less than 20% of the theoretical bandwidth under PCIe 2.0 x4, with a maximum of only 380 MB/s, which cannot meet practical application requirements. To solve the problem of low transfer rates on embedded processors' PCIe interfaces, a high-performance PCIe interface is designed that effectively improves the data transfer rate. After a system-level analysis of performance bottlenecks, a PCIe DMA and processor cache coherence function is added, which removes the heavy software cache synchronization overhead after DMA transfers complete and raises the rate by 3.8x to 1,450 MB/s. In the hardware design, the DMA supports a linked-list mode: descriptor lists gather scattered memory regions so that a single DMA launch can transfer data across multiple non-contiguous memory addresses. The scatter-gather DMA implementation in the software driver is also optimized to make full use of the hardware cache coherence function, further improving the transfer rate by 10% and finally reaching 80% of the theoretical PCIe 2.0 x4 bandwidth. In addition, the PCIe interface adopts a multi-channel DMA design supporting up to 8 independent DMA read/write channels, suitable for multi-core, multi-task parallel data transfer scenarios and further increasing the overall data transfer bandwidth. Verification shows that the PCIe interface is stable and efficient, supports up to 8-channel parallel data transfer, and achieves 80% of the theoretical rate on a single channel.
Keywords: PCIe interface; DMA controller; high-speed data transfer; Cache coherence; multi-channel design; scatter-gather; linked-list mode
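The linked-list (descriptor chain) DMA mode described above can be pictured as a chain of descriptors, each naming one non-contiguous buffer, which the engine walks in a single launch. A schematic software model of the data structure only, purely illustrative and not this design's register layout:

```python
# Schematic scatter-gather descriptor chain: each entry points at one
# non-contiguous buffer; one DMA launch walks the whole chain.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Descriptor:
    address: int                          # physical address of the buffer (illustrative)
    length: int                           # bytes to transfer
    next: Optional["Descriptor"] = None   # next descriptor in the chain

def total_bytes(head: Descriptor) -> int:
    """Walk the chain the way the DMA engine would, summing the payload."""
    n, d = 0, head
    while d is not None:
        n += d.length
        d = d.next
    return n

chain = Descriptor(0x8000_0000, 4096,
        Descriptor(0x8010_0000, 8192,
        Descriptor(0x8020_0000, 2048)))
print(total_bytes(chain))   # 14336 bytes moved in one launch
```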
7. Design and Verification of a Configurable L1 Instruction Cache for High-Performance DSPs
Authors: Tang Junlong, Gao Ruixi. 集成电路与嵌入式系统, 2025, No. 5, pp. 24-34 (11 pages).
To address the problem that the cache cannot effectively predict non-local accesses during program execution, a high-security configurable L1 instruction cache design based on a two-level storage structure is proposed. The scheme ensures data security for users of different privilege levels through memory protection at two granularities, pages and cache lines; it implements internal control registers and a flexibly configurable Cache/SRAM structure that supports rapid configuration and extension; and it uses a direct memory access module for efficient interaction with external storage. Module-level verification was performed on a UVM platform, hit rates under different L1P size configurations were compared, and the latency and power of the system were verified with a 40 nm low-threshold library. Experimental results show that the designed cache can switch quickly among five L1P configurations from 32 KB down to 0 KB and meets the requirements of a 600 MHz high-performance DSP, with a maximum path delay of 1.47 ns and a total power consumption of 309.577 mW.
Keywords: L1 instruction cache; UVM verification; memory protection; DSP; Cache
8. An Instruction Cache Design Scheme for Embedded Microcontrollers
Authors: Wang Rui, Zhang Yanhua. 电子制作, 2025, No. 4, pp. 22-25 (4 pages).
In traditional instruction cache designs, the size of the VTag memory depends on the processor bus width as well as the number of cache lines and the cache line size. This paper proposes an optimized instruction cache circuit design: by reducing the instruction-fetch address space covered by the instruction cache, the VTag memory is further shrunk without affecting the bus width of the cache's external interface or the number and size of cache lines. The instruction cache was synthesized with Design Compiler (DC), and the results show that, compared with the traditional scheme, the proposed design reduces chip area while significantly increasing the circuit frequency, which is of practical value for instruction cache design in embedded microcontrollers.
Keywords: embedded; microcontroller; processor; instruction Cache
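For a direct-mapped cache, the tag width follows from the address bits left over after the index and offset fields, so shrinking the fetch address space the cache must cover removes high-order address bits and therefore tag bits. A back-of-the-envelope calculation under those standard assumptions; the paper's exact VTag organization is not given here, and the sizes below are made up.

```python
# Tag width of a direct-mapped cache over a given covered fetch address space.
from math import log2

def tag_bits(covered_addr_bits: int, num_lines: int, line_bytes: int) -> int:
    index_bits = int(log2(num_lines))    # selects the cache line
    offset_bits = int(log2(line_bytes))  # selects the byte within the line
    return covered_addr_bits - index_bits - offset_bits

# 32-bit bus vs. a reduced 24-bit covered fetch window, 256 lines of 32 B each:
full    = tag_bits(32, 256, 32)   # 32 - 8 - 5 = 19 tag bits per line
reduced = tag_bits(24, 256, 32)   # 24 - 8 - 5 = 11 tag bits per line
print(full, reduced)
```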
9. Rubyphi: Automated Model Checking of Cache Coherence Protocols for gem5
Authors: Xu Xuezheng, Fang Jian, Liang Shaojie, Wang Lu, Huang Anwen, Sui Jinggao, Li Qiong. 计算机工程与科学 (PKU Core), 2025, No. 7, pp. 1141-1151 (11 pages).
Cache coherence protocols guarantee data consistency in multi-core systems and directly affect memory subsystem performance, making them a focus of microprocessor design and verification. The design and optimization of cache coherence protocols is usually carried out quickly with software simulators such as gem5. At the same time, because protocol design errors are hard to trigger, locate, and fix in simulation-based testing, formal verification with model checking tools such as Murphi is required. However, simulator-based protocol design and model-checking-based verification differ greatly in programming language and abstraction level, so designers must build the simulator implementation and the model-checking model separately, which not only increases time cost but also puts their equivalence at risk. This paper designs and implements Rubyphi, an automated model-checking method for cache coherence protocols in the gem5 simulator: it extracts the protocols implemented in gem5, automatically builds the corresponding Murphi model, and then formally verifies the protocols. Experiments show that Rubyphi effectively models and verifies the coherence protocols in gem5 and has discovered two bugs in existing gem5 protocols; the issues and their fixes have been confirmed by the community.
Keywords: Cache coherence protocol; multi-core processor; model checking; formal verification
10. FlyCache: Recommendation-driven edge caching architecture for the full life cycle of video streaming
Authors: Shaohua Cao, Quancheng Zheng, Zijun Zhan, Yansheng Yang, Huaqi Lv, Danyang Zheng, Weishan Zhang. Digital Communications and Networks, 2025, No. 4, pp. 961-973 (13 pages).
With the rapid development of 5G technology, the proportion of video traffic on the Internet is increasing, bringing pressure on the network infrastructure. Edge computing technology provides a feasible solution for optimizing video content distribution. However, limited edge node cache capacity and dynamic user requests make edge caching more complex. Therefore, we propose FlyCache, a recommendation-driven edge caching network architecture for the full life cycle of video streaming, designed to improve users' Quality of Experience (QoE) and reduce backhaul traffic consumption. FlyCache implements intelligent caching management across three key stages: before playback, during playback, and after playback. Specifically, we introduce a cache placement policy for the before-playback stage, a dynamic prefetching and cache admission policy for the during-playback stage, and a progressive cache eviction policy for the after-playback stage. To validate the effectiveness of FlyCache, we developed a user behavior-driven edge caching simulation framework incorporating recommendation mechanisms. Experiments on the MovieLens and synthetic datasets demonstrate that FlyCache outperforms other caching strategies in terms of byte hit rate, backhaul traffic, and delayed startup rate.
Keywords: Edge caching; Cache architecture; Cache placement; Cache admission; Caching eviction
11. Resource Allocation of UAV-Assisted Mobile Edge Computing Systems with Caching
Authors: Pu Dan, Feng Wenjiang, Zhang Juntao. China Communications, 2025, No. 10, pp. 269-279 (11 pages).
In this paper, an unmanned aerial vehicle (UAV) serves as an aerial base station (ABS) and mobile edge computing (MEC) platform for wireless communication systems. When Internet of Things devices (IoTDs) cannot cope with computation-intensive and/or time-sensitive tasks, part of the tasks is offloaded to the UAV side, and the UAV processes them with its own computing and caching resources. The burden on the IoTDs is thus relieved while quality of service (QoS) requirements are satisfied. However, owing to the limited resources of the UAV, the cost of the whole system, defined as the weighted sum of energy consumption and time delay with caching, should be further optimized while the objective function and constraints are non-convex. Therefore, we first jointly optimize communication resources B, computing resources F, and offloading rates X with alternating iteration and convex optimization methods, and then determine the caching decision Y with a branch-and-bound (BB) algorithm. Numerical results show that UAV-assisted partial task offloading with content caching is superior to local computing and to a full offloading mechanism without caching, and the cost of the whole system is further reduced by the proposed scheme.
Keywords: Caching; MEC; resource allocation; UAV
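The system cost referred to above is stated to be a weighted sum of energy consumption and delay over the decision variables B, F, X, and Y. Written out in generic form with placeholder symbols (the weights and per-term definitions below are illustrative, not the paper's exact notation):

```latex
% Generic weighted-sum cost for partial offloading with caching; omega, E, T
% are placeholder symbols, while B, F, X, Y follow the abstract's variables.
C(\mathbf{B},\mathbf{F},\mathbf{X},\mathbf{Y})
  = \omega_{E}\, E_{\mathrm{total}}(\mathbf{B},\mathbf{F},\mathbf{X},\mathbf{Y})
  + \omega_{T}\, T_{\mathrm{total}}(\mathbf{B},\mathbf{F},\mathbf{X},\mathbf{Y}),
  \qquad \omega_{E} + \omega_{T} = 1 .
```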
12. Utility-Driven Edge Caching Optimization with Deep Reinforcement Learning under Uncertain Content Popularity
Authors: Mingoo Kwon, Kyeongmin Kim, Minseok Song. Computers, Materials & Continua, 2025, No. 10, pp. 519-537 (19 pages).
Efficient edge caching is essential for maximizing utility in video streaming systems, especially under constraints such as limited storage capacity and dynamically fluctuating content popularity. Utility, defined as the benefit obtained per unit of cache bandwidth usage, degrades when static or greedy caching strategies fail to adapt to changing demand patterns. To address this, we propose a deep reinforcement learning (DRL)-based caching framework built upon the proximal policy optimization (PPO) algorithm. Our approach formulates edge caching as a sequential decision-making problem and introduces a reward model that balances cache hit performance and utility by prioritizing high-demand, high-quality content while penalizing degraded quality delivery. We construct a realistic synthetic dataset that captures both temporal variations and shifting content popularity to validate our model. Experimental results demonstrate that the proposed method improves utility by up to 135.9% and achieves an average improvement of 22.6% compared with traditional greedy algorithms and long short-term memory (LSTM)-based prediction models. Moreover, our method performs consistently well across a variety of utility functions, workload distributions, and storage limitations, underscoring its adaptability and robustness in dynamic video caching environments.
Keywords: Edge caching; video-on-demand; reinforcement learning; utility optimization
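Given the stated definition of utility as benefit per unit of cache bandwidth used, a per-decision reward in the spirit of the abstract might credit hits on high-demand, high-quality items and penalize degraded delivery. This is a hedged sketch with made-up coefficients, not the paper's PPO reward model.

```python
# Toy reward in the spirit of "benefit per unit cache bandwidth": reward hits
# on popular, high-quality content and penalize degraded delivery.
def step_reward(hit: bool, demand: float, quality: float,
                bandwidth_used: float, degraded: bool) -> float:
    benefit = demand * quality if hit else 0.0
    utility = benefit / max(bandwidth_used, 1e-9)   # benefit per unit bandwidth
    penalty = 0.5 if degraded else 0.0              # illustrative penalty weight
    return utility - penalty

print(step_reward(hit=True, demand=0.8, quality=1.0,
                  bandwidth_used=2.0, degraded=False))
```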
13. An Efficient Content Caching Strategy for Fog-Enabled Road Side Units in Vehicular Networks
Authors: Faareh Ahmed, Babar Mansoor, Muhammad Awais Javed, Abdul Khader Jilani Saudagar. Computer Modeling in Engineering & Sciences, 2025, No. 9, pp. 3783-3804 (22 pages).
Vehicular networks enable seamless connectivity for exchanging emergency and infotainment content. However, retrieving infotainment data from remote servers often introduces high delays, degrading the Quality of Service (QoS). To overcome this, caching frequently requested content at fog-enabled Road Side Units (RSUs) reduces communication latency. Yet the limited caching capacity of RSUs makes it impractical to store all contents with varying sizes and popularity. This research proposes an efficient content caching algorithm that adapts to dynamic vehicular demands on highways to maximize request satisfaction. The scheme is evaluated against Intelligent Content Caching (ICC) and Random Caching (RC). The results show that the proposed scheme serves more content-requesting vehicles than ICC and RC, delivering 33% and 41% more downloaded data in 28% and 35% less time than the ICC and RC schemes, respectively.
Keywords: Vehicular networks; fog computing; content caching; infotainment services
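With contents of different sizes and popularities competing for a fixed RSU cache, a common baseline is to fill the cache greedily by popularity per byte. The sketch below shows only that baseline, not the paper's adaptive highway algorithm or the ICC/RC schemes it is compared against; the catalog entries are invented.

```python
# Greedy baseline: fill a fixed-size RSU cache by popularity per megabyte.
def greedy_cache(contents, capacity_mb):
    """contents: list of (name, size_mb, popularity). Returns names cached."""
    ranked = sorted(contents, key=lambda c: c[2] / c[1], reverse=True)
    cached, used = [], 0.0
    for name, size, _pop in ranked:
        if used + size <= capacity_mb:
            cached.append(name)
            used += size
    return cached

catalog = [("map_tiles", 120, 0.9), ("movie_a", 700, 0.6),
           ("news_clip", 40, 0.4), ("firmware", 300, 0.2)]
print(greedy_cache(catalog, capacity_mb=500))   # ['news_clip', 'map_tiles', 'firmware']
```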
14. A knowledge graph-based reinforcement learning approach for cooperative caching in MEC-enabled heterogeneous networks
Authors: Dan Wang, Yalu Bai, Bin Song. Digital Communications and Networks, 2025, No. 4, pp. 1236-1244 (9 pages).
Existing wireless networks are flooded with video data transmissions, and the demand for high-speed, low-latency video services continues to surge. This brings challenges to networks in the form of congestion, as well as the need for more resources and more dedicated caching schemes. Recently, Multi-access Edge Computing (MEC)-enabled heterogeneous networks, which leverage edge caches for proximity delivery, have emerged as a promising solution to these problems. Designing an effective edge caching scheme is critical to their success, however, in the face of limited resources. We propose a novel Knowledge Graph (KG)-based Dueling Deep Q-Network (KG-DDQN) for cooperative caching in MEC-enabled heterogeneous networks. The KG-DDQN scheme leverages a KG to uncover video relations, providing valuable insights into user preferences for the caching scheme. Specifically, the KG guides the selection of related videos as caching candidates (i.e., actions in the DDQN), thus providing a rich reference for implementing a personalized caching scheme while also improving the decision efficiency of the DDQN. Extensive simulation results validate the convergence of the KG-DDQN, which also outperforms baselines in terms of cache hit rate and service delay.
Keywords: Multi-access edge computing; Cooperative caching; Resource allocation; Knowledge graph; Reinforcement learning
15. Distributed service caching with deep reinforcement learning for sustainable edge computing in large-scale AI
Authors: Wei Liu, Muhammad Bilal, Yuzhe Shi, Xiaolong Xu. Digital Communications and Networks, 2025, No. 5, pp. 1447-1456 (10 pages).
Increasing reliance on large-scale AI models has led to rising demand for intelligent services. The centralized cloud computing approach has limitations in data transfer efficiency and response time, and as a result many service providers have begun to deploy edge servers that cache intelligent services in order to reduce transmission delay and communication energy consumption. However, finding the optimal service caching strategy remains a significant challenge due to the stochastic nature of service requests and the bulky nature of intelligent services. To deal with this, we propose a distributed service caching scheme integrating deep reinforcement learning (DRL) with mobility prediction, referred to as DSDM. Specifically, we employ the D3QN (Deep Double Dueling Q-Network) framework to integrate Long Short-Term Memory (LSTM)-predicted mobile device locations into the service caching replacement algorithm and adopt a distributed multi-agent approach for learning and training. Experimental results demonstrate that DSDM achieves significant improvements in reducing communication energy consumption compared with traditional methods across various scenarios.
Keywords: Intelligent service; Edge caching; Deep reinforcement learning; Mobility prediction
16. A Hierarchical-Based Sequential Caching Scheme in Named Data Networking
Authors: Zhang Junmin, Jin Jihuan, Hou Rui, Dong Mianxiong, Kaoru Ota, Zeng Deze. China Communications, 2025, No. 5, pp. 48-60 (13 pages).
Named data networking (NDN) is an idealized deployment of information-centric networking (ICN) that has attracted attention from scientists and scholars worldwide. A distributed in-network caching scheme can efficiently realize load balancing. However, such a ubiquitous caching approach may cause problems including duplicate caching and low data diversity, thus reducing the caching efficiency of NDN routers. To mitigate these problems and improve NDN caching efficiency, this paper proposes a hierarchical-based sequential caching (HSC) scheme. In this scheme, the NDN routers on the data transmission path are divided into levels, and data with different request frequencies are cached at distinct router levels. The aim is to cache data with high request frequencies in the router closest to the content requester, so as to increase the response probability of nearby data, improve the caching efficiency of named data networks, shorten response time, and reduce cache redundancy. Simulation results show that the scheme effectively improves the cache hit rate (CHR) and reduces the average request delay (ARD) and average route hops (ARH).
Keywords: hierarchical router; named data networking; sequential caching
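The level-based placement described above can be sketched as a simple mapping from request frequency to hop level: the most frequently requested data goes to the router level nearest the requester, the next tier one level further, and so on. A schematic assignment only; the actual HSC thresholds and level counts are not given here, and the example names and counts are invented.

```python
# Schematic HSC-style placement: higher request frequency -> router level
# closer to the requester. Level counts and thresholds are illustrative.
def assign_levels(freqs: dict, num_levels: int) -> dict:
    """Return level 1 (nearest requester) .. num_levels (nearest producer)."""
    ranked = sorted(freqs, key=freqs.get, reverse=True)
    per_level = max(1, -(-len(ranked) // num_levels))   # ceiling division
    return {name: 1 + i // per_level for i, name in enumerate(ranked)}

requests = {"/video/a": 950, "/video/b": 400, "/doc/x": 120, "/doc/y": 30}
print(assign_levels(requests, num_levels=3))
# {'/video/a': 1, '/video/b': 1, '/doc/x': 2, '/doc/y': 2}
```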
17. Mobility-Aware Edge Caching with Transformer-DQN in D2D-Enabled Heterogeneous Networks
Authors: Yiming Guo, Hongyu Ma. Computers, Materials & Continua, 2025, No. 11, pp. 3485-3505 (21 pages).
In dynamic 5G network environments, user mobility and heterogeneous network topologies pose dual challenges to improving the performance of mobile edge caching. Existing studies often overlook the dynamic nature of user locations and the potential of device-to-device (D2D) cooperative caching, limiting the reduction of transmission latency. To address this issue, this paper proposes a joint optimization scheme for edge caching that integrates user mobility prediction with deep reinforcement learning. First, a Transformer-based geolocation prediction model is designed, leveraging multi-head attention mechanisms to capture correlations in historical user trajectories for accurate future location prediction. Then, within a three-tier heterogeneous network, we formulate a latency minimization problem under a D2D cooperative caching architecture and develop a mobility-aware Deep Q-Network (DQN) caching strategy. This strategy takes predicted location information as state input and dynamically adjusts the content distribution across small base stations (SBSs) and mobile users (MUs) to reduce end-to-end delay in multi-hop content retrieval. Simulation results show that the proposed DQN-based method outperforms other baseline strategies across various metrics, achieving a 17.2% reduction in transmission delay compared to DQN methods without mobility integration, thus validating the effectiveness of jointly optimizing location prediction and caching decisions.
Keywords: Mobile edge caching; D2D heterogeneous networks; deep reinforcement learning; transformer model; transmission delay optimization
18. Applications of In-Memory Databases in High-Speed Caching (Cited: 18)
Authors: Yang Yan, Li Wei, Wang Chun. 现代电信科技, 2011, No. 12, pp. 59-64 (6 pages).
With the continued development of Internet technology, the bottlenecks affecting network speed lie mainly in access distance and the load-bearing capacity of servers. Adding servers or mirror servers as a basic solution is expensive to operate and maintain, while cache technology, as a complementary solution, has been applied more and more widely thanks to its simple design and efficient storage performance. In-memory databases, as another complementary form of caching, are also gradually expanding in scope. This article compares the characteristics of two in-memory database products and analyzes their application scenarios.
Keywords: Cache; in-memory database; Memcached; Redis; distributed; replication
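As a counterpart to the Memcached example earlier in this listing, the same read-through pattern can be written against Redis, one of the two products named in the keywords. A minimal sketch using the third-party `redis` Python client, assuming a local server on the default port; `query_backend` is a hypothetical placeholder.

```python
# Read-through caching against Redis; assumes a local server on port 6379
# and the third-party redis-py client. query_backend is a placeholder.
import redis

r = redis.Redis(host="localhost", port=6379)

def query_backend(key: str) -> bytes:
    return f"row-for-{key}".encode()       # hypothetical database lookup

def cached_read(key: str, ttl_seconds: int = 300) -> bytes:
    value = r.get(key)                     # try the cache first
    if value is None:
        value = query_backend(key)
        r.setex(key, ttl_seconds, value)   # store with an expiry time
    return value

print(cached_read("session:abc123"))
```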
19. A Survey of Multi-Core Multi-Threading Technology (Cited: 47)
Authors: Sui Junhua, Liu Huina, Wang Jianxin, Qin Qingwang. 计算机应用 (CSCD, PKU Core), 2013, No. A01, pp. 239-242, 261 (5 pages).
This paper analyzes the relationships between multi-core CPUs and operating systems, parallel computing, and multi-threaded design and development. Combined with a new performance evaluation algorithm, it analyzes in detail the development techniques and problems in multi-core, multi-threaded development in terms of the number of parallel threads, data races, lock contention, thread safety, data transfer, and memory consistency, and gives corresponding countermeasures. Finally, it briefly discusses and analyzes the development trends of multi-core, multi-threading technology.
Keywords: multi-core CPU; multi-threading; task scheduling; data sharing; lock contention; thread safety; cache memory consistency
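The data-race and lock-contention issues surveyed above show up even in a few lines of code: unsynchronized increments from several threads can lose updates, while a lock makes the read-modify-write step atomic at the cost of contention. A minimal Python illustration:

```python
# Minimal data-race illustration: concurrent increments on a shared counter
# need a lock to make the read-modify-write step atomic.
import threading

counter = 0
lock = threading.Lock()

def worker(n: int) -> None:
    global counter
    for _ in range(n):
        with lock:            # remove the lock and updates may be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                # 400000 with the lock held
```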
20. Access-Driven Cache Timing Attacks on AES (Cited: 16)
Authors: Zhao Xinjie, Wang Tao, Guo Shize, Zheng Yuanyuan. 软件学报 (EI, CSCD, PKU Core), 2011, No. 3, pp. 572-591 (20 pages).
This paper first presents a model of access-driven cache timing attacks and proposes two general methods under this model, direct analysis and elimination analysis, for analyzing the cache information leaked by AES encryption. It then establishes a model of AES cache information leakage and, on that basis, quantitatively analyzes the sample size required for elimination-analysis attacks and gives solutions to problems that may arise during the attack. Finally, twelve local and remote attack experiments were conducted in a Windows environment on two typical AES implementations in OpenSSL v.0.9.8a and v.0.9.8j. The results show that access-driven cache timing attacks are feasible both locally and remotely; the AES lookup tables and the cache structure itself make AES vulnerable to access-driven cache timing attacks, with a minimum sample size of only 13; the last-round AES implementation in OpenSSL v.0.9.8j, which removes the T4 table, cannot defend against the attack; and the experiments repeatedly verify the correctness of the theory of AES cache information leakage and key analysis.
Keywords: Advanced Encryption Standard; access-driven; Cache timing attack; remote attack; OpenSSL