Journal Articles
4 articles found
1. Multi-source heterogeneous data access management framework and key technologies for electric power Internet of Things (Cited by: 1)
Authors: Pengtian Guo, Kai Xiao, Xiaohui Wang, Daoxing Li. Global Energy Interconnection, EI CSCD, 2024, Issue 1, pp. 94-105.
The power Internet of Things (IoT) is a significant trend in technology and a requirement for national strategic development. With the deepening digital transformation of the power grid, China's power system has initially built a power IoT architecture comprising a perception, network, and platform application layer. However, owing to the structural complexity of the power system, the construction of the power IoT continues to face problems such as complex access management of massive heterogeneous equipment, diverse IoT protocol access methods, high concurrency of network communications, and weak data security protection. To address these issues, this study optimizes the existing architecture of the power IoT and designs an integrated management framework for the access of multi-source heterogeneous data in the power IoT, comprising cloud, pipe, edge, and terminal parts. It further reviews and analyzes the key technologies involved in the power IoT, such as the unified management of the physical model, high concurrent access, multi-protocol access, multi-source heterogeneous data storage management, and data security control, to provide a more flexible, efficient, secure, and easy-to-use solution for multi-source heterogeneous data access in the power IoT.
Keywords: Power Internet of Things; object model; high concurrency access; zero trust mechanism; multi-source heterogeneous data
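The abstract's "unified management of the physical model" idea, one object model fronting many device protocols, can be sketched as a small adapter registry. This is an illustrative sketch only, not the paper's implementation; the class and adapter names (`DeviceModel`, `MqttAdapter`) are hypothetical, and the adapter returns a placeholder value instead of a real protocol read.

```python
# Illustrative sketch (hypothetical names, not the paper's framework):
# a unified object model that hides heterogeneous IoT access protocols
# behind one read interface.
from abc import ABC, abstractmethod

class ProtocolAdapter(ABC):
    """One adapter per access protocol (MQTT, Modbus, ...)."""
    @abstractmethod
    def read(self, point: str) -> float: ...

class MqttAdapter(ProtocolAdapter):
    def read(self, point: str) -> float:
        # Placeholder: a real adapter would subscribe to an MQTT topic here.
        return 42.0

class DeviceModel:
    """Unified physical model: one access path over many protocols."""
    def __init__(self):
        self.adapters: dict[str, ProtocolAdapter] = {}

    def register(self, device_id: str, adapter: ProtocolAdapter) -> None:
        self.adapters[device_id] = adapter

    def read(self, device_id: str, point: str) -> float:
        # Callers never see which protocol serves the device.
        return self.adapters[device_id].read(point)

model = DeviceModel()
model.register("meter-01", MqttAdapter())
reading = model.read("meter-01", "voltage")  # 42.0 (placeholder value)
```

The design choice mirrored here is that new protocols extend the system by adding an adapter, not by touching callers.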
2. Reevaluating Data Stall Time with the Consideration of Data Access Concurrency
Authors: Yuhang Liu, Xian-He Sun. Journal of Computer Science & Technology, SCIE EI CSCD, 2015, Issue 2, pp. 227-245.
Data access delay has become the prominent performance bottleneck of high-end computing systems. The key to reducing data access delay in system design is to diminish data stall time. Memory locality and concurrency are the two essential factors influencing the performance of modern memory systems. However, existing studies in reducing data stall time rarely focus on utilizing data access concurrency, because the impact of memory concurrency on overall memory system performance is not well understood. In this study, a pair of novel data stall time models is presented: the L-C model for the combined effect of locality and concurrency, and the P-M model for the effect of pure misses on data stall time. The models provide a new understanding of data access delay and new directions for performance optimization. Based on these new models, a summary table of advanced cache optimizations is presented. It has 38 entries contributed by data concurrency but only 21 contributed by data locality, which shows the value of data concurrency. The L-C and P-M models and their associated results and opportunities introduced in this study are important and necessary for future data-centric architecture and algorithm design of modern computing systems.
Keywords: memory wall; data stall time; memory concurrency; concurrent average memory access time (C-AMAT)
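For orientation, the concurrent average memory access time (C-AMAT) named in the keywords generalizes the classic AMAT formula by dividing each latency term by the concurrency that overlaps it. The formulation below follows the authors' related C-AMAT work; the symbols are assumptions from that work, not taken from this abstract.

```latex
% Classic AMAT: hit time plus miss rate times average miss penalty.
AMAT = H + MR \cdot AMP
% C-AMAT (per Sun et al.'s related work; notation assumed):
% C_H = hit concurrency, pMR = pure miss rate,
% pAMP = average pure miss penalty, C_M = pure miss concurrency.
C\text{-}AMAT = \frac{H}{C_H} + pMR \cdot \frac{pAMP}{C_M}
```

A "pure miss" here is a miss cycle not hidden by any concurrent hit, which is why the P-M model isolates its contribution to stall time.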
3. A Study on Modeling and Optimization of Memory Systems
Authors: Jason Liu, Pedro Espina, Xian-He Sun. Journal of Computer Science & Technology, SCIE EI CSCD, 2021, Issue 1, pp. 71-89.
Accesses Per Cycle (APC), Concurrent Average Memory Access Time (C-AMAT), and Layered Performance Matching (LPM) are three memory performance models that consider both data locality and memory access concurrency. The APC model measures the throughput of a memory architecture and therefore reflects the quality of service (QoS) of a memory system. The C-AMAT model provides a recursive expression for the memory access delay and therefore can be used for identifying the potential bottlenecks in a memory hierarchy. The LPM method transforms a global memory system optimization into localized optimizations at each memory layer by matching the data access demands of the applications with the underlying memory system design. These three models have been proposed separately through prior efforts. This paper reexamines the three models under one coherent mathematical framework. More specifically, we present a new memory-centric view of data accesses. We divide the memory cycles at each memory layer into four distinct categories and use them to recursively define the memory access latency and concurrency along the memory hierarchy. This new perspective offers new insights with a clear formulation of the memory performance considering both locality and concurrency. Consequently, the performance model can be easily understood and applied in engineering practices. As such, the memory-centric approach helps establish a unified mathematical foundation for model-driven performance analysis and optimization of contemporary and future memory systems.
Keywords: performance modeling; performance optimization; memory architecture; memory hierarchy; concurrent average memory access time
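The throughput/latency duality the abstract describes (APC measures throughput, C-AMAT measures delay) can be illustrated numerically: APC counts accesses per memory-active cycle, and C-AMAT is its reciprocal. A minimal sketch, assuming only those two definitions from the authors' related work; the event counts are invented for illustration.

```python
# Illustrative sketch of the APC / C-AMAT relationship (counts are hypothetical).

def apc(total_accesses: int, memory_active_cycles: int) -> float:
    """Accesses Per Cycle: memory-system throughput.

    Cycles are counted only while at least one access is outstanding,
    so overlapped (concurrent) accesses raise APC.
    """
    return total_accesses / memory_active_cycles

def c_amat(total_accesses: int, memory_active_cycles: int) -> float:
    """Concurrent AMAT: average delay per access, the reciprocal of APC."""
    return memory_active_cycles / total_accesses

# Hypothetical counters from a profiling interval:
accesses, active_cycles = 1000, 2500
throughput = apc(accesses, active_cycles)   # 0.4 accesses/cycle
delay = c_amat(accesses, active_cycles)     # 2.5 cycles/access
```

The reciprocal relationship is the point: any optimization that raises concurrency (more overlapped accesses per active cycle) lowers C-AMAT even if per-access latency is unchanged.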
4. Virtualize and share non-volatile memories in user space
Authors: Chih Chieh Chou, Jaemin Jung, A. L. Narasimha Reddy, Paul V. Gratz, Doug Voigt. CCF Transactions on High Performance Computing, 2020, Issue 1, pp. 16-35.
Emerging non-volatile memory (NVM) has attractive characteristics such as DRAM-like low latency together with the non-volatility of storage devices. Recently, byte-addressable, memory bus-attached NVM has become available. This paper addresses the problem of combining a smaller, faster byte-addressable NVM with a larger, slower storage device, such as an SSD, to create the impression of a larger and faster byte-addressable NVM that can be shared across multiple applications concurrently. In this paper, we propose vNVML, a user space library for virtualizing and sharing NVM. vNVML provides applications with transaction-like memory semantics that ensure write ordering, durability, and persistency guarantees across system failures. vNVML exploits DRAM for read caching to improve performance and potentially to reduce the number of writes to NVM, extending the NVM lifetime. vNVML is implemented in C and evaluated with realistic workloads to show that vNVML allows applications to share NVM efficiently, both in a single OS and when docker-like containers are employed. The results from the evaluation show that vNVML incurs less than 10% overhead while providing the benefits of an expanded virtualized NVM space to the applications, and allowing applications to safely share the virtual NVM.
Keywords: non-volatile memory; user space library; virtualization; transactional semantics; concurrent accesses
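The "transaction-like memory semantics" the abstract describes, where writes become durable and visible only at commit, can be sketched with a redo log. This is a toy illustration of the general technique, not vNVML's actual API (vNVML is a C library; the `TxRegion` class and its methods are hypothetical, and a `bytearray` stands in for the NVM-backed region).

```python
# Illustrative redo-log transaction sketch (hypothetical API, not vNVML's):
# writes are buffered in a log and applied to the region only at commit,
# giving the write-ordering / all-or-nothing behavior described in the abstract.

class TxRegion:
    def __init__(self, size: int):
        self.data = bytearray(size)  # stands in for the NVM-backed region
        self.log: dict[int, bytes] = {}  # uncommitted writes: offset -> bytes

    def write(self, offset: int, buf: bytes) -> None:
        self.log[offset] = bytes(buf)  # buffered only; not yet durable

    def commit(self) -> None:
        # A real implementation would persist the log first, so a crash
        # mid-commit could replay it; here we just apply it in order.
        for off, buf in self.log.items():
            self.data[off:off + len(buf)] = buf
        self.log.clear()

    def abort(self) -> None:
        self.log.clear()  # discard uncommitted writes; region is untouched

region = TxRegion(16)
region.write(0, b"hello")
before = bytes(region.data[:5])  # still zeros: write not visible pre-commit
region.commit()
after = bytes(region.data[:5])   # b"hello" after commit
```

The DRAM read cache the paper adds on top of this would sit in front of `data`, absorbing reads and coalescing writes before they reach NVM.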