Journal Articles
5 articles found
The MMU Implementation of Unity-1 Microprocessor (Cited by: 2)
1
Authors: 宋传华 (Song Chuanhua), Cheng Xu, Zhu Dexin. High Technology Letters (EI, CAS), 2003, Issue 4, pp. 27-32 (6 pages).
Virtual memory management is an essential issue in modern microprocessor design. A memory management unit (MMU) is designed to implement a virtual machine for user programs and to provide a management mechanism between the operating system and user programs. This paper analyzes the tradeoffs considered in the MMU design of the Unity-1 CPU of Peking University and presents in detail a pure-hardware table-walking solution with a two-level page table organization. The implementation supports the operations and high performance required by modern operating systems at the low cost demanded by embedded systems. The solution has been proven in silicon, and the Linux 2.4.17 kernel, the X Window System, GNOME, and most application software have been successfully ported onto the Unity platform.
Keywords: Unity-1, MMU, TLB, table walking, microprocessor
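The abstract names the technique without showing its mechanics. Below is a rough software sketch of a generic two-level table walk, assuming a 10/10/12-bit address split with 4 KiB pages; it illustrates the general idea only, not the Unity-1 hardware or its actual page-table format.

```cpp
#include <cstdint>
#include <unordered_map>

// Toy "physical memory": address -> 32-bit word. A real walker issues bus accesses.
static std::unordered_map<uint32_t, uint32_t> phys_mem;

constexpr uint32_t PTE_PRESENT = 0x1;  // assumed "valid" bit in each entry

static uint32_t read_word(uint32_t paddr) {
    auto it = phys_mem.find(paddr);
    return it == phys_mem.end() ? 0 : it->second;
}

// Two-level walk: directory entry, then table entry, then frame | offset.
// Returns false when either level is not present, i.e., a page fault.
bool table_walk(uint32_t page_dir_base, uint32_t vaddr, uint32_t* paddr_out) {
    uint32_t dir_idx = vaddr >> 22;             // top 10 bits: directory index
    uint32_t tbl_idx = (vaddr >> 12) & 0x3FFu;  // middle 10 bits: table index
    uint32_t offset  = vaddr & 0xFFFu;          // low 12 bits: page offset

    uint32_t pde = read_word(page_dir_base + dir_idx * 4);
    if (!(pde & PTE_PRESENT)) return false;     // first-level miss

    uint32_t pte = read_word((pde & ~0xFFFu) + tbl_idx * 4);
    if (!(pte & PTE_PRESENT)) return false;     // second-level miss

    *paddr_out = (pte & ~0xFFFu) | offset;      // frame base + page offset
    return true;
}
```

In a hardware walker of this kind, the same two memory accesses are performed by a state machine on a TLB miss, and the resulting translation is filled into the TLB.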
A Pulsed Flip-Flop Circuit with Critical Path Detection Capability and Its Application (Cited by: 1)
2
Authors: 石瑞恺, 王昊, 杨梁, 章隆兵. 计算机辅助设计与图形学学报 (Journal of Computer-Aided Design & Computer Graphics) (EI, CSCD, PKU Core), 2019, Issue 12, pp. 2197-2206 (10 pages).
Because integrated circuits are affected by many complex factors during fabrication and operation, their critical paths can change unpredictably. This causes large deviations in timing-analysis results and makes pre-silicon/post-silicon consistency of a chip hard to guarantee. To address this, a pulsed flip-flop circuit with critical path detection capability is proposed. The circuit reuses the redundant latch of the functional mode as a shadow latch and inserts an extra propagation delay at its data input, so that the two latches have different setup-time margins; the critical path is then detected by comparing the values sampled by the two latches. HSPICE simulation results show that the circuit implements the basic functions of a pulsed flip-flop and can effectively detect critical paths; compared with several other designs, it has a small area overhead and offers a significant means of power optimization. Finally, an integration flow is given so that the circuit can be applied in physical design.
Keywords: critical path, timing detection, pulsed flip-flop, physical design
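The detection mechanism described above reduces to comparing a normally clocked sample against a sample taken through an extra delay. The following behavioral model is only an assumption-level sketch of that comparison, not the transistor-level pulsed flip-flop from the paper.

```cpp
// Assumption-level behavioral model of the shadow-latch comparison.
// The main latch samples the data directly; the shadow latch samples the same
// signal seen through an inserted extra delay, so it has a smaller setup margin.
// A mismatch between the two samples flags a (near-)critical path.
struct PulsedFFMonitor {
    bool main_sample = false;
    bool shadow_sample = false;

    // 'data_at_pulse' is the value the main latch captures at the clock pulse;
    // 'data_delayed' is the value seen at the shadow latch's delayed input.
    // Returns true when the samples differ, i.e., the path is flagged as critical.
    bool sample(bool data_at_pulse, bool data_delayed) {
        main_sample = data_at_pulse;
        shadow_sample = data_delayed;
        return main_sample != shadow_sample;
    }
};
```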
UNI-SPEC: An Instruction Set Description Language (Cited by: 2)
3
Authors: 朱德新 (Zhu Dexin), Cheng Xu, Song Chuanhua. High Technology Letters (EI, CAS), 2003, Issue 4, pp. 33-38 (6 pages).
Microprocessor development emphasizes hardware and software co-design. HW/SW co-design is a modern technique aimed at shortening the time to market when designing real-time and embedded systems. The key feature of this approach is the simultaneous development of the programming tools and the target processor to match the software application. An effective co-design flow must therefore support automatic generation of the software toolkit without loss of optimization efficiency. This has resulted in a paradigm shift towards a language-based design methodology for microprocessor optimization and exploration. This paper proposes a formal grammar, UNI-SPEC, which describes the translation rules from assembly to binary and supports the automatic generation of assemblers. Based on UNI-SPEC, two typical applications are implemented: automatic generation of the assembler and of the test suites.
Keywords: formal grammar, retargetable assembler generator, instruction set architecture
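As background for how translation rules drive assembler generation, the sketch below shows a minimal table-driven encoder. The mnemonics, field positions, and opcodes are invented for illustration and are not the UNI-SPEC grammar or the Unity instruction set; in a retargetable flow, such a table would be generated automatically from the ISA description.

```cpp
#include <cstdint>
#include <map>
#include <string>

// One rule per mnemonic: where each field of "mnemonic rd, rs" lands in the word.
struct Rule {
    uint32_t opcode;   // value placed in the opcode field (bits 31..26 here)
    int      rd_shift; // bit position of the destination-register field
    int      rs_shift; // bit position of the source-register field
};

// Illustrative rule table; a generator would emit this from the ISA description.
static const std::map<std::string, Rule> kRules = {
    {"add", {0x01, 21, 16}},
    {"sub", {0x02, 21, 16}},
};

// Encodes "mnemonic rd, rs" into a 32-bit instruction word using the rule table.
uint32_t encode(const std::string& mnemonic, unsigned rd, unsigned rs) {
    const Rule& r = kRules.at(mnemonic);
    return (r.opcode << 26) | (rd << r.rd_shift) | (rs << r.rs_shift);
}
```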
DLPlib: A Library for Deep Learning Processor (Cited by: 5)
4
Authors: Hui-Ying Lan, Lin-Yang Wu, Xiao Zhang, Jin-Hua Tao, Xun-Yu Chen, Bing-Rui Wang, Yu-Qing Wang, Qi Guo, Yun-Ji Chen. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017, Issue 2, pp. 286-296 (11 pages).
Recently, deep learning processors have become one of the most promising ways to accelerate deep learning algorithms. Currently, the only way to program a deep learning processor is to write assembly instructions by hand, which requires considerable programming effort and yields very low productivity. One solution is to integrate the deep learning processor as a new back-end into one prevalent high-level deep learning framework (e.g., the TPU (tensor processing unit) is integrated directly into TensorFlow). However, this prevents other frameworks from benefiting from the programming interface. The alternative approach is to design a framework-independent low-level library for deep learning processors (e.g., cuDNN, the deep learning library for GPUs). In this fashion, the library can be conveniently invoked from high-level programming frameworks and provides more generality. To allow more deep learning frameworks to benefit from this environment, we envision it as a low-level library that can be easily embedded into current high-level frameworks and provide high performance. Three major issues in designing such a library are discussed. The first is the design of the data structures: they should be as few as possible while supporting all possible operations, which makes them easier to optimize without compromising generality. The second is the selection of operations, which should cover a rather wide range of operations to support various types of networks with high efficiency. The third is the design of the API, which should provide a flexible and user-friendly programming model and be easy to embed into existing deep learning frameworks. Considering all the above issues, we propose DLPlib, a tensor-filter-based library designed specifically for deep learning processors. It contains two major data structures, tensor and filter, and a set of operators including basic neural network primitives and matrix/vector operations. It provides a descriptor-based API exposed as a C++ interface. The library achieves a speedup of 0.79x compared with hand-written assembly instructions.
Keywords: deep learning processor, API, library, DLPlib
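To make the descriptor-based, tensor/filter-centric design concrete, the sketch below shows what such a C++ interface can look like. All type and function names here are assumptions made for this listing and are not the actual DLPlib API.

```cpp
// A rough sketch of a descriptor-based, tensor/filter-centric C++ interface.
// Names and signatures are illustrative assumptions, not the DLPlib API.
struct TensorDesc {            // describes an activation tensor
    int n, c, h, w;            // batch, channels, height, width
};

struct FilterDesc {            // describes a convolution filter bank
    int out_c, in_c, kh, kw;   // output channels, input channels, kernel size
};

class Handle {                 // would wrap the device context / command queue
public:
    // Launches a convolution on the accelerator; here it is only a stub that
    // checks shapes, standing in for a call into the low-level library.
    bool convolution(const TensorDesc& in, const float* in_data,
                     const FilterDesc& flt, const float* flt_data,
                     const TensorDesc& out, float* out_data) {
        return in.c == flt.in_c && out.c == flt.out_c
               && in_data && flt_data && out_data;
    }
};
```

A framework back-end would typically create the descriptors once per layer and reuse them across iterations, keeping per-call overhead low.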
Prevention from Soft Errors via Architecture Elasticity
5
Authors: 尹一笑, 陈云霁, 郭崎, 陈天石. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2014, Issue 2, pp. 247-254 (8 pages).
Due to decreasing threshold voltages, shrinking feature sizes, and the exponential growth of on-chip transistors, modern processors are increasingly vulnerable to soft errors. However, traditional soft-error mitigation mechanisms take action only after a soft error has been detected. Instead of such passive responses, this paper proposes a novel mechanism that proactively prevents the occurrence of soft errors via architecture elasticity. Guided by a predictive model, the processor architecture is adapted holistically and dynamically. The predictive model can quickly and accurately predict the simulation target across different program execution phases on any architecture configuration by leveraging an artificial neural network model. Experimental results on the SPEC CPU 2000 benchmarks show that the method inherently reduces the soft error rate by 33.2% and improves energy efficiency by 18.3% compared with a statically configured processor.
Keywords: soft error, energy efficiency, architecture elasticity
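The abstract implies a selection loop in which a learned model scores candidate configurations for the current program phase. The sketch below is an assumption-level rendering of that loop; the cost weighting, the Predictor interface, and the Config fields are illustrative and not taken from the paper.

```cpp
#include <limits>
#include <vector>

struct Config { int issue_width; int rob_entries; };  // illustrative knobs only

// Stands in for the trained artificial neural network from the paper; the
// interface and the predicted quantities are assumptions made for this sketch.
struct Predictor {
    virtual double soft_error_rate(const Config&) const = 0;
    virtual double energy(const Config&) const = 0;
    virtual ~Predictor() = default;
};

// Picks, among the candidate configurations for the current program phase, the
// one minimizing a weighted cost of predicted soft-error rate and energy.
// 'candidates' must be non-empty.
Config choose_config(const std::vector<Config>& candidates,
                     const Predictor& model,
                     double ser_weight, double energy_weight) {
    Config best = candidates.front();
    double best_cost = std::numeric_limits<double>::max();
    for (const Config& c : candidates) {
        double cost = ser_weight * model.soft_error_rate(c)
                    + energy_weight * model.energy(c);
        if (cost < best_cost) { best_cost = cost; best = c; }
    }
    return best;
}
```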