Journal Articles
4,414 articles found
Scaled Up Chip Pushes Quantum Computing a Bit Closer to Reality
1
Authors: Chris Palmer. Engineering, 2025, No. 7, pp. 6-8 (3 pages)
In the 9 December 2024 issue of Nature [1], a team of Google engineers reported breakthrough results using “Willow”, their latest quantum computing chip (Fig. 1). By meeting a milestone “below threshold” reduction in the rate of errors that plague superconducting circuit-based quantum computing systems (Fig. 2), the work moves the field another step towards its promised super-charged applications, albeit likely still many years away. Areas expected to benefit from quantum computing include, among others, drug discovery, materials science, finance, cybersecurity, and machine learning.
Keywords: materials science; breakthrough; drug discovery; Willow chip; quantum computing; superconducting circuits; error reduction; applications
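The significance of the “below threshold” milestone can be illustrated with a standard back-of-the-envelope surface-code scaling relation (a generic textbook model, not figures from the article): once the physical error rate p falls below the threshold p_th, the logical error rate p_L ≈ A·(p/p_th)^((d+1)/2) shrinks as the code distance d grows, whereas above threshold it grows. The constants in this minimal Python sketch are illustrative assumptions.

    # Illustrative only: generic surface-code scaling, not Google's reported numbers.
    def logical_error_rate(p, p_th=1e-2, d=3, A=0.1):
        """Rough logical error rate p_L ~ A * (p / p_th)^((d + 1) / 2)."""
        return A * (p / p_th) ** ((d + 1) / 2)

    for d in (3, 5, 7):
        below = logical_error_rate(p=5e-3, d=d)   # physical error rate below threshold
        above = logical_error_rate(p=2e-2, d=d)   # physical error rate above threshold
        print(f"d={d}: below threshold p_L={below:.2e}   above threshold p_L={above:.2e}")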
Multifunctional Organic Materials,Devices,and Mechanisms for Neuroscience,Neuromorphic Computing,and Bioelectronics
2
Authors: Felix L. Hoch, Qishen Wang, Kian-Guan Lim, Desmond K. Loke. Nano-Micro Letters, 2025, No. 10, pp. 525-550 (26 pages)
Neuromorphic computing has the potential to overcome limitations of traditional silicon technology in machine learning tasks. Recent advancements in large crossbar arrays and silicon-based asynchronous spiking neural networks have led to promising neuromorphic systems. However, developing compact parallel computing technology for integrating artificial neural networks into traditional hardware remains a challenge. Organic computational materials offer affordable, biocompatible neuromorphic devices with exceptional adjustability and energy-efficient switching. Here, the review investigates the advancements made in the development of organic neuromorphic devices. This review explores resistive switching mechanisms such as interface-regulated filament growth, molecular-electronic dynamics, nanowire-confined filament growth, and vacancy-assisted ion migration, while proposing methodologies to enhance state retention and conductance adjustment. The survey examines the challenges faced in implementing low-power neuromorphic computing, e.g., reducing device size and improving switching time. The review analyses the potential of these materials in adjustable, flexible, and low-power-consumption applications, viz. biohybrid spiking circuits interacting with biological systems, systems that respond to specific events, robotics, intelligent agents, neuromorphic computing, neuromorphic bioelectronics, neuroscience, and other applications, and the prospects of this technology.
Keywords: resistive switching mechanisms; organic materials; brain-inspired neuromorphic computing; neuroscience; neuromorphic bioelectronics
Robotic computing system and embodied AI evolution:an algorithm-hardware co-design perspective
3
Authors: Longke Yan, Xin Zhao, Bohan Yang, Yongkun Wu, Guangnan Dai, Jiancong Li, Chi-Ying Tsui, Kwang-Ting Cheng, Yihan Zhang, Fengbin Tu. Journal of Semiconductors, 2025, No. 10, pp. 6-23 (18 pages)
Robotic computing systems play an important role in enabling intelligent robotic tasks through intelligent algorithms and supporting hardware. In recent years, the evolution of robotic algorithms indicates a roadmap from traditional robotics to hierarchical and end-to-end models. This algorithmic advancement poses a critical challenge in achieving balanced system-wide performance. Therefore, algorithm-hardware co-design has emerged as the primary methodology, which analyzes algorithm behaviors on hardware to identify common computational properties. These properties can motivate algorithm optimization to reduce computational complexity and hardware innovation from architecture to circuit for high performance and high energy efficiency. We then review recent works on robotic and embodied AI algorithms and computing hardware to demonstrate this algorithm-hardware co-design methodology. In the end, we discuss future research opportunities by answering two questions: (1) how to adapt computing platforms to the rapid evolution of embodied AI algorithms, and (2) how to transform the potential of emerging hardware innovations into end-to-end inference improvements.
Keywords: robotic computing system; embodied AI; algorithm-hardware co-design; AI chip; large-scale AI models
An improved memristor model for brain-inspired computing (Cited by 1)
4
Authors: 周二瑞, 方粮, 刘汝霖, 汤振森. Chinese Physics B (SCIE, EI, CAS, CSCD), 2017, No. 11, pp. 537-543 (7 pages)
Memristors, as memristive devices, have received a great deal of interest since being fabricated by HP Labs. The forgetting effect, which has a significant influence on memristors' performance, has to be taken into account when they are employed. It is important to build a good model that expresses the forgetting effect well for application research, given its promising prospects in brain-inspired computing. Some models have been proposed to represent the forgetting effect but do not work well. In this paper, we present a novel window function, which has good performance in a drift model. We analyze the deficiencies of previous drift diffusion models for the forgetting effect and propose an improved model. Moreover, the improved model is exploited as a synapse model in spiking neural networks to recognize digit images. Simulation results show that the improved model overcomes the defects of the previous models and can be used as a synapse model in brain-inspired computing due to its synaptic characteristics. The results also indicate that the improved model expresses the forgetting effect better when it is employed in spiking neural networks, which means that more appropriate evaluations can be obtained in applications.
Keywords: memristor; drift diffusion model; synaptic; brain-inspired computing
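For readers unfamiliar with this class of models, the sketch below illustrates the general shape of a memristor drift model combined with a window function and an exponential forgetting (state-decay) term; the specific window function, decay constant, and parameter values are generic assumptions for illustration, not the improved model proposed in the paper.

    import numpy as np

    # Minimal sketch of a memristor drift model with a window function and an
    # exponential "forgetting" (state decay) term. Parameters are illustrative
    # assumptions, not the improved model proposed in the paper.
    def simulate(v, dt=1e-3, w0=0.5, mu=5.0, tau=0.5, p=2):
        w = w0                      # normalized internal state in [0, 1]
        states = []
        for vk in v:
            f = 1.0 - (2.0 * w - 1.0) ** (2 * p)   # Joglekar-style window function
            dw = mu * vk * f * dt                  # voltage-driven drift
            dw -= (w - w0) / tau * dt              # forgetting: relax back toward w0
            w = min(max(w + dw, 0.0), 1.0)
            states.append(w)
        return np.array(states)

    # Example: a train of write pulses followed by a rest period shows
    # potentiation during the pulses and decay (forgetting) afterwards.
    pulses = np.concatenate([np.ones(200), np.zeros(800)])
    trace = simulate(pulses)
    print(trace[199], trace[-1])   # state right after the pulses vs. after resting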
New challenge for bionics--brain-inspired computing
5
Author: Shan YU. Zoological Research (CAS, CSCD), 2016, No. 5, pp. 261-262 (2 pages)
By definition, bionics is the application of biological mechanisms found in nature to artificial systems in order to achieve specific functional goals. Successful examples range from Velcro, the touch fastener inspired by the hooks of burrs, to self-cleaning material inspired by the surface of the lotus leaf. Recently, a new trend in bionics, Brain-Inspired Computing (BIC), has captured increasing attention. Instead of learning from burrs and leaves, BIC aims to understand the brain and then utilize its operating principles to achieve powerful and efficient information processing.
Keywords: brain-inspired computing; new challenge for bionics; BIC
A 2D/3D vision chip based on organic substrate 3D package
6
Authors: Siyuan Wei, Quanmin Chen, Jingyi Yu, Xuanzhe Xu, Yuxiao Wen, Runjiang Dou, Shuangming Yu, Guike Li, Kaiming Nie, Jie Cheng, Jiangtao Xu, Liyuan Liu, Nanjian Wu. Journal of Semiconductors, 2025, No. 10, pp. 25-33 (9 pages)
This paper describes a 2D/3D vision chip with integrated sensing and processing capabilities. The 2D/3D vision chip architecture includes a 2D/3D image sensor and a programmable visual processor. In this architecture, we design a novel on-chip processing flow with die-to-die image transmission and low-latency fixed-point image processing. The vision chip achieves real-time end-to-end processing of convolutional neural networks (CNNs) and conventional image processing algorithms. Furthermore, an end-to-end 2D/3D vision system is built to exhibit the capacity of the vision chip. The vision system achieves real-time applications under 2D and 3D scenes, such as human face detection (processing delay 10.2 ms) and depth map reconstruction (processing delay 4.1 ms). The frame rate of image acquisition, image processing, and result display is greater than 30 fps.
Keywords: vision chip; 2D/3D image processing; near-sensor computing; convolutional neural networks
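The frame-rate claim can be sanity-checked directly from the delays quoted in the abstract: at 30 fps each frame has roughly a 33.3 ms budget, comfortably above the reported 10.2 ms and 4.1 ms processing delays. A trivial Python check using only those numbers:

    # Sanity check of the >30 fps claim using only the delays quoted in the abstract.
    FRAME_BUDGET_MS = 1000.0 / 30.0            # ~33.3 ms available per frame at 30 fps

    tasks = {"face detection (2D)": 10.2, "depth map reconstruction (3D)": 4.1}
    for name, delay_ms in tasks.items():
        max_fps = 1000.0 / delay_ms            # upper bound set by processing alone
        print(f"{name}: {delay_ms} ms -> up to {max_fps:.0f} fps, "
              f"{FRAME_BUDGET_MS - delay_ms:.1f} ms left in a 30 fps frame budget")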
A Reconfigurable Network-on-Chip Datapath for Application Specific Computing
7
Authors: Joshua Weber, Erdal Oruklu. Circuits and Systems, 2013, No. 2, pp. 181-192 (12 pages)
This paper introduces a new datapath architecture for reconfigurable processors. The proposed datapath is based on a Network-on-Chip approach and facilitates tight coupling of all functional units. Reconfigurable functional elements can be dynamically allocated for application-specific optimizations, enabling polymorphic computing. Using a modified network simulator, the performance of several NoC topologies and parameters is investigated with standard benchmark programs, including fine-grain and coarse-grain computations. Simulation results highlight the flexibility and scalability of the proposed polymorphic NoC processor for a wide range of application domains.
Keywords: reconfigurable computing; Network-on-Chip; network simulators; polymorphic computing
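As a concrete, if simplified, view of why NoC topology choice matters, the toy calculation below compares the average hop count of a small 2D mesh and a 2D torus under uniform traffic; the topologies, network size, and traffic assumption are illustrative only and are not taken from the paper's simulator configuration.

    from itertools import product

    # Toy comparison of average hop count (uniform traffic) for a 4x4 mesh vs. torus.
    # Illustrative only; the paper uses a modified network simulator with its own setup.
    N = 4

    def mesh_hops(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def torus_hops(a, b):
        dx = min(abs(a[0] - b[0]), N - abs(a[0] - b[0]))   # wrap-around links
        dy = min(abs(a[1] - b[1]), N - abs(a[1] - b[1]))
        return dx + dy

    nodes = list(product(range(N), repeat=2))
    pairs = [(s, d) for s in nodes for d in nodes if s != d]
    for name, hops in (("mesh", mesh_hops), ("torus", torus_hops)):
        avg = sum(hops(s, d) for s, d in pairs) / len(pairs)
        print(f"4x4 {name}: average hop count = {avg:.2f}")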
THEORETICAL PREDICTION OF TOOL-CHIP CONTACT LENGTH IN ORTHOGONAL METAL MACHINING BY COMPUTER SIMULATION (Cited by 3)
8
Authors: Gu Lizhi, Long Zeming, Cao Liwen (College of Mechanical Engineering, Jiamusi University, Jiamusi 154007, China), Yuan Zhejun (Harbin Institute of Technology). Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2002, No. 3, pp. 233-237 (5 pages)
A method for determination of tool-chip contact length is theoretically presented for orthogonal metal machining. Using computer simulation and based on analyses of the elasto-plastic deformation in the deformation zone with the Lagrangian finite element method, the accumulated representative length of the low layer and the tool-chip contact length of the chip contacting the tool rake are calculated; experimental studies are also carried out with 0.2 percent carbon steel. It is shown that the tool-chip contact lengths obtained from computer simulation are in good agreement with the measured values.
Keywords: tool-chip contact length; computer simulation; finite element method; elasto-plastic deformation; representative length of an element
The Application of Multitasking Mechanism in Single Chip Computer System (Cited by 1)
9
Authors: Yu Jin, Huang Jiwu, Yuan Lanying. Wuhan University Journal of Natural Sciences (CAS), 1999, No. 1, pp. 59-62 (4 pages)
A new program structure for use in single-chip computer systems, based on a multitasking mechanism, is developed. The specific method for realizing the new structure is discussed, and an application example is also provided.
Keywords: multitasking mechanism; single chip computer system; interruption mechanism
DEVELOPMENT OF SINGLE-PHASED WATER-COOLING RADIATOR FOR COMPUTER CHIP (Cited by 4)
10
Authors: ZENG Ping, CHENG Guangming, LIU Jiulong, YANG Zhigang, SUN Xiaofeng, PENG Taijiang. Chinese Journal of Mechanical Engineering (SCIE, EI, CAS, CSCD), 2007, No. 2, pp. 77-81 (5 pages)
In order to cool computer chips efficiently with the least noise, a single-phase water-cooling radiator for computer chips, driven by a piezoelectric pump with two parallel-connection chambers, is developed. The structure and working principle of this radiator are described. The material, processing method, and design principles of the whole radiator are also explained. Finite element analysis (FEA) software, ANSYS, is used to simulate the heat distribution in the radiator. Testing equipment for the water-cooling radiator is also listed. Through experimental tests, the influences of the flow rate inside the cooling system and of the fan on chip cooling are explicated. This water-cooling radiator is proved more efficient than the current air-cooling radiator in comparison experiments. When cooling a heater that simulates the operation of a computer chip at different power levels, the water-cooling radiator needs a shorter time to reach lower steady temperatures than the current air-cooling radiator.
Keywords: computer chip; water-cooling; piezoelectric pump; radiator; ANSYS simulation; simulative heater
A Fully-Integrated Memristor Chip for Edge Learning (Cited by 1)
11
Authors: Yanhong Zhang, Liang Chu, Wenjun Li. Nano-Micro Letters (SCIE, EI, CAS, CSCD), 2024, No. 9, pp. 123-127 (5 pages)
It is still challenging to fully integrate computing-in-memory chips as edge learning devices. In recent work published in Science, a fully-integrated chip based on neuromorphic memristors was developed for edge learning, acting as an artificial neural network with the functionality of synapses, dendrites, and somas. A crossbar-array memristor chip facilitated edge learning, including the hardware realization, the learning algorithm, and a cycle-parallel sign- and threshold-based learning (STELLAR) scheme. Motion control and demonstration platforms were executed to improve the edge learning ability for adapting to new scenarios.
Keywords: computing in memory; edge learning; fully-integrated chip
Research on General-Purpose Brain-Inspired Computing Systems
12
Authors: 渠鹏, 纪兴龙, 陈嘉杰, 庞猛, 李宇晨, 刘晓义, 张悠慧. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2024, No. 1, pp. 4-21 (18 pages)
Brain-inspired computing is a new technology that draws on the principles of brain science and is oriented to the efficient development of artificial general intelligence (AGI), and a brain-inspired computing system is a hierarchical system composed of neuromorphic chips, basic software and hardware, and algorithms/applications that embody this technology. While the system is developing rapidly, it faces various challenges and opportunities brought by interdisciplinary research, including the issue of software and hardware fragmentation. This paper analyzes the status quo of brain-inspired computing systems. Enlightened by some design principles and methodology of general-purpose computers, it proposes constructing "general-purpose" brain-inspired computing systems. A general-purpose brain-inspired computing system refers to a brain-inspired computing hierarchy constructed based on the design philosophy of decoupling software and hardware, which can flexibly support various brain-inspired computing applications and neuromorphic chips with different architectures. Further, this paper introduces our recent work in these aspects, including the ANN (artificial neural network)/SNN (spiking neural network) development tools, the hardware-agnostic compilation infrastructure, and the chip micro-architecture with high flexibility of programming and high performance; these studies show that the "general-purpose" system can remarkably improve the efficiency of application development and enhance the productivity of basic software, thereby being conducive to accelerating the advancement of various brain-inspired algorithms and applications. We believe that this is the key to the collaborative research and development, and the evolution, of applications, basic software, and chips in this field, and conducive to building a favorable software/hardware ecosystem of brain-inspired computing.
Keywords: brain-inspired computing; neuromorphic chip; compiler; spiking neural network
Integrated Optical Computing: Status, Challenges, and Prospects (Invited)
13
Authors: 项水英, 王一芝, 牛欣然, 余梦婷, 张钰娜, 余澄扬, 曾鑫涛, 郑殿壮, 张雅慧, 郭星星, 韩亚楠, 解长健, 王涛, 郝跃. Acta Photonica Sinica (光子学报), Peking University Core Journal, 2025, No. 9, pp. 100-118 (19 pages)
The rapid development of artificial intelligence, deep learning, and large models places urgent demands on computing power and energy. Traditional electronic computing chips rely on the von Neumann architecture and are increasingly unable to support the training and inference compute required by artificial intelligence. With continuing progress in photonic integration technology, on-chip integrated photonic neural network chips have developed rapidly; offering advantages such as ultra-high speed, large bandwidth, and multiple dimensions, they have become an important complement to the underlying computing hardware for artificial intelligence. This paper reviews research progress in integrated optical computing at home and abroad, analyzes the current challenges, and offers an outlook on future development.
Keywords: optical computing; photonic neural network chip; photonic linear computing; photonic nonlinear computing
Practice of Blended Online and Offline Teaching in the Course Principles and Interface Technology of Single-Chip Microcomputers (Cited by 2)
14
Authors: 朱向庆, 鄢磊, 林厚健. Journal of Jiaying University (嘉应学院学报), 2025, No. 3, pp. 92-95 (4 pages)
Against the background of new engineering education, and in view of factors such as compressed class hours for specialized courses and the impact of the pandemic on offline teaching, this paper takes the course Principles and Interface Technology of Single-Chip Microcomputers as an example to describe how to implement "Internet + education": relying on the Chaoxing "one platform, three terminals" teaching platform, building online teaching resources, carrying out blended online and offline teaching, and using online teaching to make up for the shortcomings of offline teaching. Practice has shown that blended teaching can expand the depth and breadth of learning content, stimulate students' initiative in learning, increase classroom participation, strengthen students' design and practical abilities, and improve their satisfaction with classroom teaching.
Keywords: single-chip microcomputer; blended teaching; online and offline teaching; autonomous learning
A Discussion of Prominent Issues in Cloud Computing
15
Authors: 王龙, 郑磊, 钏茗喜. Journal of Jinggangshan University (Natural Science Edition) (井冈山大学学报(自然科学版)), 2025, No. 3, pp. 72-83 (12 pages)
This study addresses prominent problems and challenges currently facing cloud computing. It first discusses issues common to the field in general, including security, resource scheduling and optimization, high availability, and compliance. It then examines how these problems manifest in China, as well as technical and non-technical issues specific to China's cloud computing sector, such as the low share of public cloud and SaaS services and the heavy workload of digital transformation and cloud migration. In addition, it analyzes the supply-chain security problems facing cloud computing in China, such as the manufacturing and supply of core hardware including CPUs and memory and of high-end, high-performance chips such as CPUs, GPUs, and FPGAs, as well as the security of the core software supply chain. For the issues discussed, the paper analyzes possible research directions and proposes possible countermeasures, including how cloud computing technology itself can mitigate or shield against such supply-chain security problems.
Keywords: cloud computing; cloud security; high availability; supply-chain security; chip supply security
BIVM: A Compilation Framework for Brain-Inspired Computing and a Prototype Study
16
Authors: 杨乐, 刘晓义, 李广力, 渠鹏, 崔慧敏, 张悠慧. Journal of Software (软件学报), Peking University Core Journal, 2025, No. 10, pp. 4768-4791 (24 pages)
Brain-inspired computing chips with various novel architectures continue to emerge, and training/learning algorithms for brain-inspired neural networks and efficient simulation of biological neural networks are also active research topics. A key difficulty, and a key point in building a healthy ecosystem for brain-inspired computing, is how to optimize the execution of brain-inspired applications with different computation and memory-access characteristics on brain-inspired chips with widely differing architectures. The thriving ecosystem of general-purpose computing has shown that a flexible, extensible, and reusable compilation framework is an effective way to solve this problem. To this end, this paper proposes BIVM, a compilation framework for brain-inspired computing, together with a verification prototype. BIVM is built on the MLIR (multi-level intermediate representation) framework for domain-specific architectures (DSAs) and designs multi-level IRs customized for brain-inspired neural networks, including a spiking neural network dialect (high-level IR), a middle-level IR composed mainly of MLIR built-in dialects, and low-level IRs for various chips. To handle the large architectural variation across brain-inspired chips and the differing granularity of the hardware functions they expose, BIVM exploits MLIR's progressivity: the designed IRs can mix different levels of abstraction and concepts (for example, mixing fine-grained instructions with the coarse-grained operations of backends centered on crossbar structures), which enables software module reuse and simplifies development. On this basis, different levels of compilation optimizations can be flexibly combined during progressive lowering across the multi-level IRs, including widely adopted SNN-specific optimizations (such as exploiting computational sparsity and spatio-temporal parallelism) and low-level optimizations adapted to the target hardware, to achieve high performance on different backends. The BIVM prototype currently supports the following backends: general-purpose processors (control-flow architecture), an SNN accelerator chip with a hybrid control-flow/dataflow architecture (FPGA), and a ReRAM (resistive random-access memory)-based dataflow brain-inspired chip (software simulation). It can compile both intelligent applications and biological neural network simulation applications into optimized executables for chips with different architectures. An adaptability analysis of the compilation techniques and a performance comparison show that this kind of framework has good potential for high compilation productivity, high portability, and high performance.
Keywords: brain-inspired computing; compilation framework; brain-inspired computing chip
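To make the progressive-lowering idea tangible, here is a toy Python sketch of a high-level spiking-layer operation being lowered step by step, first to generic mid-level operations and then to a mix of coarse-grained crossbar operations and fine-grained fallback code; all dialect, operation, and pass names are invented for illustration and are not BIVM's actual MLIR dialects or APIs.

    from dataclasses import dataclass, field

    # Toy illustration of progressive lowering across multi-level IRs, in the spirit
    # of an MLIR-style pipeline. All dialect/op names are invented examples; they are
    # not BIVM's actual dialects, operations, or passes.

    @dataclass
    class Op:
        dialect: str
        name: str
        attrs: dict = field(default_factory=dict)

    def lower_snn_to_mid(op):
        """High-level SNN dialect -> generic mid-level ops (matvec + neuron dynamics)."""
        if op.dialect == "snn" and op.name == "lif_layer":
            return [Op("mid", "matvec", {"sparse": op.attrs.get("sparse_spikes", True)}),
                    Op("mid", "leaky_integrate", {"tau": op.attrs["tau"]}),
                    Op("mid", "threshold_fire", {"v_th": op.attrs["v_th"]})]
        return [op]

    def lower_mid_to_crossbar(op):
        """Mid-level ops -> coarse-grained ops of a crossbar-centric backend."""
        if op.dialect == "mid" and op.name == "matvec":
            return [Op("xbar", "program_weights", {}), Op("xbar", "analog_mvm", {})]
        return [Op("cpu", op.name, op.attrs)]       # fall back to fine-grained code

    module = [Op("snn", "lif_layer", {"tau": 20.0, "v_th": 1.0, "sparse_spikes": True})]
    for lowering in (lower_snn_to_mid, lower_mid_to_crossbar):
        module = [new_op for op in module for new_op in lowering(op)]
    print([f"{op.dialect}.{op.name}" for op in module])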
Space-Based Computing Chips: Status, Trends, and Key Technologies
17
Authors: 魏肖彤, 许浩博, 尹春笛, 黄俊培, 孙文昊, 徐文浚, 王颖, 刘垚圻, 孟范涛, 闵丰, 王梦迪, 韩银和. Journal of Electronics & Information Technology (电子与信息学报), Peking University Core Journal, 2025, No. 9, pp. 2963-2978 (16 pages)
With the rapid development of aerospace technology, space-based computing chips, as core components of space information systems, undertake key functions such as data processing, mission control, and communication support, and their importance is growing. Space-based computing chips not only determine the efficiency and reliability of space missions but also guarantee the long-term stable operation of spacecraft in extreme environments. This paper reviews the development of space-based computing chips and discusses their future directions. It first summarizes the current state of space-based computing chips by structural function, covering general-purpose processors (CPUs), field-programmable gate arrays (FPGAs), and application-specific chips. It then analyzes the main differences from terrestrial chips, discusses key fault-tolerance techniques against space-environment challenges such as radiation effects, and describes existing approaches at different levels. Finally, it discusses the main future directions of space-based computing chips, namely high computing power, wide adoption of commercial off-the-shelf (COTS) devices, the fifth-generation reduced instruction set (RISC-V) architecture, and chiplet technology. This paper helps readers understand the current state of the field, grasp the key problems, and provides a valuable reference for subsequent research.
Keywords: space-based computing chip; fault-tolerance techniques; high computing power; COTS devices; RISC-V architecture; chiplet
New Advances in AI Large Language Models and AI Chips (Cited by 7)
18
Author: 赵正平. Micronanoelectronic Technology (微纳电子技术), 2025, No. 3, pp. 1-31 (31 pages)
The development of large language models, exemplified by ChatGPT, marks the entry of artificial intelligence (AI) into a new era of "artificial general intelligence". This paper reviews the latest progress and development trends of two hot topics in the "big data, small task" special-purpose AI stage on the path toward artificial general intelligence: AI large language models and AI chips. In the area of AI large language models, it reviews and analyzes their origin and current state, including the development of the two technical routes of expert systems and chatbots, the leading position of OpenAI's ChatGPT among large models, and new progress in surveying, deepening, improving, and applying large models. In the area of AI chips, it reviews and analyzes the latest progress in cloud-computing AI chips and edge-computing AI chips driven by the development of large AI models, including next-generation GPUs, TPUs, new cloud AI chip architectures, NPU-architecture edge AI chips, digital edge AI chips, digital CIM-based analog AI chips, and analog CIM AI chips. The pattern of emergent innovation in large language models and the hallmarks of a golden age of AI chip architecture innovation deserve close attention.
Keywords: ChatGPT; large language model; artificial general intelligence (AI); AI chip; cloud-computing AI chip; edge-computing AI chip
New Advances in AI Large Language Models and AI Chips (Continued) (Cited by 6)
19
Author: 赵正平. Micronanoelectronic Technology (微纳电子技术), 2025, No. 4, pp. 1-33 (33 pages)
The development of large language models, exemplified by ChatGPT, marks the entry of artificial intelligence (AI) into a new era of "artificial general intelligence". This paper reviews the latest progress and development trends of two hot topics in the "big data, small task" special-purpose AI stage on the path toward artificial general intelligence: AI large language models and AI chips. In the area of AI large language models, it reviews and analyzes their origin and current state, including the development of the two technical routes of expert systems and chatbots, the leading position of OpenAI's ChatGPT among large models, and new progress in surveying, deepening, improving, and applying large models. In the area of AI chips, it reviews and analyzes the latest progress in cloud-computing AI chips and edge-computing AI chips driven by the development of large AI models, including next-generation GPUs, TPUs, new cloud AI chip architectures, NPU-architecture edge AI chips, digital edge AI chips, digital CIM-based analog AI chips, and analog CIM AI chips. The pattern of emergent innovation in large language models and the hallmarks of a golden age of AI chip architecture innovation deserve close attention.
Keywords: ChatGPT; large language model; artificial general intelligence (AI); AI chip; cloud-computing AI chip; edge-computing AI chip
Spintronic Brain-Inspired Neuromorphic Computing (Cited by 1)
20
Authors: 张帅, 陈丽娜, 刘荣华. Journal of Sichuan Normal University (Natural Science Edition) (四川师范大学学报(自然科学版)) (CAS), 2025, No. 2, pp. 176-191 (16 pages)
Brain-inspired computing aims to emulate the brain's information processing and learning capabilities in order to solve complex computational problems; one of its key ideas is to emulate the behavior of biological neurons and synapses to realize information transmission, processing, and storage. Spintronic devices, with their non-volatility, high speed and low power consumption, nearly unlimited endurance, and intrinsic nonlinearity, have already been widely explored for brain-inspired computing with excellent results. Building on an introduction and summary of the various magnetoresistance effects, spin-transfer-torque and spin-orbit-torque effects, voltage-controlled magnetic anisotropy, and nonlinear magnetization dynamics in spintronics, this paper takes the applications of spintronic devices in reservoir computing, Ising machines, spiking neural networks, and true random number generators as examples, and looks ahead to the prospects and trends of spintronic brain-inspired neuromorphic computing hardware in future AI chips.
Keywords: spintronics; neural network; neuromorphic computing; brain-inspired AI chip
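Of the paradigms listed in the abstract, reservoir computing is the easiest to sketch in software: only a linear readout is trained, while a fixed nonlinear dynamical system (in hardware, e.g. a spintronic oscillator) supplies the rich state. The minimal echo-state-style example below uses a generic tanh reservoir as a software stand-in; the sizes and the toy delay-recall task are illustrative assumptions only.

    import numpy as np

    # Minimal echo-state-style reservoir computing sketch. The tanh reservoir is a
    # generic software stand-in for a spintronic nonlinear node; sizes and the toy
    # delay-recall task are illustrative assumptions only.
    rng = np.random.default_rng(0)
    n_res, n_steps = 100, 2000
    W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))
    W = rng.normal(0, 1, size=(n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))          # scale spectral radius below 1

    u = rng.uniform(0, 0.5, size=n_steps)              # random input sequence
    target = np.roll(u, 3)                             # toy task: recall input 3 steps back

    x = np.zeros(n_res)
    states = np.zeros((n_steps, n_res))
    for t in range(n_steps):
        x = np.tanh(W_in[:, 0] * u[t] + W @ x)         # fixed reservoir dynamics
        states[t] = x

    # Only the linear readout is trained (least squares on a training window).
    W_out, *_ = np.linalg.lstsq(states[100:1500], target[100:1500], rcond=None)
    pred = states[1500:] @ W_out
    err = np.sqrt(np.mean((pred - target[1500:]) ** 2)) / np.std(target[1500:])
    print(f"normalized RMSE on held-out steps: {err:.3f}")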