Journal Articles
2 articles found
1. AI Computing Systems for Large Language Models Training (Cited by: 1)
Authors: Zhen-Xing Zhang, Yuan-Bo Wen, Han-Qi Lyu, Chang Liu, Rui Zhang, Xia-Qing Li, Chao Wang, Zi-Dong Du, Qi Guo, Ling Li, Xue-Hai Zhou, Yun-Ji Chen. Journal of Computer Science & Technology, 2025, No. 1, pp. 6-41 (36 pages).
In this paper, we present a comprehensive overview of artificial intelligence (AI) computing systems for large language model (LLM) training. The rapid advancement of LLMs in recent years, coupled with the widespread adoption of algorithms and applications such as BERT, ChatGPT, and DeepSeek, has sparked significant interest in this field. We classify LLMs into encoder-only, encoder-decoder, and decoder-only models, and briefly analyze their training and inference processes to emphasize their substantial need for computational resources. These operations depend heavily on AI-specific accelerators such as GPUs (graphics processing units), TPUs (tensor processing units), and MLUs (machine learning units). However, as the gap widens between the increasing complexity of LLMs and the current capabilities of accelerators, it becomes essential to adopt heterogeneous computing systems optimized for distributed environments to manage the growing computational and memory requirements of LLMs. We delve into the execution and scheduling of LLM algorithms, underlining the critical role of distributed computing strategies, memory management enhancements, and improved computational efficiency. This paper clarifies the complex relationship between algorithm design, hardware infrastructure, and software optimization, and provides an in-depth understanding of both the software and hardware infrastructure supporting LLM training, offering insights into the challenges and potential avenues for future development and deployment.
Keywords: artificial intelligence (AI) chip, large language model (LLM), AI computing system, accelerator
2. Xiaomi's "Smartphone × AIoT" Security and Privacy Technologies (Cited by: 2)
Authors: Cui Baoqiu, Song Wenkuan, Wang Baolin, Pan Shuangquan, Zhang Xiaofang, Zhao Tongtong, Lyu Yingnan. Journal of Wuhan University (Natural Science Edition), indexed in CAS, CSCD, and PKU Core, 2022, No. 1, pp. 1-7 (7 pages).
In the era of ubiquitous connectivity, security and privacy risks are steadily growing, and more and more people are becoming concerned about the security and privacy of the products they use. Xiaomi operates multiple business lines, including smartphones and IoT, and "Smartphone × AIoT" has become the company's core strategy. Around smartphones and AIoT (AI-powered IoT), Xiaomi faces substantial challenges in information security and privacy protection and has invested heavily in addressing them. Based on the history of Xiaomi's work on information security and privacy protection, this paper introduces its security and privacy technologies in the smartphone, IoT, and AI domains. These technologies include the trusted execution environment MiTEE (Mi trusted execution environment), differential privacy, MIUI privacy protection features, privacy protection for AI algorithms, the on-device deep learning framework MACE (mobile AI compute engine), the IoT software development platform Xiaomi Vela, and other IoT security capabilities.
Keywords: information security, privacy protection, MIUI, differential privacy, MiTEE (Mi trusted execution environment), MACE (mobile AI compute engine), Xiaomi Vela