DeepSeek, a Chinese artificial intelligence (AI) startup, has released its V3 and R1 series models, which have attracted global attention for their low cost, high performance, and open-source availability. This paper begins by reviewing the evolution of large AI models, focusing on paradigm shifts, the mainstream large language model (LLM) paradigm, and the DeepSeek paradigm. It then highlights the novel algorithms introduced by DeepSeek, including multi-head latent attention (MLA), mixture-of-experts (MoE), multi-token prediction (MTP), and group relative policy optimization (GRPO). The paper next explores DeepSeek's engineering breakthroughs in LLM scaling, training, inference, and system-level optimization architecture. Moreover, the impact of DeepSeek's models on the competitive AI landscape is analyzed by comparing them with mainstream LLMs across various fields. Finally, the paper reflects on the insights gained from DeepSeek's innovations and discusses future trends in the technical and engineering development of large AI models, particularly in data, training, and reasoning.
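Of the algorithms listed above, GRPO is the most self-contained to illustrate: it replaces the learned value critic of PPO-style methods with advantages computed relative to a group of sampled responses. The sketch below shows only that group-normalization step; the function and variable names are illustrative assumptions, not taken from DeepSeek's implementation.

```python
import statistics

def grpo_advantages(rewards):
    """Estimate per-response advantages the GRPO way: normalize each
    sampled response's reward by the mean and standard deviation of its
    group, so no separate value network is needed."""
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against zero-variance groups
    return [(r - mean) / std for r in rewards]

# Example: scalar rewards for a group of four responses to one prompt
advs = grpo_advantages([1.0, 0.0, 0.5, 0.5])
```

Because every advantage is measured against the group mean, the advantages in each group sum to zero, which is what lets GRPO drop the critic while keeping a baseline.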
Funding: Supported by the National Natural Science Foundation of China (62233005, 62293502, U2441245, 62176185, U23B2057, 62306112), the STCSM Science and Technology Innovation Action Plan Computational Biology Program (24JS2830400), the State Key Laboratory of Industrial Control Technology, China (ICT2024A22), the Shanghai Sailing Program (23YF1409400), and the National Science and Technology Major Project (2024ZD0532403).