Journal Articles
2 articles found
1. ResDecode: Accelerating Large Language Models Inference via Residual Decoding Heads
Authors: Ziqian Zeng, Jiahong Yu, Qianshi Pang, Zihao Wang, Huiping Zhuang, Fan Yu, Hongen Shao, Xiaofeng Zou. Big Data Mining and Analytics, 2025, Issue 4, pp. 779-793 (15 pages).
Large Language Models (LLMs) have immense potential to enhance the capabilities of Cyber-Physical-Social Intelligence (CPSI) systems, enabling them to better engage with complex cyber, physical, and social environments. However, the high inference latency of LLMs, inherited from the autoregressive decoding process, hinders their wide application in CPSI systems. To address this challenge, current approaches have incorporated speculative decoding to enable parallel prediction of multiple subsequent tokens, thereby accelerating inference. Nevertheless, the accuracy of these decoding heads falls short of the autoregressive decoding approach. In light of these limitations, we propose ResDecode, a novel speculative decoding method characterized by its efficient and accurate decoding heads. Within the lightweight draft model, we propose a residual decoding head to compensate for the full context encoder's limited capability on long-range dependencies, thus improving accuracy. ResDecode demonstrates impressive results, achieving a maximum speedup ratio of 3.2× on MT-bench compared to vanilla autoregressive decoding.
Keywords: speculative decoding; efficient inference; Large Language Models (LLMs)
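For readers unfamiliar with the speculative-decoding scheme the abstract builds on, the accept/verify loop can be sketched as below. The toy callables `draft_next` and `target_next` are illustrative placeholders (not ResDecode or its residual heads): a cheap draft model proposes several tokens, the target model checks them, and the longest agreeing prefix is kept.

```python
def speculative_step(prefix, draft_next, target_next, k=4):
    """Propose k draft tokens, then keep the longest prefix the target agrees with.

    draft_next / target_next: callables mapping a token list to the next token
    (greedy decoding is assumed for simplicity).
    """
    # 1. Draft phase: the cheap model proposes k tokens autoregressively.
    draft = []
    ctx = list(prefix)
    for _ in range(k):
        t = draft_next(ctx)
        draft.append(t)
        ctx.append(t)

    # 2. Verify phase: the target model checks each proposal in order and
    #    accepts tokens until the first disagreement.
    accepted = []
    ctx = list(prefix)
    for t in draft:
        if target_next(ctx) == t:
            accepted.append(t)
            ctx.append(t)
        else:
            break

    # 3. Always emit one token from the target itself, so every step makes
    #    progress even when the draft is entirely wrong.
    accepted.append(target_next(ctx))
    return accepted
```

When the draft agrees with the target, a single step yields up to k+1 tokens instead of one, which is the source of the speedup; a more accurate draft head (the goal of ResDecode) raises the acceptance length.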
2. YuNet: A Tiny Millisecond-level Face Detector (cited 4 times)
Authors: Wei Wu, Hanyang Peng, Shiqi Yu. Machine Intelligence Research (EI, CSCD), 2023, Issue 5, pp. 656-665 (10 pages).
Great progress has been made toward accurate face detection in recent years. However, heavy models and expensive computation costs make it difficult to deploy many detectors on mobile and embedded devices, where model size and latency are highly constrained. In this paper, we present a millisecond-level anchor-free face detector, YuNet, which is specifically designed for edge devices. There are several key contributions in improving the efficiency-accuracy trade-off. First, we analyse the influential state-of-the-art face detectors of recent years and summarize the rules for reducing model size. Then, a lightweight face detector, YuNet, is introduced. Our detector contains a tiny and efficient feature extraction backbone and a simplified pyramid feature fusion neck. To the best of our knowledge, YuNet has the best trade-off between accuracy and speed. It has only 75856 parameters, less than 1/5 of other small-size detectors. In addition, a training strategy is presented for the tiny face detector, which can effectively train models with the same distribution as the training set. The proposed YuNet achieves 81.1% mAP (single-scale) on the WIDER FACE validation hard track with high inference efficiency (Intel i7-12700K: 1.6 ms per frame at 320×320). Because of its unique advantages, the repository for YuNet and its predecessors has been popular on GitHub and has gained more than 11K stars at https://github.com/ShiqiYu/libfacedetection.
Keywords: face detection; object detection; computer vision; lightweight; inference efficiency; anchor-free mechanism
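The "anchor-free mechanism" named in the keywords can be illustrated with a toy decoding step: each cell of a score map predicts a face score plus box parameters, and boxes are decoded directly from cell coordinates rather than from preset anchor shapes. The map layout, stride, and offset format below are illustrative assumptions, not YuNet's actual head format.

```python
import numpy as np

def decode_anchor_free(scores, offsets, stride=8, thresh=0.5):
    """Decode boxes from an anchor-free head.

    scores:  (H, W) face-score map.
    offsets: (H, W, 4) per-cell predictions (dx, dy, w, h) in pixels.
    Returns a list of (x, y, w, h, score) boxes for cells above `thresh`.
    """
    boxes = []
    ys, xs = np.where(scores > thresh)       # cells that fire
    for y, x in zip(ys, xs):
        dx, dy, w, h = offsets[y, x]
        cx = x * stride + dx                 # cell origin plus predicted offset
        cy = y * stride + dy
        boxes.append((cx - w / 2, cy - h / 2, w, h, float(scores[y, x])))
    return boxes
```

Because no anchor shapes or anchor-matching logic are needed, the head stays small, which is in line with the parameter budget the abstract emphasizes.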