Journal Articles
2 articles found
1. Multi-Candidate Voting Model Based on Blockchain (cited by 3)
Authors: Dongliang Xu, Wei Shi, Wensheng Zhai, Zhihong Tian
IEEE/CAA Journal of Automatica Sinica (SCIE, EI, CSCD), 2021, No. 12, pp. 1891-1900 (10 pages)
Abstract: Electronic voting has partially solved the problems of poor anonymity and low efficiency associated with traditional voting. However, the difficulties it introduces into the supervision of the vote counting, as well as its need for a concurrent guaranteed trusted third party, should not be overlooked. With the advent of blockchain technology in recent years, its features such as decentralization, anonymity, and non-tampering have made it a good candidate in solving the problems that electronic voting faces. In this study, we propose a multi-candidate voting model based on the blockchain technology. With the introduction of an asymmetric encryption and an anonymity-preserving voting algorithm, votes can be counted without relying on a third party, and the voting results can be displayed in real time in a manner that satisfies various levels of voting security and privacy requirements. Experimental results show that the proposed model solves the aforementioned problems of electronic voting without significant negative impact from an increasing number of voters or candidates.
Keywords: Blockchain; multi-candidate voting model; voting; voting anonymity; confusion algorithm
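The non-tampering property the abstract relies on comes from hash-linking each recorded ballot to its predecessor. A minimal sketch of that mechanism is below; it is not the authors' protocol (the asymmetric encryption of ballots and the anonymity-preserving counting step are omitted, and the block fields are illustrative):

```python
import hashlib
import json

def block_hash(body):
    """Deterministic SHA-256 over a block's contents."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_vote(chain, ballot):
    """Append a ballot block linked to the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = {"index": len(chain), "ballot": ballot, "prev_hash": prev}
    block = dict(body, hash=block_hash(body))
    chain.append(block)
    return chain

def verify_chain(chain):
    """Recompute every hash and link; any edit breaks verification."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
for ballot in ["A", "B", "A"]:
    append_vote(chain, ballot)
assert verify_chain(chain)
chain[1]["ballot"] = "C"        # tamper with a recorded ballot
assert not verify_chain(chain)  # the altered block no longer verifies
```

Because each block embeds its predecessor's hash, changing any ballot invalidates every subsequent link, which is what lets the count be audited without a trusted third party.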
2. Fast collaborative inference via distributed speculative decoding
Authors: Ce Zheng, Ke Zhang, Chen Sun, Wenqi Zhang, Qiong Liu, Angesom Ataklity Tesfay
Journal of Information and Intelligence, 2026, No. 1, pp. 67-85 (19 pages)
Abstract: Speculative decoding accelerates Large Language Model (LLM) inference by allowing a lightweight draft model to predict multiple future tokens that are subsequently verified by a larger target model. In AI-native Radio Access Networks (AI-RAN), this mechanism naturally enables device-edge collaborative inference. However, existing distributed speculative decoding schemes incur significant uplink communication overhead, as they require transmitting full-vocabulary logits at every decoding step. To address this challenge, we propose a sparsify-then-sample strategy, termed Truncated Sparse Logits Transmission (TSLT), which transmits only the logits and indices of a truncated candidate set. We provide theoretical guarantees showing that TSLT preserves the acceptance rate of speculative decoding. The proposed framework is further extended to a multi-candidate setting, where multiple draft candidates per step increase the acceptance probability. Extensive experiments demonstrate that TSLT substantially reduces uplink communication while maintaining end-to-end inference latency and model quality, validating its effectiveness for scalable and communication-efficient distributed LLM inference in future AI-RAN systems.
Keywords: Collaborative inference; Speculative decoding; Truncated sampling; Multi-candidate; Token tree
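The core idea the abstract describes, uploading only a truncated candidate set and then running the standard speculative-decoding acceptance test (keep a drafted token with probability min(1, p_target/p_draft)), can be sketched as follows. This is a toy illustration, not the paper's TSLT algorithm: the vocabulary size, logits, and helper names are assumptions, and the theoretical acceptance-rate guarantee is not reproduced here.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def truncate_logits(logits, k):
    """Keep only the top-k (index, logit) pairs -- the uplink payload."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    return [(i, logits[i]) for i in top]

def speculative_accept(draft_probs, target_probs, token, rng):
    """Acceptance test: keep the drafted token w.p. min(1, p/q)."""
    q, p = draft_probs[token], target_probs[token]
    return rng.random() < min(1.0, p / q)

rng = random.Random(0)
draft_logits = [0.5, 2.0, 1.5, -1.0, 0.0, 3.0, 0.2, -0.5]   # device side
target_logits = [0.4, 1.8, 1.7, -1.2, 0.1, 2.9, 0.3, -0.4]  # edge side

sparse = truncate_logits(draft_logits, k=3)       # only 3 of 8 entries sent
idx = [i for i, _ in sparse]
q = softmax([logit for _, logit in sparse])       # renormalized draft dist.
token = rng.choices(idx, weights=q)[0]            # device drafts a token
p = softmax(target_logits)                        # edge target verifies
accepted = speculative_accept(dict(zip(idx, q)), p, token, rng)
```

The uplink payload shrinks from the full vocabulary to k (index, logit) pairs; the paper's contribution is proving this truncation can be done without lowering the acceptance rate, which the toy above does not attempt to show.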