Funding: supported by the National Natural Science Foundation of China (62371465), the Taishan Scholar Project of Shandong Province (ts201511020), and the Chinese National Key Laboratory of Science and Technology on Information System Security (6142111190404).
Abstract: The syndrome a posteriori probability of the log-likelihood ratios of intercepted codewords is used to develop an algorithm that recognizes the code length and generator matrix of the underlying polar code. Based on the encoding structure, three theorems are proved: two on the relationship between the length and rate of the polar code, and one on the relationship between frozen-bit positions, information-bit positions, and codewords. With these three theorems, polar codes can be quickly reconstructed. In addition, to detect the dual vectors of the codewords, the statistical characteristics of the log-likelihood ratio are analyzed, and the information- and frozen-bit positions are then distinguished based on the minimum-error decision criterion, from which the code rate is obtained. The correctness of the theorems and the effectiveness of the proposed algorithm are validated through simulations. The proposed algorithm exhibits robustness to noise and reasonable computational complexity.
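The frozen-bit/dual-vector relationship exploited above can be illustrated with the standard polar transform G = F^(⊗n). The sketch below is a minimal illustration, not the paper's algorithm; the frozen set shown is an arbitrary example. Because F^(⊗n) is its own inverse over GF(2), the columns of G at frozen positions are orthogonal to every codeword and thus act as the dual vectors a reconstruction algorithm can search for:

```python
import numpy as np

def polar_transform(n):
    """Return the N x N polar transform matrix F^(kron n) over GF(2), N = 2^n."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, F) % 2
    return G

def encode(u, G):
    """Encode a length-N bit vector u (frozen bits set to 0) as x = u G mod 2."""
    return u @ G % 2

# F^(kron n) is involutory mod 2, so u = x G; hence u_i = x . G[:, i].
# For a frozen index i this inner product is 0 for every codeword,
# i.e. the columns of G at frozen positions are dual vectors.
N = 8
G = polar_transform(3)
frozen = [0, 1, 2, 4]        # example frozen set (low-reliability indices)
info = [3, 5, 6, 7]
u = np.zeros(N, dtype=np.uint8)
u[info] = [1, 0, 1, 1]
x = encode(u, G)
```

Checking `x @ G[:, i] % 2 == 0` for each frozen index i against many intercepted codewords is one way such dual vectors reveal the frozen-bit positions.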
Abstract: Because the distribution of scores differs across trials, the performance of a speaker verification system is seriously diminished if raw scores are used directly for detection with a single unified threshold; the scores must therefore be normalized. To address the shortcomings of existing score normalization methods, we propose a speaker verification system based on log-likelihood normalization (LLN). Without requiring a priori knowledge, LLN increases the separation between the scores of target and non-target speaker models, reducing the aliasing of "same-speaker" and "different-speaker" scores for the same test speech and enabling better discrimination and decision capability. Experiments show that LLN is an effective score normalization method.
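As a rough illustration of the normalization idea only — the paper's exact LLN formula is not reproduced here, so the form below is an assumption: subtracting the log-average likelihood of competing background models from the raw target log-likelihood puts scores from different test utterances on a comparable scale before a single threshold is applied:

```python
import numpy as np

def lln_score(target_loglik, background_logliks):
    """Illustrative log-likelihood normalization (assumed form): shift the
    raw target score by the log of the mean likelihood of background
    speaker models for the same test utterance."""
    b = np.asarray(background_logliks, dtype=float)
    # log of the mean likelihood over background models, computed stably
    log_mean = np.logaddexp.reduce(b) - np.log(len(b))
    return target_loglik - log_mean

# A target model that fits the utterance much better than the background
# gets a large positive normalized score regardless of the raw scale.
score = lln_score(-10.0, [-20.0, -20.0])   # -> 10.0
```

A score near zero then means "no better than the background", independent of how the raw likelihoods of that particular utterance are scaled.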
Funding: supported by the National Natural Science Foundation of China (61172073), the State Key Laboratory of Rail Traffic Control and Safety, Beijing Jiaotong University (RCS2011ZT003), the Open Research Fund of the Key Laboratory of Wireless Sensor Network & Communication, Chinese Academy of Sciences (2011005), the Fundamental Research Funds for the Central Universities of the Ministry of Education of China (2013JBZ001, 2012YJS129, 2009JBM012), and the Program for New Century Excellent Talents in University of the Ministry of Education of China (NCET-12-0766).
Abstract: Owing to the openness of cognitive radio networks, spectrum sensing data falsification (SSDF) attacks can easily corrupt cooperative spectrum sensing, yet no effective countermeasure has been proposed in prior work. This paper therefore introduces malicious-user removal into the weighted sequential probability ratio test (WSPRT). Each terminal's weight is set according to the accuracy of its spectrum sensing reports, and the same weight is used to detect malicious users: a terminal with a low weight is treated as malicious and removed by the aggregation center. Simulation results show that the improved WSPRT achieves higher performance than two conventional sequential detection methods under different numbers of malicious users.
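The mechanism can be sketched as follows. This is an illustrative simplification: the decision thresholds, the weight-update rule, the removal floor, and the per-terminal detection/false-alarm probabilities below are all assumed values, not the paper's exact parameters:

```python
import numpy as np

P_D, P_F = 0.9, 0.1          # assumed per-terminal detection / false-alarm probabilities

def wsprt(reports, weights, eta0=-2.0, eta1=2.0):
    """Accumulate weighted log-likelihood ratios of binary sensing reports
    sequentially until one of the two decision thresholds is crossed."""
    s = 0.0
    for r, w in zip(reports, weights):
        llr = np.log(P_D / P_F) if r == 1 else np.log((1 - P_D) / (1 - P_F))
        s += w * llr
        if s >= eta1:
            return 1          # decide: primary user present
        if s <= eta0:
            return 0          # decide: channel free
    return 1 if s > 0 else 0  # samples exhausted: fall back to the sign of s

def update_weights(weights, reports, truth, lr=0.1, floor=0.2):
    """Reward terminals whose report matched the (later verified) truth and
    penalize the rest; a terminal whose weight falls below `floor` is
    removed (weight zeroed) as a suspected SSDF attacker."""
    w = np.array(weights, dtype=float)
    for i, r in enumerate(reports):
        w[i] = min(1.0, w[i] + lr) if r == truth else max(0.0, w[i] - lr)
    w[w < floor] = 0.0
    return w
```

A terminal that consistently reports the opposite of the true channel state sees its weight decay to zero, after which its falsified reports no longer move the test statistic at all.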
Abstract: Blind recognition of convolutional codes is the foundation for blind recognition of high-performance codes such as concatenated codes and Turbo codes, which demands convolutional-code recognition methods with strong noise resistance. Using the soft-decision information from the receiver's demodulator is the key to improving noise resistance. This paper first explains, through theoretical analysis from the perspective of probability distributions, why existing soft-decision methods lack noise resistance: candidate solution vectors with small Hamming weight severely reduce the probability of correct recognition. A solution based on a least-squares cost function is then proposed and theoretically shown to effectively mitigate the influence of Hamming weight on recognition performance. Finally, the theoretical conclusions are verified through simulation experiments. Theory and experiment show that the proposed method improves the noise resistance of blind convolutional-code recognition by about 1 dB.
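The general setting can be sketched as follows. The cost below is an illustrative least-squares-style criterion built on the usual soft-syndrome statistic, assumed for exposition; it is not claimed to be the paper's exact cost function. Candidate dual vectors are scored on blocks of demodulator LLRs, and a true dual vector drives the soft syndrome toward +1, giving a small squared deviation:

```python
import numpy as np

def soft_syndrome(llr_block, h):
    """Soft check value of candidate dual vector h on one block of LLRs:
    close to +1 when the hard syndrome is reliably 0, close to -1 when
    it is reliably 1."""
    return np.prod(np.tanh(llr_block[h == 1] / 2.0))

def ls_cost(llrs, h):
    """Illustrative least-squares-style cost (assumed form): mean squared
    deviation of the soft syndrome from +1 over all received blocks.
    A true dual vector yields a cost near zero."""
    return np.mean([(1.0 - soft_syndrome(b, h)) ** 2 for b in llrs])

# Toy example: a rate-1/2 code that repeats each bit, so h = [1, 1] is a
# true dual vector; LLRs are simulated at high SNR as 4 * (1 - 2 * bit).
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 50)
llrs = np.array([[4.0 * (1 - 2 * b), 4.0 * (1 - 2 * b)] for b in bits])
h_true = np.array([1, 1])
h_wrong = np.array([1, 0])
```

Here `ls_cost(llrs, h_true)` stays small for every block, while the low-Hamming-weight wrong candidate `h_wrong` is penalized quadratically whenever its soft syndrome swings negative, which is the kind of weight-dependent bias the squared cost is meant to tame.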