Abstract: To address the difficult problem of sequence estimation for long and short code direct sequence spread spectrum (LSC-DSSS) signals, a sequence estimation method based on a novel information criterion (NIC) neural network combined with the Massey algorithm is proposed, under the condition that the LSC-DSSS signal parameters are known. The LSC-DSSS signal is fed into the NIC neural network to estimate the random sampling start point, and the network's weight vector is then trained by continuously feeding in data. When the network converges, the sign values of the weight vector form a segment of the composite code sequence of the LSC-DSSS signal. Delay-and-multiply processing removes the amplitude ambiguity and the influence of the short spreading code sequence, after which the Massey algorithm yields the generator polynomial of the scrambling code sequence. Simulation results show that the NIC neural network improves noise resistance by 6 dB over the eigenvalue decomposition method and requires 50% fewer training batches than a Hebbian-rule neural network.
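The final step above applies the Massey algorithm to the recovered scrambler chips to obtain the generator polynomial. As a generic illustration (not the paper's implementation), the Berlekamp-Massey algorithm over GF(2), which finds the shortest LFSR generating a given binary sequence, can be sketched as:

```python
def berlekamp_massey(bits):
    """Return (L, C): L is the length of the shortest LFSR generating the
    GF(2) sequence `bits`, and C its connection polynomial coefficients
    [c0=1, c1, ..., cL] (i.e. C(x) = 1 + c1*x + ... + cL*x^L)."""
    n = len(bits)
    c = [0] * n
    b = [0] * n
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        # Discrepancy: does the current LFSR predict bit i correctly?
        d = bits[i]
        for j in range(1, L + 1):
            d ^= c[j] & bits[i - j]
        if d:
            t = c[:]            # save current polynomial
            shift = i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:      # LFSR must grow
                L = i + 1 - L
                m = i
                b = t
    return L, c[:L + 1]
```

Fed a sequence produced by the LFSR with feedback polynomial 1 + x + x^3, the sketch returns L = 3 and the coefficient list [1, 1, 0, 1].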
Funding: Supported by the China Mobile Research Institute and the China National S&T Major Project (2010ZX03003-003).
Abstract: An effective Luby transform (LT) encoding algorithm based on short-cycle elimination is proposed to improve the decoding probability of short-length LT codes. By searching the generator matrix, the encoder generates some special encoded symbols that effectively break the short cycles, which degrade the performance of LT codes. Analysis and numerical results show that the proposed algorithm reduces encoding complexity and improves decoding probability in both binary erasure channels (BECs) and additive white Gaussian noise (AWGN) channels.
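For context, a plain LT encoder and peeling decoder, without the paper's short-cycle elimination, can be sketched as follows; the degree distribution and the integer symbol representation are illustrative assumptions:

```python
import random

def lt_encode(source, num_encoded, degree_dist, rng):
    """Produce LT-coded symbols: each is the XOR of d randomly chosen
    source symbols, with d drawn from `degree_dist`, a list of
    (degree, weight) pairs. Returns (neighbor_indices, value) pairs."""
    degrees, weights = zip(*degree_dist)
    out = []
    for _ in range(num_encoded):
        d = rng.choices(degrees, weights=weights)[0]
        nbrs = rng.sample(range(len(source)), d)
        val = 0
        for i in nbrs:
            val ^= source[i]
        out.append((tuple(nbrs), val))
    return out

def lt_decode(encoded, k):
    """Peeling decoder for the erasure channel: repeatedly release
    degree-1 symbols and subtract recovered sources from the rest.
    Short cycles in the underlying graph are what can stall this ripple."""
    known = [None] * k
    work = [[set(nbrs), val] for nbrs, val in encoded]
    changed = True
    while changed:
        changed = False
        for sym in work:
            nbrs, val = sym
            for i in [i for i in nbrs if known[i] is not None]:
                nbrs.discard(i)
                val ^= known[i]
            sym[1] = val
            if len(nbrs) == 1:
                i = next(iter(nbrs))
                if known[i] is None:
                    known[i] = val
                    changed = True
    return known
```

With at least one degree-1 symbol to seed the ripple, the decoder recovers all sources; otherwise entries of the result stay `None`.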
Funding: Supported by the National Natural Science Foundation of China (11271237).
Abstract: In this paper, we propose a novel space-efficient secret sharing scheme based on minimal linear codes that satisfies the definition of a computationally efficient secret sharing scheme. The scheme partitions the underlying minimal linear code into disjoint classes, establishing a one-to-one correspondence between the minimal authorized subsets of participants and the representative codewords of the classes. Each participant, holding only one short share transmitted over a public channel, can share a large secret. The proposed scheme can therefore distribute a large secret in practical applications such as secure information dispersal in sensor networks and secure multiparty computation.
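The classical construction this line of work builds on is Massey's secret sharing from a linear code: the secret is the first coordinate of a random codeword and the remaining coordinates are the shares. A toy sketch over GF(2), with an illustrative [3, 2] code rather than one of the paper's minimal codes, looks like:

```python
import random
from itertools import product

# Toy [3, 2] binary code: c = (u0 ^ u1, u0, u1). Coordinate 0 carries the
# secret; coordinates 1..n-1 are handed out as shares (Massey's construction).
G = [[1, 1, 0],
     [1, 0, 1]]

def encode(u):
    """Multiply message u by the generator matrix G over GF(2)."""
    return [sum(u[i] & G[i][j] for i in range(len(G))) % 2
            for j in range(len(G[0]))]

def deal(secret, rng):
    """Pick a random codeword whose first coordinate equals the secret;
    the remaining coordinates are the participants' shares."""
    while True:
        u = [rng.randrange(2) for _ in G]
        c = encode(u)
        if c[0] == secret:
            return c[1:]

def recover(positions, values):
    """Brute-force recovery (fine at toy size): the secret is determined
    iff every message consistent with the observed shares gives the same
    coordinate 0; an unauthorized set sees more than one candidate."""
    secrets = {encode(u)[0]
               for u in product(range(2), repeat=len(G))
               if all(encode(u)[p] == v for p, v in zip(positions, values))}
    return secrets.pop() if len(secrets) == 1 else None
```

Here both shares together determine the secret, while either share alone leaves both secret values possible.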
Funding: Supported by the National Natural Science Foundation of China (61272451, 61572380, 61772383, and 61702379) and the Major State Basic Research Development Program of China (2014CB340600).
Abstract: The global growth of the Internet and the rapid expansion of social networks such as Facebook make multilingual sentiment analysis of social media content increasingly necessary. This paper performs the first sentiment analysis on code-mixed Bambara-French Facebook comments. We develop four Long Short-Term Memory (LSTM)-based models and two Convolutional Neural Network (CNN)-based models, and use these six models, Naïve Bayes, and Support Vector Machines (SVM) to conduct experiments on a purpose-built dataset. Social media text written in Bambara is scarce; to mitigate this, we use dictionaries of character and word indexes to produce character and word embeddings in place of pre-trained word vectors. We investigate the effect of comment length on the models and compare them. The best-performing model is a one-layer CNN with an accuracy of 83.23%.
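The dictionary-of-indexes approach that the paper substitutes for pre-trained vectors can be sketched as follows; the index conventions here (0 for padding, 1 for unknown characters) are illustrative assumptions, not the paper's exact choices:

```python
def build_char_index(corpus):
    """Map each character seen in the corpus to a positive integer index;
    0 is reserved for padding and 1 for out-of-vocabulary characters."""
    chars = sorted({ch for text in corpus for ch in text})
    return {ch: i + 2 for i, ch in enumerate(chars)}

def encode_comment(text, char_index, max_len):
    """Turn a comment into a fixed-length sequence of character indexes,
    truncating or zero-padding to max_len; an embedding layer would then
    learn a vector per index during training."""
    ids = [char_index.get(ch, 1) for ch in text[:max_len]]
    return ids + [0] * (max_len - len(ids))
```

Word-level indexing works the same way with tokens in place of characters; the resulting integer sequences feed the LSTM and CNN models directly.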