Abstract: Space-time coding radar has recently been proposed and investigated. It is a radar framework that can perform transmit beamforming at the receiver. However, the range resolution decreases as the number of transmit elements increases. A subarray-based space-time coding (sub-STC) radar is explored to alleviate this range-resolution reduction. In the proposed configuration, an identical waveform is transmitted, with a small time offset introduced between subarrays. The multidimensional ambiguity function of the sub-STC radar is defined by considering resolutions in multiple domains, including range, Doppler, angle, and probing direction. Analyses of the properties of this multidimensional ambiguity function with regard to spatial coverage, resolution performance, and sidelobe levels are also given. Results reveal that the range resolution and low-sidelobe performance are improved with the proposed approach.
Abstract: In this paper, we propose to generalize the coding schemes first proposed by Kozic et al. to highly spectrally efficient modulation schemes. We first study Chaos Coded Modulation based on a small-dimensional modulo-MAP encoding process, and we give a method for studying the distance spectrum of such coding schemes so as to accurately predict their performance. However, the obtained performance is quite poor. To improve it, we then use a high-dimensional modulo-MAP mapping process similar to the low-density generator-matrix (LDGM) codes introduced by Kozic et al. The main difference from their work is that we use an encoding and decoding process over GF(2^m), which yields better performance while preserving a fairly simple decoding algorithm when the Extended Min-Sum (EMS) algorithm of Declercq and Fossorier is used.
Abstract: The progressive edge-growth (PEG) algorithm is a general method for constructing short low-density parity-check (LDPC) codes; it is a greedy method that places each edge so as to keep the girth large. To improve the performance of LDPC codes, many improved PEG (IPEG) algorithms employ multiple metrics to select surviving edges in turn. In this paper, the proposed edge metric (EM), based on the message-passing algorithm (MPA), is introduced into the PEG algorithm; the resulting EM-constrained PEG (EM-PEG) algorithm mainly considers the independence of messages passed from different nodes in the Tanner graph. Numerical results show that our EM-PEG algorithm brings larger bit error rate (BER) performance gains to LDPC codes than the traditional PEG algorithm and the powerful multi-edge multi-metric constrained PEG algorithm (MM-PEGA) proposed recently. In addition, the multi-edge EM-constrained PEG (M-EM-PEG) algorithm, which adopts a multi-edge EM, may further improve the BER performance.
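The core PEG placement rule mentioned above (for each new edge of a bit node, choose a check node as far as possible from it in the current Tanner graph, breaking ties by lowest check degree) can be sketched as follows. This is a minimal illustration of the classical PEG step only, not of the EM or MM metrics from the abstract; all function and variable names are hypothetical:

```python
def peg_select_check(bit_adj, check_adj, bit, num_checks):
    """One PEG edge placement for `bit`: BFS over the current Tanner
    graph; prefer checks not yet reachable from `bit`, otherwise the
    checks at maximum BFS depth, breaking ties by lowest degree."""
    depth = {}                    # check node -> BFS depth from `bit`
    seen_bits = {bit}
    frontier, d = {bit}, 0
    while frontier:
        new_checks = set()
        for b in frontier:
            for c in bit_adj[b]:
                if c not in depth:
                    depth[c] = d
                    new_checks.add(c)
        frontier = set()
        for c in new_checks:
            for b2 in check_adj[c]:
                if b2 not in seen_bits:
                    seen_bits.add(b2)
                    frontier.add(b2)
        d += 1
    unreachable = [c for c in range(num_checks) if c not in depth]
    if unreachable:               # connecting here closes no new cycle
        cands = unreachable
    else:                         # deepest check -> largest local girth
        dmax = max(depth.values())
        cands = [c for c in depth if depth[c] == dmax]
    # avoid duplicating an existing edge where possible
    cands = [c for c in cands if c not in bit_adj[bit]] or cands
    return min(cands, key=lambda c: len(check_adj[c]))

# Grow a toy Tanner graph: 6 bit nodes of degree 2, 4 check nodes.
num_checks, num_bits, dv = 4, 6, 2
bit_adj = {b: [] for b in range(num_bits)}
check_adj = {c: [] for c in range(num_checks)}
for b in range(num_bits):
    for _ in range(dv):
        c = peg_select_check(bit_adj, check_adj, b, num_checks)
        bit_adj[b].append(c)
        check_adj[c].append(b)
```

On this toy instance the greedy rule already balances the check degrees and yields a 4-cycle-free graph, which is the behavior the PEG family exploits at scale.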
Funding: the National Natural Science Foundation of China (Nos. 61401164, 61471131, and 61201145) and the Natural Science Foundation of Guangdong Province (No. 2014A030310308)
Abstract: In this paper, a family of rate-compatible (RC) low-density parity-check (LDPC) convolutional codes is obtained from RC-LDPC block codes by a graph extension method. The resulting RC-LDPC convolutional codes, which are derived by permuting the matrices of the corresponding RC-LDPC block codes, are systematic and have maximum encoding memory. Simulation results show that the proposed RC-LDPC convolutional codes with belief propagation (BP) decoding collectively offer a steady performance improvement over their block counterparts on binary-input additive white Gaussian noise channels (BI-AWGNCs).
Funding: Supported by the National Natural Science Foundation of China (61071221, 10831002)
Abstract: Let Φ(u×v, k, λa, λc) be the largest possible number of codewords among all two-dimensional (u×v, k, λa, λc) optical orthogonal codes (OOCs). A 2-D (u×v, k, λa, λc)-OOC with Φ(u×v, k, λa, λc) codewords is said to be maximum. In this paper, the number of codewords of a maximum 2-D (u×v, 4, 1, 3)-OOC is determined.
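The correlation constraints that define a 2-D OOC can be verified directly. The sketch below assumes the common model in which codewords are u×v binary arrays with k ones and shifts are taken cyclically in the time (column) dimension only; this is an illustrative checker under that assumption, and all names are hypothetical:

```python
def corr(x, y, tau, u, v):
    """Correlation of two u-by-v 0/1 codewords under a cyclic
    time shift of tau columns."""
    return sum(x[i][j] * y[i][(j + tau) % v]
               for i in range(u) for j in range(v))

def is_2d_ooc(codewords, u, v, k, la, lc):
    """Check the defining properties of a 2-D (u x v, k, la, lc) OOC:
    weight k, autocorrelation <= la for every nonzero shift, and
    cross-correlation <= lc for every shift."""
    for x in codewords:
        if sum(map(sum, x)) != k:
            return False
        for tau in range(1, v):                 # autocorrelation
            if corr(x, x, tau, u, v) > la:
                return False
    for a in range(len(codewords)):             # cross-correlation
        for b in range(a + 1, len(codewords)):
            for tau in range(v):
                if corr(codewords[a], codewords[b], tau, u, v) > lc:
                    return False
    return True
```

A checker like this is useful for validating small constructions by hand before attempting a counting argument such as the one in the abstract.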
Abstract: By coupling a three-dimensional (3D) hydrodynamic model with a suspended-solids model, a 3D model for the transport of Fe and Mn in Arha Reservoir, China, was developed. The 3D velocity fields for the flood season are computed to drive the 3D model of Fe and Mn, in which the processes of advection, diffusion, redox, sorption, desorption, deposition, and resuspension are included. The model was calibrated by successively matching observed flow, suspended solids, and total concentrations of Fe and Mn in the water column and in the sediment. The model simulated both horizontal and vertical gradients of Fe and Mn in Arha Reservoir. It was found that Fe, and especially Mn, stratify in accordance with the stratification of DO during summer. The redox cycle across the water-sediment interface plays a principal role in the rise of Fe and Mn concentrations in the overlying water. It was also found that Fe and Mn loadings from the tributaries have a carryover effect on water quality through secondary contamination in the reservoir.
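For context, a transport model of this kind typically discretizes a 3-D advection-diffusion-reaction balance of the following generic form (this is a standard textbook equation, not the paper's exact formulation; the lumped source term S stands in for the redox, sorption/desorption, and deposition/resuspension processes listed above):

$$\frac{\partial C}{\partial t}+\frac{\partial (uC)}{\partial x}+\frac{\partial (vC)}{\partial y}+\frac{\partial (wC)}{\partial z}=\frac{\partial}{\partial x}\!\left(K_x\frac{\partial C}{\partial x}\right)+\frac{\partial}{\partial y}\!\left(K_y\frac{\partial C}{\partial y}\right)+\frac{\partial}{\partial z}\!\left(K_z\frac{\partial C}{\partial z}\right)+S,$$

where C is the Fe or Mn concentration, (u, v, w) is the velocity field supplied by the hydrodynamic model, and K_x, K_y, K_z are eddy diffusivities.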
Funding: Sponsored by the Key Projects of the National Natural Science Foundation of China (Grant No. 51038004) and the Western China Communications Construction and Technology Project (Grant No. 2009318000078)
Abstract: In this paper, a series of digital image processing methods was adopted to extract individual coarse aggregates from an asphalt-mixture specimen using high-resolution X-ray Computed Tomography (CT) images. Existing three-dimensional (3D) particle-matching methods based on two-dimensional (2D) continuous cross-sections were analyzed, and a new 'overlap area method' is presented. After the 3D particles were successfully extracted one by one, the basic parameters of each aggregate (perimeter, area, surface area, and volume) were calculated by the chain code method. Finally, the 3D mass-center coordinates and the sphericity index were introduced.
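The chain code computation mentioned above can be sketched for the 2-D case: given a closed Freeman 8-direction chain code, axis-aligned steps contribute 1 to the perimeter and diagonal steps contribute sqrt(2), while the enclosed area follows from the shoelace formula. This is a generic illustration of the technique, not the paper's implementation; names are hypothetical:

```python
import math

# Freeman 8-direction chain code: 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
STEPS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]

def perimeter_and_area(chain, start=(0, 0)):
    """Perimeter (axis steps count 1, diagonal steps sqrt(2)) and
    enclosed area (shoelace formula) of a closed chain-coded boundary."""
    x, y = start
    perim, area2 = 0.0, 0
    for c in chain:
        dx, dy = STEPS[c]
        perim += math.sqrt(2) if c % 2 else 1.0
        area2 += x * (y + dy) - (x + dx) * y   # shoelace cross term
        x, y = x + dx, y + dy
    assert (x, y) == start, "chain code must close the boundary"
    return perim, abs(area2) / 2.0
```

For example, the unit square is the chain [0, 2, 4, 6] with perimeter 4 and area 1; the 3-D surface-area and volume parameters in the abstract generalize the same slice-by-slice idea.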
Funding: This work is partially supported by the National Key Research and Development Project under Grant 2018YFB1802402.
Abstract: The error correction performance of Belief Propagation (BP) decoding for polar codes is satisfactory compared with Successive Cancellation (SC) decoding. Nevertheless, BP has to complete a fixed number of iterations, which results in high computational complexity. This necessitates an intelligent identification of successful BP decoding for early termination of the decoding process, to avoid unnecessary iterations and minimize the computational complexity of BP decoding. This paper proposes a hybrid technique that combines the "parity-check" with the "G-matrix" to reduce the computational complexity of the BP decoder for polar codes. The proposed hybrid technique takes advantage of the parity check to identify a valid codeword at an early stage and terminate the BP decoding process, which minimizes the overhead of the G-matrix and reduces the computational complexity of BP decoding. We explore a detailed mechanism incorporating the parity bits as an outer code and prove that the proposed hybrid technique minimizes the computational complexity while preserving the BP error correction performance. Moreover, a mathematical formulation of the proposed hybrid technique that minimizes the computation cost of the G-matrix is elaborated. The performance of the proposed hybrid technique is validated by comparing it with state-of-the-art early stopping criteria for BP decoding. Simulation results show that the proposed hybrid technique reduces the iterations of BP decoding by about 90% at high Signal-to-Noise Ratio (SNR) (i.e., 3.5 to 4 dB), and approaches the error correction performance of the G-matrix criterion and the conventional BP decoder for polar codes.
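The "G-matrix" stopping test described above amounts to re-encoding the hard-decided source vector and comparing it with the hard-decided codeword estimate after each BP iteration. A minimal sketch of that check (using the standard iterative polar butterfly transform over GF(2); the BP decoder itself is omitted, and the function names are hypothetical):

```python
def polar_transform(u):
    """Multiply u by the polar generator matrix F tensored with itself
    log2(N) times, over GF(2), via the iterative butterfly.
    N = len(u) must be a power of two."""
    x = list(u)
    n = len(x)
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]   # XOR = addition over GF(2)
        step *= 2
    return x

def gmatrix_early_stop(u_hat, x_hat):
    """G-matrix stopping test: BP may terminate once the hard-decided
    source vector re-encodes to the hard-decided codeword estimate."""
    return polar_transform(u_hat) == x_hat
```

Since the transform is an involution over GF(2), the same routine serves for both encoding and inversion; the hybrid technique in the abstract runs a cheaper parity check first and only falls back to this full re-encoding when needed.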
Funding: Supported by the JSPS Fellowship for Foreign Researchers under Grant No. 24.02049
Abstract: The statistical physics properties of low-density parity-check codes for the binary symmetric channel are investigated as a spin glass problem with multi-spin interactions and quenched random fields by the cavity method. By evaluating the entropy function at the Nishimori temperature, we find that irregular constructions with a heterogeneous degree distribution of check (bit) nodes have higher decoding thresholds than regular counterparts with a homogeneous degree distribution. We also show that the instability of the mean-field calculation takes place only after the entropy crisis, suggesting the presence of a frozen glassy phase at low temperatures. When no prior knowledge of the channel noise is assumed (searching for the ground state), we find that a reinforced strategy on normal belief propagation boosts the decoding threshold to a higher value than normal belief propagation. This value is close to the dynamical transition, where all local search heuristics fail to identify the true message (the codeword, or the ferromagnetic state). After the dynamical transition, the number of metastable states with larger energy density (than the ferromagnetic state) becomes exponentially numerous. When the noise level of the transmission channel approaches the static transition point, there start to exist exponentially numerous codewords sharing the identical ferromagnetic energy.
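For reference, the Nishimori temperature at which the entropy function above is evaluated has a simple closed form for the binary symmetric channel with flip probability p (a standard identity, stated here as context rather than taken from the abstract):

$$\beta_N=\frac{1}{2}\ln\frac{1-p}{p}.$$

At this inverse temperature the thermal measure is matched to the quenched channel disorder, so evaluating the free energy or entropy at β_N corresponds to marginalizing the true posterior over codewords.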
Funding: Supported by the National Natural Science Foundation of China (61201282)
Abstract: Blind separation of intercepted signals is a research topic of high importance for both military and civilian communication systems. A blind separation method for space-time block code (STBC) systems is proposed. Ordinary independent component analysis (ICA) cannot work when certain complex modulations are employed, since the assumption of mutual independence is not satisfied. The analysis shows that the source signals are group-wise independent, so multidimensional ICA (MICA) can be applied in this case instead of ordinary ICA. Utilizing the block-diagonal structure of the cumulant matrices, the JADE algorithm is generalized to the multidimensional case to separate the received data into mutually independent groups. Compared with ordinary ICA algorithms, the proposed method does not introduce additional ambiguities. Simulations show that the proposed method overcomes this drawback and, without utilizing coding information, achieves better performance than channel-estimation-based algorithms.
Funding: Supported by the National Natural Science Foundation of China (No. 69872016)
Abstract: By applying a result on geometric Goppa codes due to H. Stichtenoth, the true dimension of certain alternant codes is calculated. In many cases the results improve the usual lower bound on the dimension.
Funding: Supported by the Key Project of the National Natural Science Foundation of China (No. 60390540)
Abstract: This paper extends the class of Low-Density Parity-Check (LDPC) codes that can be constructed from shifted identity matrices. A new method for constructing regular LDPC codes is proposed. Two simple inequalities are adopted to avoid short cycles in the Tanner graph, which makes the girth of the Tanner graph at least 8. Because their parity-check matrices are made up of circulant matrices, the new codes are quasi-cyclic codes. They perform well under iterative decoding.
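The abstract's two inequalities (which target girth 8) are not reproduced here, but the same style of constraint can be illustrated with the classical shift condition for the weaker girth-6 case: a parity-check matrix assembled from L x L circulant permutation matrices with shift values p[i][j] has no 4-cycles iff every alternating sum of four shifts around a rectangle is nonzero modulo L. A sketch of that checker (names hypothetical):

```python
def girth_at_least_6(shifts, L):
    """4-cycle test for a QC-LDPC matrix built from L x L circulant
    permutation matrices: girth >= 6 iff
    p[i1][j1] - p[i1][j2] + p[i2][j2] - p[i2][j1] != 0 (mod L)
    for all row pairs i1 < i2 and column pairs j1 != j2."""
    rows, cols = len(shifts), len(shifts[0])
    for i1 in range(rows):
        for i2 in range(i1 + 1, rows):
            for j1 in range(cols):
                for j2 in range(cols):
                    if j1 == j2:
                        continue
                    d = (shifts[i1][j1] - shifts[i1][j2]
                         + shifts[i2][j2] - shifts[i2][j1]) % L
                    if d == 0:
                        return False
    return True
```

Girth-8 constructions such as the one in the abstract add a second, analogous constraint ruling out 6-cycles as well.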
Abstract: In this paper, only narrow-sense primitive BCH codes over GF(q) are considered. A formula that can be used in many cases is first presented for computing the dimension of BCH codes; it improves the result given by MacWilliams and Sloane in 1977. A new method for finding the dimension of all types of BCH codes is proposed. In the second part, it is proved that the BCH bound is the leader of some cyclotomic coset, and we conjecture that the minimum distance of any BCH code is also the leader of some cyclotomic coset.
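The dimension computation discussed above reduces to counting cyclotomic cosets: the generator polynomial of a narrow-sense BCH code of designed distance delta has as its root exponents the union of the q-cyclotomic cosets of 1, ..., delta-1 modulo n. A minimal brute-force sketch of that standard reduction (not the paper's improved formula; names are hypothetical):

```python
def cyclotomic_coset(s, q, n):
    """q-cyclotomic coset of s modulo n: {s, s*q, s*q^2, ...} mod n."""
    coset, x = set(), s % n
    while x not in coset:
        coset.add(x)
        x = (x * q) % n
    return coset

def bch_dimension(q, m, delta):
    """Dimension of a narrow-sense primitive BCH code of length
    n = q^m - 1 and designed distance delta: n minus the degree of the
    generator polynomial, i.e. minus the size of the union of the
    cosets of 1, ..., delta - 1."""
    n = q ** m - 1
    roots = set()
    for s in range(1, delta):
        roots |= cyclotomic_coset(s, q, n)
    return n - len(roots)
```

For example, over GF(2) with n = 15 this recovers the familiar (15, 11), (15, 7), and (15, 5) BCH codes for designed distances 3, 5, and 7.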
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 11071056, 11201114)
Abstract: Low-density parity-check (LDPC) codes were first presented by Gallager in 1962. They are linear block codes whose bit error rate (BER) performance approaches remarkably close to the Shannon limit. LDPC codes attracted much interest after their rediscovery by MacKay and Neal in 1995. This paper introduces some new LDPC codes constructed from combinatorial structures. We present regular LDPC codes based on group divisible designs whose Tanner graphs are free of four-cycles.
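The four-cycle-free property claimed for these Tanner graphs has a simple matrix characterization: H is free of 4-cycles iff no two rows share more than one position with a common 1 (for a design-based H, this mirrors the condition that any two blocks meet in at most one point). A small checker sketch, with hypothetical names:

```python
def four_cycle_free(H):
    """True iff the Tanner graph of the 0/1 matrix H has no 4-cycles,
    i.e. every pair of rows overlaps in at most one column."""
    m = len(H)
    for a in range(m):
        for b in range(a + 1, m):
            overlap = sum(x & y for x, y in zip(H[a], H[b]))
            if overlap > 1:       # two shared columns = one 4-cycle
                return False
    return True
```

Checks of this kind are how design-based constructions are routinely verified before simulation.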
Funding: Supported by the National Natural Science Foundation of China (No. 60572050) and the National Science Foundation of Hubei Province (No. 2004ABA049)
Abstract: This paper presents a matrix permuting approach to the construction of Low-Density Parity-Check (LDPC) codes. It investigates the structure of the sparse parity-check matrix defined by Gallager. It is found that constructing the sparse parity-check matrix requires an algorithm that searches efficiently and can handle constraint-satisfaction problems. The definition of the Q-matrix is given, and it is shown that the queen algorithm can be used to search for Q-matrices. By properly permuting Q-matrices as sub-matrices, a sparse parity-check matrix satisfying the constraint conditions is created, and a good regular LDPC code, called the Q-matrix LDPC code, is generated. The result of this paper is significant not only for designing low-complexity encoders and for improving the performance and reducing the complexity of iterative decoding algorithms, but also for building practical systems with encodable and decodable LDPC codes.
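The "queen algorithm" is described only abstractly above. As a hedged illustration of the search ingredient, a textbook n-queens backtracking routine returns a placement (one column per row, no two on a common row, column, or diagonal) that can be read as an n x n permutation-like sub-matrix pattern of the kind such a constraint-satisfaction search manipulates; this is not the paper's Q-matrix search, and all names are hypothetical:

```python
def queens(n):
    """Backtracking search for one n-queens placement. The result
    pos[r] = c can be read as an n x n 0/1 matrix with a single 1 per
    row and column and no two ones on a common diagonal."""
    cols, diag1, diag2, pos = set(), set(), set(), []

    def place(r):
        if r == n:
            return True
        for c in range(n):
            if c in cols or r + c in diag1 or r - c in diag2:
                continue
            cols.add(c); diag1.add(r + c); diag2.add(r - c); pos.append(c)
            if place(r + 1):
                return True
            cols.remove(c); diag1.remove(r + c); diag2.remove(r - c); pos.pop()
        return False

    return pos if place(0) else None
```

Backtracking with incremental constraint sets is the standard way to make such a search "efficient in search environments", which is the requirement the abstract identifies.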
Funding: Supported by the National Natural Science Foundation of China (Nos. 61074131, 91132722) and the Doctoral Fund of the Ministry of Education of China (20101202110007).
Abstract: Neuronal ensemble activity codes working memory. In this work, we developed a neuronal ensemble sparse coding method that can effectively reduce the dimension of neuronal activity and express neural coding. Multichannel spike trains were recorded in rat prefrontal cortex during a working memory task in a Y-maze. As discrete signals, spikes were transformed into continuous signals by estimating entropy. The normalized continuous signals were then decomposed via a non-negative sparse method. The non-negative components were extracted to reconstruct a low-dimensional ensemble, with none of the feature components missed. The results showed that, for well-trained rats, neuronal ensemble activities in the prefrontal cortex changed dynamically during the working memory task, and that the neuronal ensemble is represented more explicitly using non-negative sparse coding. Our results indicate that the neuronal ensemble sparse coding method can effectively reduce the dimension of neuronal activity and is a useful tool for expressing neural coding.
Funding: the National Natural Science Foundation of China(No.60272009, No.60472045, and No.60496313).
Abstract: This paper studies nonsystematic Low-Density Parity-Check (LDPC) codes based on Symmetric Balanced Incomplete Block Design (SBIBD). First, it is shown that the performance degradation of nonsystematic linear block codes is bounded by the average row weight of the generalized inverses of their generator matrices and by the code rate. A class of nonsystematic LDPC codes constructed from SBIBDs is then presented. Their characteristics include: both the generator matrices and the parity-check matrices are sparse and cyclic, making them simple to encode and decode; and codes of almost arbitrary rate can be easily constructed, so they are rate-compatible. Because sparse generalized inverses of the generator matrices exist, the performance of the proposed codes is only 0.15 dB away from that of traditional systematic LDPC codes.
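A minimal sketch of a cyclic construction from a symmetric design: the (7, 3, 1) planar difference set below generates a circulant incidence matrix in which any two distinct rows overlap in exactly λ = 1 column, so the corresponding Tanner graph has no 4-cycles. The tiny parameters are illustrative only, not the code sizes used in the paper.

```python
import numpy as np

def circulant_from_set(D, v):
    # Incidence matrix of the cyclic design generated by base block D mod v:
    # row i has ones in columns {(d + i) mod v : d in D}.
    H = np.zeros((v, v), dtype=int)
    for i in range(v):
        for d in D:
            H[i, (d + i) % v] = 1
    return H

# (7, 3, 1) planar difference set: every nonzero residue mod 7 occurs
# exactly once as a difference of two elements of D.
D, v, lam = [1, 2, 4], 7, 1
H = circulant_from_set(D, v)

# SBIBD property: any two distinct rows share exactly lambda = 1 column.
overlaps = H @ H.T
assert (overlaps[~np.eye(v, dtype=bool)] == lam).all()
```

The circulant structure is what makes both encoding and decoding hardware-friendly: every row is a cyclic shift of the base block.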
Funding: Leading Academic Discipline Project of Shanghai Municipal Education Commission, China(No.J51801), Shanghai Second Polytechnic University Foundation, China(No.QD209008), and Leading Academic Discipline Project of Shanghai Second Polytechnic University, China(No.XXKZD1302).
Abstract: A low-complexity algorithm is proposed in this paper to optimize irregular low-density parity-check (LDPC) codes. The proposed algorithm calculates the noise threshold by means of one-dimensional density evolution and searches for the optimal degree profiles with fast-convergence differential evolution, so it has lower complexity and a faster convergence speed. Simulation results show that the irregular LDPC codes optimized by the presented algorithm can perform better than Turbo codes at moderate block lengths, even with less computational cost.
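The search side can be sketched with a classic DE/rand/1/bin loop; `threshold_surrogate` below is a toy stand-in for the one-dimensional density-evolution threshold computation, and the population size, F, and CR values are arbitrary choices rather than the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

def threshold_surrogate(x):
    # Placeholder for the one-dimensional density-evolution noise
    # threshold; here just a smooth test function (minimum at x = 0.5).
    return np.sum((x - 0.5) ** 2)

def differential_evolution(f, dim, pop=20, gens=100, F=0.8, CR=0.9):
    # Classic DE/rand/1/bin: mutate with a scaled difference vector,
    # binomial crossover, then greedy selection against the parent.
    P = rng.random((pop, dim))
    cost = np.array([f(x) for x in P])
    for _ in range(gens):
        for i in range(pop):
            idx = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            a, b, c = P[idx]
            mutant = np.clip(a + F * (b - c), 0.0, 1.0)
            trial = np.where(rng.random(dim) < CR, mutant, P[i])
            f_trial = f(trial)
            if f_trial < cost[i]:
                P[i], cost[i] = trial, f_trial
    return P[cost.argmin()], cost.min()

best, val = differential_evolution(threshold_surrogate, dim=4)
assert val < 1e-3  # the population has converged near the optimum
```

In the actual optimization, each candidate vector would encode a degree profile and the objective would be the density-evolution threshold, with the simplex constraints on the profile handled by projection or reparameterization.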
Abstract: In this paper, we summarize five methods for constructing the regular low-density parity-check matrix H and three methods for constructing the irregular low-density parity-check matrix H. Analyzing the code rates and parameters of these eight constructions, we find that the construction of low-density parity-check matrices is becoming more flexible and their parameters more variable. We argue that development costs should fall as electronic technology progresses, and that research on more practical Low-Density Parity-Check (LDPC) codes is needed. Combined with the application of quantum key distribution, the relevant theories and technologies of LDPC codes in other fields of quantum information urgently need to be explored in the future.
Funding: supported by the National Key Research and Development Program of China(2016YFE0200400), the Key R&D Program of Shaanxi Province(2017KW-ZD-12), the Postdoctoral Science Foundation of Shaanxi Province, and the Natural Science Foundation of Shaanxi Province.
Abstract: Space-time coding radar has recently been proposed and investigated. It is a radar framework that can perform transmit beamforming at the receiver. However, the range resolution decreases as the number of transmit elements increases. A subarray-based space-time coding(sub-STC)radar is explored to alleviate this range-resolution loss. In the proposed configuration, an identical waveform is transmitted by all subarrays, with a small time offset introduced between subarrays. The multi-dimensional ambiguity function of the sub-STC radar is defined by considering resolutions in multiple domains, including range, Doppler, angle and probing direction. Properties of the multi-dimensional ambiguity function of the sub-STC radar with regard to spatial coverage, resolution performance and sidelobe levels are also analyzed. Results reveal that both range resolution and sidelobe performance are improved with the proposed approach.
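The transmit scheme can be sketched as follows; the LFM waveform, the subarray count M, and the inter-subarray time offset delta are hypothetical parameters for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical parameters (not from the paper).
fs = 1e6          # sample rate (Hz)
T = 1e-4          # pulse length (s)
B = 2e5           # chirp bandwidth (Hz)
delta = 5e-6      # time offset between consecutive subarrays (s)
M = 4             # number of subarrays

n_samp = int(round(T * fs))          # 100 samples per pulse
t = np.arange(n_samp) / fs
chirp = np.exp(1j * np.pi * (B / T) * t ** 2)   # common LFM waveform

# Each subarray transmits the SAME waveform, delayed by m * delta.
shift = int(round(delta * fs))       # offset in samples
total = n_samp + (M - 1) * shift
tx = np.zeros((M, total), dtype=complex)
for m in range(M):
    tx[m, m * shift : m * shift + n_samp] = chirp

# All subarrays share one waveform; only the launch times differ.
assert np.allclose(tx[0, :n_samp], tx[M - 1, (M - 1) * shift :])
```

Because only the launch times differ, the full waveform bandwidth is preserved on each subarray, which is the intuition behind the improved range resolution relative to element-wise space-time coding.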