Abstract: This paper proposes a modification of the soft-output Viterbi algorithm (SOVA) that combines a convolutional code with Huffman coding. The idea is to extract bit-probability information from the Huffman coding and use it to compute a priori source information, which can be exploited when the channel environment is bad. The suggested scheme requires no changes on the transmitter side. Compared with separate decoding systems, the gain in signal-to-noise ratio is about 0.5-1.0 dB with a limi...
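A minimal sketch of the kind of a priori information such a decoder could draw from the Huffman code: position-wise bit probabilities computed from an assumed codebook and source statistics (both invented for illustration, not taken from the paper), converted to log-likelihood ratios that a SOVA-style decoder could fold into its branch metrics.

```python
import math

# Hypothetical Huffman code for a 4-symbol source (codes and probabilities
# are illustrative assumptions, not the paper's source model).
codebook = {"a": "0", "b": "10", "c": "110", "d": "111"}
p_symbol = {"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}

def bit_prior_llr(codebook, p_symbol, max_len=3):
    """Estimate P(bit = 0) at each codeword position and return a priori
    log-likelihood ratios L = ln(P0 / P1), one per bit position."""
    llrs = []
    for pos in range(max_len):
        p0 = p1 = 0.0
        for sym, code in codebook.items():
            if pos < len(code):
                if code[pos] == "0":
                    p0 += p_symbol[sym]
                else:
                    p1 += p_symbol[sym]
        llrs.append(math.log(p0 / p1) if p0 > 0 and p1 > 0 else 0.0)
    return llrs

print(bit_prior_llr(codebook, p_symbol))
```

For a well-balanced Huffman code the first positions carry little prior information (LLR near 0), while deeper positions can be biased; a real joint decoder would track the Huffman tree state rather than marginal positions.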
Abstract: To improve the performance of the short-interleaver serial concatenated convolutional code (SCCC) at a low number of decoding iterations, the structure of the Log-MAP algorithm is introduced into the conventional SOVA decoder to improve its performance at short interleaving delay. The combination of Log-MAP and SOVA avoids updating the matrices of the maximum path and also helps meet the short-delay requirement. Simulation results for several SCCCs show that the improved decoder obtains satisfactory performance with a short frame interleaver and is suitable for high-bit-rate, low-delay communication systems.
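The Log-MAP structure referenced above rests on the max-star (Jacobian logarithm) operation; the sketch below shows it next to the max-only approximation that SOVA-style decoders effectively use. This is the generic primitive, not a reconstruction of the paper's specific combined decoder.

```python
import math

def max_star(a, b):
    """Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^{-|a-b|}).
    Log-MAP uses this exact form; max-log-MAP (and SOVA-like decoders)
    drop the correction term and keep only max(a, b)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# The correction term is what separates Log-MAP from the max-only
# approximation: it matters for close metrics and vanishes for distant ones.
print(max_star(1.0, 1.2))   # noticeably above max(1.0, 1.2)
print(max_star(1.0, 9.0))   # essentially max(1.0, 9.0)
```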
Funding: supported by the National Natural Science Foundation of China (11271050, 11371183, 61403036) and the Science and Technology Development Foundation of CAEP (2013A0403020, 2013B0403068)
Funding: supported by the National Natural Science Foundation of China (61072120)
Abstract: The Walsh-Hadamard transform (WHT) can solve linear error equations over the field F2, and the method can be used to recover the parameters of a convolutional code. However, solving equations with many unknowns needs enormous computer memory, which limits the application of the WHT. To solve this problem, a method based on a segmented WHT is proposed in this paper. The high-dimensional coefficient vector is reshaped into two vectors of lower dimension, the WHT is then applied, and the computer-memory requirement is much reduced. The code rate and the constraint length of the convolutional code are detected from the Walsh spectrum, and the check vector is recovered from the peak position. The validity of the method is verified by simulation, and its performance is proved to be optimal.
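To make the spectrum-peak idea concrete, here is a hedged toy reconstruction (window length, error rate, and the check vector are all invented for illustration, and no segmentation is performed): windows v of the code stream satisfy v·h = 0 over F2 for the true check vector h, so a Walsh-Hadamard transform of the histogram of observed windows peaks at index h.

```python
import numpy as np

rng = np.random.default_rng(0)

def fwht(a):
    """Fast Walsh-Hadamard transform of a length-2^k vector."""
    a = a.astype(np.int64).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a

# Toy setup: each observed window v satisfies v . h = 0 (mod 2) for the
# true check vector h, except where channel errors flip the parity.
n = 8                      # window length -> spectrum of size 2^n = 256
h_true = 0b10110010        # hypothetical check vector, as a bit mask
samples = []
for _ in range(400):
    v = rng.integers(0, 2, n)
    # force v . h_true = 0 by flipping one in-support position if needed
    if sum(v[i] for i in range(n) if (h_true >> (n - 1 - i)) & 1) % 2 == 1:
        i = next(i for i in range(n) if (h_true >> (n - 1 - i)) & 1)
        v[i] ^= 1
    if rng.random() < 0.05:          # 5% of windows hit by channel errors
        v[rng.integers(0, n)] ^= 1
    samples.append(int("".join(map(str, v)), 2))

# One WHT of the window histogram gives, at index c, the count of windows
# with v.c = 0 minus the count with v.c = 1; the true h stands out.
hist = np.bincount(samples, minlength=1 << n)
spectrum = fwht(hist)
peak = int(np.argmax(np.abs(spectrum[1:]))) + 1   # skip the trivial c = 0
print(f"recovered check vector: {peak:0{n}b}")
```

The paper's contribution is to split such a spectrum computation into two lower-dimensional transforms so the histogram never has to be held in full.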
Funding: supported by the National Science Foundation (NSF) under Grants No. 60832001 and No. 61271174, and by the National State Key Laboratory of Integrated Service Network (ISN) under Grant No. ISN01080202
Abstract: To address the issue of field size in random network coding, we propose an Improved Adaptive Random Convolutional Network Coding (IARCNC) algorithm that considerably reduces the amount of occupied memory. The operation of IARCNC is similar to that of Adaptive Random Convolutional Network Coding (ARCNC), with the coefficients of the local encoding kernels chosen uniformly at random over a small finite field. The difference is that the length of the local encoding kernels at the nodes used by IARCNC is constrained by the node depth and increases only until all the related sink nodes can decode. This restriction makes the code-length distribution more reasonable. IARCNC therefore retains the advantages of ARCNC, such as a small decoding delay and partial adaptation to an unknown topology without an early estimation of the field size, while adding an advantage of its own: a greater reduction in memory use. A simulation and an example show the effectiveness of the proposed algorithm.
Funding: supported by the National Natural Science Foundation of China (No. 61401164, No. 61201145, No. 61471175), the Natural Science Foundation of Guangdong Province of China (No. 2014A030310308), and the Supporting Plan for New Century Excellent Talents of the Ministry of Education (No. NCET-13-0805)
Abstract: In this paper, we propose a new method to derive a family of regular rate-compatible low-density parity-check (RC-LDPC) convolutional codes from RC-LDPC block codes. In the RC-LDPC convolutional family, each extended sub-matrix of each extended code is obtained by choosing specified elements from two fixed matrices HE1K and HE2K, which are derived by modifying the extended matrices HE1 and HE2 of a systematic RC-LDPC block code. The proposed method, which is based on graph extension, simplifies the design and avoids the defects caused by the puncturing method. It can be used to generate both regular and irregular RC-LDPC convolutional codes. All resulting codes in the family are systematic, which simplifies the encoder structure, and have maximum encoding memory. Simulation results show that the family collectively offers a steady improvement in performance with code compatibility over the binary-input additive white Gaussian noise channel (BI-AWGNC).
Funding: supported by the National Natural Science Foundation of China (Nos. 61401164, 61471131 and 61201145) and the Natural Science Foundation of Guangdong Province (No. 2014A030310308)
Abstract: In this paper, a family of rate-compatible (RC) low-density parity-check (LDPC) convolutional codes is obtained from RC-LDPC block codes by a graph-extension method. The resulting RC-LDPC convolutional codes, which are derived by permuting the matrices of the corresponding RC-LDPC block codes, are systematic and have maximum encoding memory. Simulation results show that the proposed RC-LDPC convolutional codes with belief-propagation (BP) decoding collectively offer a steady improvement in performance over their block counterparts on binary-input additive white Gaussian noise channels (BI-AWGNCs).
Funding: supported by the National Key Research and Development Program of China (No. 2018YFB2003300), the National Science and Technology Major Project, China (No. 2017-IV-0008-0045), and the National Natural Science Foundation of China (No. 51675262)
Abstract: Because of strong background noise and acquisition-system noise, the useful characteristics of a signal are often difficult to detect. To solve this problem, sparse coding captures a concise representation of the high-level features in the signal by using its underlying structure. Recently, an Online Convolutional Sparse Coding (OCSC) denoising algorithm was proposed. However, it does not consider the structural characteristics of the signal, and the sparsity achieved at each iteration is insufficient. Therefore, a threshold-shrinkage algorithm that considers neighborhood sparsity is proposed, together with a loose-to-tight training strategy that further improves the denoising performance; the resulting method is called Variable Threshold Neighborhood Online Convolution Sparse Coding (VTNOCSC). By embedding the structural sparse-threshold shrinkage operator into the computation of the sparse coefficients and gradually approaching the optimal noise-separation point during training, the signal-denoising performance of the algorithm is greatly improved. When VTNOCSC is used to process an actual bearing-fault signal, the noise interference is successfully reduced and the features of interest become more evident. Compared with other existing methods, VTNOCSC has better denoising performance.
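The neighborhood-sparsity idea can be illustrated with a toy thresholding operator (the weighting rule below is an assumption for illustration, not the paper's operator): coefficients sitting in an energetic neighborhood receive a smaller effective threshold than isolated spikes, so clustered structure survives shrinkage.

```python
import numpy as np

def neighborhood_soft_threshold(x, lam, radius=2):
    """Soft-threshold each coefficient with a threshold scaled down where
    the local neighborhood is energetic, so structured (clustered)
    coefficients survive while isolated noise spikes are shrunk away."""
    # local neighborhood energy via a moving average of |x|
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    local = np.convolve(np.abs(x), kernel, mode="same")
    # higher local energy -> smaller effective threshold
    eff_lam = lam / (1.0 + local / (np.mean(np.abs(x)) + 1e-12))
    return np.sign(x) * np.maximum(np.abs(x) - eff_lam, 0.0)

# isolated spike vs. a clustered burst of the same amplitude
x = np.zeros(50)
x[10] = 1.0                      # lone spike (likely noise)
x[30:35] = 1.0                   # structured burst (likely signal)
y = neighborhood_soft_threshold(x, lam=0.8)
print(y[10], y[32])              # the burst is shrunk less than the spike
```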
Funding: supported by the National Natural Science Foundation of China (Grant Nos. 11975132 and 61772295), the Natural Science Foundation of Shandong Province, China (Grant No. ZR2019YQ01), and the Project of Shandong Province Higher Educational Science and Technology Program, China (Grant No. J18KZ012)
Abstract: Quantum error-correction technology is an important solution to the noise interference generated during the operation of quantum computers. To make the best use of the syndromes of stabilizer codes in quantum error correction, a fast decoder operating close to the optimal threshold is needed. In this work, we build a convolutional neural network (CNN) decoder to correct errors in the toric code, based on a systematic study of machine learning. We analyze and optimize the various conditions that affect the CNN and use a ResNet architecture to reduce the running time by 30%-40%, finally arriving at an optimized algorithm for the CNN decoder. In this way, the threshold accuracy of the neural-network decoder reaches 10.8%, closer to the optimal threshold of about 11%. This slightly improves on previous thresholds of 8.9%-10.3%, and no verification of the underlying noise model is needed.
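For context, a sketch of the syndrome data such a CNN decoder consumes: Z errors on the edges of a toric-code lattice excite the X (vertex) stabilizers at their endpoints, and these defect maps are the images the network classifies. The edge/vertex indexing conventions here are illustrative assumptions.

```python
import numpy as np

L = 4

def star_syndrome(eh, ev):
    """Vertex (X-stabilizer) syndrome for Z errors on an L x L toric code.
    eh[i, j] = 1 if the horizontal edge leaving vertex (i, j) carries a
    Z error; ev[i, j] likewise for the vertical edge. Each vertex measures
    the parity of its four incident edges (periodic boundaries)."""
    return (eh + np.roll(eh, 1, axis=1) + ev + np.roll(ev, 1, axis=0)) % 2

# a single Z error excites exactly the two stabilizers at its endpoints
eh = np.zeros((L, L), dtype=int)
ev = np.zeros((L, L), dtype=int)
eh[1, 2] = 1
s = star_syndrome(eh, ev)
print(s.sum())                          # -> 2 defects
print(sorted(zip(*np.nonzero(s))))
```

A decoder's job is the inverse map: from the defect pattern `s` back to a correction whose homology class matches the original error.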
Funding: supported by the National Natural Science Foundation of China (No. 61401164)
Abstract: An algebraic construction methodology is proposed to design binary time-invariant convolutional low-density parity-check (LDPC) codes. Assisted by a proposed partial search algorithm, the polynomial-form parity-check matrix of the time-invariant convolutional LDPC code is derived by combining some special codewords of an (n, 2, n−1) code. The resulting convolutional LDPC codes possess comparatively large girth and a given syndrome-former memory. The objective of our design is to give time-invariant convolutional LDPC codes the advantages of excellent error performance and fast encoding. In particular, the error performance of the proposed convolutional LDPC code with small constraint length is superior to that of most existing convolutional LDPC codes.
Funding: supported by the National Natural Science Foundation of China (No. 61271174) and the Young Teachers' Innovation Foundation of Xidian University (K5051303137)
Abstract: To characterize the algebraic structure of wireless network coding, a hypergraph is used to model wireless packet networks at the network layer. The algebraic description of random convolutional network coding is deduced, and the coding condition is also presented. Analyses and simulations show that random convolutional coding is capacity-achieving with probability approaching 1.
Abstract: Self-encoded spread spectrum eliminates the need for traditional pseudo-noise (PN) code generators. In a self-encoded multiple access (SEMA) system, the number of users is not limited by the number of available sequences, unlike code-division multiple access (CDMA) systems that employ PN codes such as m-, Gold, or Kasami sequences. SEMA provides a convenient way of supporting multi-rate, multi-level grades of service in multimedia communications and prioritized heterogeneous networking systems. In this paper, we propose multiuser convolutional channel coding in SEMA that yields lower cross-correlations among users and thereby reduces multiple access interference (MAI). We analyze SEMA multiuser convolutional coding in additive white Gaussian noise (AWGN) channels as well as fading channels. Our analysis covers synchronous downlink systems as well as asynchronous systems such as uplink mobile-to-base-station communication.
Funding: Manuscript received February 13, 2016; accepted December 7, 2016. This work was supported by the National Natural Science Foundation of China (61362001, 61661031), Jiangxi Province Innovation Projects for Postgraduate Funds (YC2016-S006), the International Postdoctoral Exchange Fellowship Program, and the Jiangxi Advanced Project for Post-Doctoral Research Fund (2014KY02).
Abstract: A new convolutionally coded direct-sequence (DS) CDMA system is proposed. The outputs of a convolutional encoder modulate multiple band-limited DS-CDMA waveforms. The receiver detects and combines the signals for the desired user and feeds a soft-decision Viterbi decoder. The performance of this system is compared with that of a convolutionally coded single-carrier DS-CDMA system with a Rake receiver. At roughly equivalent receiver complexity, the results demonstrate superior performance for the coded multicarrier system.
Funding: supported by the National Natural Science Foundation of China under Grant No. 69896246
Abstract: A new method to recover packet losses using (2,1,m) convolutional codes is proposed. The erasure-correcting decoding algorithm and the decoding determinant theorem are presented. It is also proved that codes with an optimal distance profile also have an optimal delay characteristic. Simulation results show that the proposed method recovers packet losses more efficiently than RS codes under different decoding-delay conditions and thus suits a range of packet-network delay conditions.
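A hedged toy version of the packet-recovery idea, using a systematic (2,1,2) code invented for illustration (parity p[t] = u[t] XOR u[t-2]) rather than the optimal-distance-profile codes studied in the paper: erased information symbols are refilled sequentially from the parity stream.

```python
def encode(u):
    """Systematic (2,1,2) toy encoder: return (information stream, parity
    stream) with p[t] = u[t] ^ u[t-2]."""
    p = [u[t] ^ (u[t - 2] if t >= 2 else 0) for t in range(len(u))]
    return u[:], p

def recover(u_rx, p_rx):
    """Fill erased information symbols (None) sequentially from the parity
    stream: u[t] = p[t] ^ u[t-2] whenever p[t] and u[t-2] are available."""
    u = u_rx[:]
    for t in range(len(u)):
        if u[t] is None and p_rx[t] is not None:
            prev = u[t - 2] if t >= 2 else 0
            if prev is not None:
                u[t] = p_rx[t] ^ prev
    return u

u = [1, 0, 1, 1, 0, 1, 0, 0]
u_tx, p_tx = encode(u)
u_rx = u_tx[:]
u_rx[3] = None                      # one information packet lost
u_rx[5] = None                      # another loss
print(recover(u_rx, p_tx) == u)     # -> True
```

Note how the recovery of u[3] enables the later recovery of u[5]; the memory of the convolutional code is what spreads protection across delay, which is the property the paper's distance-profile result formalizes.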
Abstract: The paper introduces the state-reduction algorithm and the accelerated state-reduction algorithm, which are used to compute the distance weight enumerator (transfer function) T[x,y] of convolutional codes. Computer simulation is then used to compare upper bounds on the bit error probability over an additive white Gaussian noise (AWGN) channel for previously found maximum free distance (MFD) codes and optimum distance spectrum (ODS) codes with rate 1/4 and overall constraint lengths 5 and 7, respectively. Finally, a method for searching for good convolutional codes is given.
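The full transfer function is beyond a short sketch, but the leading quantity it encodes, the free distance, can be found by a shortest-path search over the encoder state diagram with output Hamming weight as path length. The code below does this for the classic rate-1/2 [7,5] code, chosen purely for illustration (the paper's codes are rate 1/4).

```python
import heapq

G = (0b111, 0b101)   # g1 = 1+D+D^2, g2 = 1+D^2  (the classic [7,5] code)
M = 2                # encoder memory

def step(state, bit):
    """One encoder transition: return (next_state, output Hamming weight)."""
    reg = (bit << M) | state               # input bit followed by M state bits
    out_w = sum(bin(reg & g).count("1") % 2 for g in G)
    return reg >> 1, out_w

def free_distance():
    """Dijkstra over detours: leave state 0 with an input 1, and the weight
    of the lightest path back to state 0 is the free distance."""
    start, w0 = step(0, 1)
    dist = {start: w0}
    heap = [(w0, start)]
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w
        if w > dist.get(s, float("inf")):
            continue
        for bit in (0, 1):
            ns, dw = step(s, bit)
            if w + dw < dist.get(ns, float("inf")):
                dist[ns] = w + dw
                heapq.heappush(heap, (w + dw, ns))
    return None

print(free_distance())   # -> 5 for the [7,5] code
```

State reduction goes further than this scalar: by eliminating states of the signal-flow graph symbolically it keeps the whole weight spectrum, which is what the bit-error-probability bounds in the paper require.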
Abstract: The enhanced variable rate codec (EVRC) is a standard for the 'Speech Service Option 3 for Wideband Spread Spectrum Digital Systems,' which has been employed in both IS-95 cellular systems and ANSI J-STD-008 PCS (personal communications systems). This paper concentrates on channel decoders that exploit the residual redundancy inherent in the enhanced variable rate codec bitstream. This residual redundancy is quantified by modeling the parameters as first-order Markov chains and computing the entropy rate based on the relative frequencies of transitions. Moreover, this residual redundancy can be exploited by an appropriately 'tuned' channel decoder to provide substantial coding gain when compared with decoders that do not exploit it. The channel coding schemes considered include convolutional codes and iteratively decoded parallel concatenated convolutional 'turbo' codes.
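The entropy-rate estimate described above can be sketched directly (the parameter sequence below is synthetic, not EVRC data): count transition frequencies, form the conditional probabilities, and sum -P(a,b) log2 P(b|a).

```python
import math
from collections import Counter

def markov_entropy_rate(seq):
    """First-order Markov entropy rate (bits/symbol) estimated from the
    relative frequencies of transitions in an observed parameter sequence.
    The gap between this rate and the raw coded bit budget is the residual
    redundancy a tuned channel decoder can exploit."""
    pairs = Counter(zip(seq, seq[1:]))
    firsts = Counter(seq[:-1])
    n = len(seq) - 1
    h = 0.0
    for (a, b), c in pairs.items():
        p_pair = c / n                 # estimate of P(a, b)
        p_cond = c / firsts[a]         # estimate of P(b | a)
        h -= p_pair * math.log2(p_cond)
    return h

# A highly persistent two-state parameter sequence: far below 1 bit/symbol,
# so a decoder that knows the transition statistics has real prior knowledge.
seq = [0] * 40 + [1] * 40 + [0] * 40 + [1] * 40
print(markov_entropy_rate(seq))
```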
Funding: supported by the Foundation of the Ministry of Education of China (211CERS10)
Abstract: Most multimedia schemes employ variable-length codes (VLCs), such as Huffman codes, as core components to obtain high compression rates. However, VLC methods are very sensitive to channel noise. The goal of this paper is to salvage as much data from damaged packets as possible for higher audiovisual quality. This paper proposes an integrated joint source-channel decoder (I-JSCD) that operates at the symbol level using a three-dimensional (3-D) trellis representation for first-order Markov sources encoded with a VLC source code and a convolutional channel code. The method combines the source-code and channel-code state spaces and bit lengths to construct a two-dimensional (2-D) state space, and then develops a 3-D trellis and a maximum a posteriori (MAP) algorithm to estimate the source sequence symbol by symbol. Experimental results demonstrate that the method yields a significant improvement in decoding performance: it can salvage at least half (50%) of the data at any channel error rate and can provide additional error resilience to VLC streams such as image, audio, and video streams over high-error-rate links.
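The 2-D state space can be sketched by taking the product of the VLC decoder's tree states (internal nodes of the Huffman tree, i.e., proper prefixes of codewords) with the convolutional encoder's states. The codebook and encoder memory below are toy assumptions, not the paper's configuration.

```python
from itertools import product

# Hypothetical 3-symbol Huffman codebook, for illustration only.
codebook = {"a": "0", "b": "10", "c": "11"}

def vlc_states(codebook):
    """VLC decoder states = internal tree nodes, i.e. the root plus every
    proper prefix of a codeword."""
    states = {""}
    for code in codebook.values():
        for i in range(1, len(code)):
            states.add(code[:i])
    return sorted(states)

conv_states = [f"{s:02b}" for s in range(4)]   # memory-2 encoder: 4 states
joint = list(product(vlc_states(codebook), conv_states))
print(len(joint))   # 2 VLC states x 4 encoder states = 8 joint states
```

A MAP decoder over this joint space (extended by a bit-length counter into the paper's 3-D trellis) can then weigh channel evidence and source statistics simultaneously when estimating each symbol.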