Funding: Supported by the Open Research Fund of the National Mobile Communications Research Laboratory of Southeast University (No. W200704).
Abstract: In this paper, a new class of simple-encoding irregular systematic LDPC codes suitable for one-relay coded cooperation is designed. The proposed joint iterative decoding is performed at the destination over a joint Tanner graph that characterizes the two different component LDPC codes used by the source and the relay, in both ideal and non-ideal relay cooperation. Theoretical analysis and simulations show that the coded cooperation scheme clearly outperforms its non-cooperative counterpart under the same code rate and decoding complexity. The significant performance improvement can be attributed to the additional exchange of extrinsic information between the LDPC code employed by the source and its counterpart used by the relay, in both ideal and non-ideal cooperation.
Funding: Supported by the National Natural Science Foundation of China (No. 90304003, No. 60573112, No. 60272056) and the Foundation Project of China (No. A1320061262).
Abstract: A novel Joint Source and Channel Decoding (JSCD) scheme for Variable Length Codes (VLCs) concatenated with turbo codes, based on a new super-trellis decoding algorithm, is presented in this letter. The basic idea of the decoding algorithm is that source a priori information, in the form of bit transition probabilities corresponding to the VLC tree, can be derived directly from the sub-state transitions of the new composite-state super-trellis. A Maximum Likelihood (ML) algorithm for VLC sequence estimation based on the proposed super-trellis is also described. Simulation results show that the new iterative decoding scheme obtains a clear coding gain, especially for Reversible Variable Length Codes (RVLCs), when compared with classical separate turbo decoding and with previous joint decoding that ignores the source statistics.
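To make the "bit transition probabilities derived from the VLC tree" concrete, here is a minimal sketch of computing such a prior from a toy codebook. The codebook, symbol probabilities, and the `bit_transition_prob` helper are illustrative assumptions, not taken from the letter.

```python
# Illustrative VLC codebook and source statistics (assumed, not from the paper).
codebook = {"a": "0", "b": "10", "c": "11"}
p_symbol = {"a": 0.5, "b": 0.3, "c": 0.2}

def bit_transition_prob(prefix: str) -> float:
    """P(next code bit = 1 | bits of the current codeword decoded so far)."""
    mass = {"0": 0.0, "1": 0.0}
    for sym, code in codebook.items():
        # Symbols whose codewords pass through this node of the VLC tree.
        if code.startswith(prefix) and len(code) > len(prefix):
            mass[code[len(prefix)]] += p_symbol[sym]
    return mass["1"] / (mass["0"] + mass["1"])

# At the tree root, P(1) = P(b) + P(c) = 0.5; after a leading '1', the next
# bit distinguishes b from c, so P(1) = P(c) / (P(b) + P(c)) = 0.4.
print(bit_transition_prob(""), bit_transition_prob("1"))
```

A super-trellis decoder would use exactly such node-conditional probabilities as the source prior attached to each sub-state transition.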
Funding: Supported by the Natural Science Foundation of China (No. 61271258).
Abstract: In this work, the homomorphism of classic linear block codes in linear network coding over the binary field and its extensions is studied. It is proved that a classic linear error-control block code is a homomorphic network error-control code under network coding: if the source packets of a linear network code are precoded with a linear block code, then every packet flowing in the network satisfies the same constraints as the source packets. As a consequence, error detection and correction can be performed at every intermediate node of the multicast flow, rather than only at the destination node as in the conventional approach. This helps identify and correct errors promptly at the corrupted link and saves the cost of forwarding corrupted data to the destination when the intermediate nodes are unaware of the errors. In addition, three examples demonstrate that homomorphic linear codes can be combined with homomorphic signatures, the McEliece public-key cryptosystem, and unequal error protection, respectively, and thus have great potential for practical use.
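The homomorphism claim can be checked numerically: precode packets with a small linear block code and verify that random GF(2) combinations, as an intermediate network-coding node would form, still satisfy the same parity checks. The [7,4] Hamming code and packet counts below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

# Systematic generator G = [I | P] and parity-check H = [P^T | I] over GF(2).
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]], dtype=int)
G = np.hstack([np.eye(4, dtype=int), P])    # 4x7 generator
H = np.hstack([P.T, np.eye(3, dtype=int)])  # 3x7 parity-check, H G^T = 0 mod 2

rng = np.random.default_rng(0)
source_packets = (rng.integers(0, 2, size=(5, 4)) @ G) % 2  # precoded packets

# An intermediate node emits random GF(2) combinations of incoming packets;
# by linearity, every combination is still a codeword of the precode.
for _ in range(20):
    coeffs = rng.integers(0, 2, size=5)
    mixed = (coeffs @ source_packets) % 2
    assert not ((H @ mixed) % 2).any()  # same constraints as the source

print("all network-coded packets pass the parity checks")
```

Because the check uses only H, any node that knows the precode can detect (and, within the code's capability, correct) errors locally.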
Funding: Supported by the National Natural Science Foundation of China (No. 12104141).
Abstract: To address the problem that the bit error rate (BER) of an asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication system is significantly affected by different turbulence intensities, a deep learning technique is applied to polar code decoding in the ACO-OFDM space optical communication system. The system realizes polar code decoding and signal demodulation without frequency conversion, with superior performance and robustness compared with the traditional decoder. Simulations under different turbulence intensities and different mapping orders show that a convolutional neural network (CNN) decoder trained over weak, medium, and strong turbulence atmospheric channels achieves a BER improvement of about 10^2 over the conventional decoder at 4-quadrature amplitude modulation (4QAM), while the BERs for 16QAM and 64QAM lie in between those of the conventional decoder.
Abstract: This paper introduces the failure modes that may occur in pressure vessels operating under high-temperature creep conditions. In light of current engineering design practice, it points out the technical bottleneck in China's current pressure-vessel standards system for determining the allowable compressive stress under high-temperature creep conditions. On this basis, ASME Code Case 3029 is introduced, with a brief description of its scope of application, development history, background, and engineering significance. Taking an actual structure from an engineering design project as an example, the paper describes the procedure for applying the method and the points requiring attention, and, based on the practical needs of pressure-vessel engineering design, offers an outlook on the future formulation or revision of China's standards system.
Funding: Jointly supported by the National Natural Science Foundation of China under Grants 61201198 and 61372089, and the Beijing Natural Science Foundation under Grants 4132015, 4132007, and 4132019.
Abstract: In this paper, a two-way relay system that achieves bi-directional communication via a multiple-antenna relay in two time slots is studied. In the multiple access (MA) phase, novel receive schemes based on Dempster-Shafer (D-S) evidence theory are proposed at the relay node. Instead of traditional linear detection, the first proposed scheme, MIMO-DS NC, adopts D-S evidence theory to detect the signal of each source node before mapping the signals into a network-coded signal. Moreover, unlike traditional physical-layer network coding (PNC) based on a virtual MIMO model, the further proposed MIMO-DS PNC works from a vector-space perspective and combines PNC mapping with D-S theory to obtain the network-coded signal without estimating each source node's signal. D-S theory can appropriately characterize uncertainty and fully exploit multiple evidence sources through Dempster's combination rule to reach reliable decisions. In the broadcast (BC) phase, space-time coding (STC) and antenna selection (AS) schemes are adopted to achieve transmit diversity. Simulation results reveal that the STC and AS schemes both achieve full transmit diversity in the BC phase, and that the proposed MIMO-DS NC/PNC schemes obtain better end-to-end BER performance and throughput than traditional schemes with only a slight increase in complexity. No matter which scheme is adopted in the BC phase, MIMO-DS PNC always achieves the same full end-to-end diversity gain as MIMO-ML NC but with lower complexity, and its throughput approaches that of MIMO-ML NC in the high-SNR regime.
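Dempster's combination rule, on which the proposed detection relies, can be sketched for a binary frame of discernment as follows. The mass values and the `combine` helper are illustrative assumptions, not taken from the paper.

```python
def combine(m1, m2):
    """Dempster's rule for two mass functions over frozenset-keyed focal sets."""
    raw, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to the empty set
    k = 1.0 - conflict               # normalize by the non-conflicting mass
    return {s: w / k for s, w in raw.items()}

# Two "evidence sources" (e.g. two receive antennas) about a transmitted bit;
# the set {0, 1} carries the mass that a source cannot commit to either value.
F0, F1, F01 = frozenset({0}), frozenset({1}), frozenset({0, 1})
m1 = {F0: 0.6, F1: 0.1, F01: 0.3}
m2 = {F0: 0.7, F1: 0.2, F01: 0.1}
m = combine(m1, m2)
# The combined masses sum to 1, and the shared support for bit 0 is reinforced.
print(m)
```

The normalization by 1 - conflict is what lets agreeing evidence sharpen the decision while discarding contradictory mass.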
Funding: Supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049), the Joint Fund of the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001), and the Key Research and Development Program of Shandong Province, China (Grant No. 2023CXGC010901).
Abstract: Quantum error correction is a technique that enhances a system's ability to combat noise by encoding logical information into additional quantum bits, and it plays a key role in building practical quantum computers. The XZZX surface code, with only one stabilizer generator on each face, demonstrates significant application potential under biased noise. However, the existing minimum-weight perfect matching (MWPM) algorithm has high computational complexity and lacks flexibility in large-scale systems. Therefore, this paper proposes a decoding method that combines graph neural networks (GNNs) with multi-classifiers: the syndrome is transformed into an undirected graph and its features are aggregated by convolutional layers, providing a more efficient and accurate decoding strategy. In the experiments, the performance of the XZZX code is evaluated under different biased-noise conditions (bias = 1, 20, 200) and different code distances (d = 3, 5, 7, 9, 11). The results show that under low-bias noise (bias = 1) the GNN decoder achieves a threshold of 0.18386, an improvement of approximately 19.12% over the MWPM decoder, and under high-bias noise (bias = 200) it reaches a threshold of 0.40542, improving by approximately 20.76% and overcoming the limitations of the conventional decoder. These results demonstrate that the GNN decoding method exhibits superior performance and has broad application potential for the error correction of XZZX codes.
Funding: Supported by the National Natural Science Foundation of China (NSFC) under Project 62071498, and the Guangdong National Science Foundation (GDNSF) under Project 2024A1515010213.
Abstract: Built on BCH component codes and their ordered statistics decoding (OSD), the successive cancellation list (SCL) decoding of U-UV structural codes provides competent error-correction performance in the short-to-medium length regime. However, the list decoding complexity becomes formidable as the decoding output list size increases, and this is primarily incurred by the OSD. Addressing this challenge, this paper proposes low-complexity SCL decoding that reduces the complexity of component-code decoding and prunes redundant SCL decoding paths. For the former, an efficient skipping rule is introduced for the OSD so that higher-order decoding can be skipped when it cannot provide a more likely codeword candidate; the rule is further extended to an OSD variant, the box-and-match algorithm (BMA), to facilitate component-code decoding. Moreover, by estimating the correlation distance lower bounds (CDLBs) of the component-code decoding outputs, a path-pruning (PP)-SCL decoding is proposed to further facilitate the decoding of U-UV codes; in particular, its integration with the improved OSD and BMA is discussed. Simulation results show that significant complexity reduction can be achieved. Consequently, U-UV codes can outperform cyclic redundancy check (CRC)-polar codes with a similar decoding complexity.
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFA1005000) and the National Natural Science Foundation of China (Grant Nos. 62101308 and 62025110).
Abstract: Space laser communication (SLC) is an emerging technology to support high-throughput data transmission in space networks. In this paper, to guarantee the reliability of high-speed SLC links, we aim at a practical implementation of low-density parity-check (LDPC) decoding on resource-restricted space platforms. In particular, due to the supply restrictions and cost of high-speed on-board devices such as analog-to-digital converters (ADCs), the input of LDPC decoding is usually constrained to hard-decision channel output. To tackle this challenge, a density-evolution-based theoretical analysis is first performed to identify the cause of performance degradation in the conventional binary-initialized iterative decoding (BIID) algorithm. Then, a computation-efficient decoding algorithm named multiary-initialized iterative decoding with early termination (MIID-ET) is proposed, which improves error-correcting performance and computation efficiency through a reliability-based initialization method and a threshold-based decoding termination rule. Finally, numerical simulations are conducted on example codes of rates 7/8 and 1/2 to evaluate different LDPC decoding algorithms; the proposed MIID-ET outperforms BIID with a coding gain of 0.38 dB and a 37% saving in variable-node calculations. With this advantage, the proposed MIID-ET can notably reduce the LDPC decoder's hardware implementation complexity at the same bit error rate, successfully doubling the total throughput to 10 Gbps on a single-chip FPGA.
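For context, the style of hard-input iterative decoding discussed above can be sketched with a minimal hard-decision (Gallager bit-flipping) decoder. The tiny parity-check matrix is illustrative only; it is neither a code from the paper nor the proposed MIID-ET algorithm.

```python
import numpy as np

# Toy parity-check matrix over GF(2) (assumed for illustration).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=int)

def bit_flip_decode(y, H, max_iter=10):
    """Hard-decision decoding: flip the bits in the most failed checks."""
    x = y.copy()
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x                      # all parity checks satisfied
        votes = H.T @ syndrome            # failed checks each bit touches
        x[votes == votes.max()] ^= 1      # flip the most-suspect bit(s)
    return x

codeword = np.zeros(6, dtype=int)         # the all-zero codeword is valid
received = codeword.copy()
received[2] ^= 1                          # one hard-decision bit error
decoded = bit_flip_decode(received, H)
print(decoded)  # the single error is corrected back to the all-zero codeword
```

Unlike soft belief propagation, the decoder above sees only hard bits; the MIID-ET idea in the abstract is precisely about recovering reliability information in such a hard-input setting.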
Abstract: This paper demonstrates how channel coding can improve the robustness of spatial image watermarks against signal distortion caused by lossy data compression, such as the JPEG scheme, by taking advantage of the properties of Gray code. Two error-correction coding (ECC) schemes are used: one, referred to as vertical ECC (VECC), encodes information bits within a pixel by error-correction coding, where the Gray code is used to improve performance; the other, referred to as horizontal ECC (HECC), encodes information bits across the image plane. In watermarking, HECC generates a codeword representing the watermark bits, and each bit of that codeword is encoded by VECC. Simple single-error-correcting block codes are used in both VECC and HECC. Several experiments with these schemes were conducted on test images. The results demonstrate that the error-correcting performance of HECC depends directly on that of VECC, so HECC enhances the capability of VECC. Consequently, HECC with appropriate codes can achieve stronger robustness to JPEG-caused distortion than non-channel-coding watermarking schemes.
Abstract: In this paper, error-correction coding (ECC) in Gray codes is considered, and its performance in protecting spatial image watermarks against lossy data compression is demonstrated. For this purpose, the differences between the bit patterns of two Gray codewords are analyzed in detail. On the basis of these properties, a method for encoding watermark bits in the Gray codewords that represent signal levels by a single-error-correcting (SEC) code is developed, referred to here as the Gray-ECC method. The two codewords of the SEC code corresponding to the respective watermark bits are chosen so as to minimize the expected distortion caused by the watermark embedding. Stochastic analyses show that the error-correcting capacity of the Gray-ECC method is superior to that of ECC in natural binary codes for changes in signal codewords. Experiments with the Gray-ECC method were conducted on 8-bit monochrome images to evaluate both the features of the watermarked images and their robustness to image distortion from the JPEG DCT-baseline coding scheme. The results demonstrate that, compared with a conventional averaging-based method, the Gray-ECC method yields watermarked images with less signal distortion and also makes the watermark comparably robust to lossy data compression.
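The Gray-code property these two papers build on, that adjacent signal levels differ in exactly one codeword bit, can be verified directly. The conversion helpers below are the standard binary-reflected Gray code; the sketch is illustrative and is not the papers' embedding procedure.

```python
def to_gray(n: int) -> int:
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def from_gray(g: int) -> int:
    """Inverse conversion: XOR-accumulate the shifted Gray bits."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Adjacent 8-bit signal levels always have Hamming distance 1 in Gray code,
# so a +/-1 level change from lossy compression flips at most one codeword
# bit, which a single-error-correcting code can repair.
for level in range(255):
    diff = to_gray(level) ^ to_gray(level + 1)
    assert bin(diff).count("1") == 1

assert all(from_gray(to_gray(v)) == v for v in range(256))  # round trip
print("adjacent levels differ in exactly one Gray bit")
```

In natural binary, by contrast, the step from 127 to 128 flips all eight bits, which is why the ECC there degrades for level changes near such boundaries.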
Funding: Supported by the National Aeronautical Foundation of Science and Research of China (No. 04F52041) and the Natural Science Foundation of Jiangsu Province (No. BK2006188).
Abstract: Low-Density Parity-Check (LDPC) codes are one of the most exciting topics in the coding theory community and are of great importance in both theory and practical communication over noisy channels. The main advantage of LDPC codes is their relatively lower decoding complexity compared with turbo codes, while their disadvantage is higher encoding complexity. In this paper, a new approach is first proposed to construct high-performance irregular systematic LDPC codes based on a sparse generator matrix, which can significantly reduce the encoding complexity under the same decoding complexity as regular or irregular LDPC codes defined by a traditional sparse parity-check matrix. Then, the proposed generator-based systematic irregular LDPC codes are adopted as constituent block codes in rows and columns to design a new family of product codes, which can also be interpreted as irregular LDPC codes characterized by a graph and thus decoded iteratively. Finally, the performance of the generator-based LDPC codes and the resultant product codes is investigated over an Additive White Gaussian Noise (AWGN) channel and compared with conventional LDPC codes under the same conditions of decoding complexity and channel noise.
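Systematic encoding from a generator matrix G = [I | P] over GF(2), whose cost the sparse-generator construction targets, can be sketched as follows: with a sparse P, each parity bit is an XOR of only a few message bits. The small P and code sizes are illustrative assumptions, not matrices from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
k, m = 8, 4                                  # message bits, parity bits
P = (rng.random((k, m)) < 0.25).astype(int)  # sparse parity part (assumed)

def encode_systematic(u, P):
    """Systematic encoding: codeword = [message | message @ P mod 2]."""
    parity = (u @ P) % 2                     # cost scales with the weight of P
    return np.concatenate([u, parity])

u = rng.integers(0, 2, size=k)
c = encode_systematic(u, P)

# The matching parity-check matrix H = [P^T | I] verifies the codeword,
# since H c^T = P^T u + (u P)^T = 0 over GF(2).
H = np.hstack([P.T, np.eye(m, dtype=int)])
assert not ((H @ c) % 2).any()
print("systematic codeword passes all parity checks")
```

The same H can then drive iterative graph-based decoding, which is how a generator-based construction keeps decoding complexity on par with parity-check-defined LDPC codes.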
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62371240, 61802175, 62401266, and 12201300), the National Key R&D Program of China (Grant No. 2022YFB3103800), the Natural Science Foundation of Jiangsu Province (Grant No. BK20241452), the Fundamental Research Funds for the Central Universities (Grant No. 30923011014), and the fund of the Laboratory for Advanced Computing and Intelligence Engineering (Grant No. 2023-LYJJ-01-009).
Abstract: To improve the decoding performance of quantum error-correcting codes in asymmetric noise channels, a neural network-based decoding algorithm for bias-tailored quantum codes is proposed. The algorithm consists of a biased noise model, a neural belief propagation decoder, a convolutional optimization layer, and a multi-objective loss function. The biased noise model simulates asymmetric error generation, providing a training dataset for the decoder. The neural network, leveraging dynamic weight learning and the multi-objective loss function, mitigates error degeneracy, while the convolutional optimization layer improves early-stage convergence efficiency. Numerical results show that for bias-tailored quantum codes the proposed decoder performs much better than belief propagation with ordered statistics decoding (BP+OSD), achieving an order-of-magnitude improvement in error suppression over higher-order BP+OSD. Furthermore, for surface codes the decoding threshold of the proposed decoder reaches a high value of 20%.