Using a quantum computer to simulate fermionic systems requires fermion-to-qubit transformations. Usually, a lower Pauli weight of transformations means shallower quantum circuits. Therefore, most existing transformations aim for lower Pauli weight. However, in some cases, the circuit depth depends not only on the Pauli weight but also on the coefficients of the Hamiltonian terms. In order to characterize the circuit depth of these algorithms, we propose a new metric called weighted Pauli weight, which depends on both the Pauli weight and the coefficients of the Hamiltonian terms. To achieve smaller weighted Pauli weight, we introduce a novel transformation, the Huffman-code-based ternary tree (HTT) transformation, which is built upon the classical Huffman code and tailored to different Hamiltonians. We tested various molecular Hamiltonians and the results show that the weighted Pauli weight of the HTT transformation is smaller than that of commonly used mappings. At the same time, the HTT transformation also maintains a relatively small Pauli weight. The mapping we designed reduces the circuit depth of certain Hamiltonian simulation algorithms, facilitating faster simulation of fermionic systems.
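The abstract does not spell out the HTT construction, but its Huffman backbone is classical: repeatedly merge the three lowest-weight nodes so that high-weight terms end up near the root. A minimal sketch of that ternary-Huffman step (the coefficient values and the weighted-path-length cost below are illustrative assumptions, not the paper's data or its exact Pauli-operator assignment):

```python
import heapq
import itertools

def ternary_huffman_depths(weights):
    """Classical Huffman construction with 3-way merges.

    Returns the depth of each leaf in the ternary code tree; the
    weighted path length sum(w * depth) is minimal over ternary trees.
    """
    n = len(weights)
    # Pad with zero-weight dummies so (n - 1) is divisible by 2,
    # otherwise the final merge would have fewer than 3 children.
    pad = (2 - (n - 1) % 2) % 2
    counter = itertools.count()  # tie-breaker so tuples never compare lists
    heap = [(w, next(counter), [i]) for i, w in enumerate(weights)]
    heap += [(0.0, next(counter), []) for _ in range(pad)]
    heapq.heapify(heap)
    depths = [0] * n
    while len(heap) > 1:
        merged_w, leaves = 0.0, []
        for _ in range(3):
            w, _, ls = heapq.heappop(heap)
            merged_w += w
            for i in ls:
                depths[i] += 1  # every merged leaf moves one level down
            leaves += ls
        heapq.heappush(heap, (merged_w, next(counter), leaves))
    return depths

# Toy Hamiltonian coefficient magnitudes (illustrative values only):
coeffs = [0.5, 0.25, 0.12, 0.08, 0.05]
depths = ternary_huffman_depths(coeffs)
weighted_cost = sum(w * d for w, d in zip(coeffs, depths))
```

The weighted path length at the end plays the role of a weighted-Pauli-weight-style cost: terms with larger coefficients land at smaller depth, hence act on fewer qubits.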
To address the problem that the bit error rate (BER) of an asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication system is significantly affected by different turbulence intensities, a deep learning technique is applied to polar code decoding in the ACO-OFDM space optical communication system. Moreover, this system realizes polar code decoding and signal demodulation without frequency conduction, with superior performance and robustness compared with the traditional decoder. Simulations under different turbulence intensities as well as different mapping orders show that the convolutional neural network (CNN) decoder trained under weak-medium-strong turbulence atmospheric channels achieves a performance improvement of about 10^2 compared to the conventional decoder at 4-quadrature amplitude modulation (4QAM), and the BERs for both 16QAM and 64QAM lie between those of the conventional decoder.
Quantum error correction is a technique that enhances a system's ability to combat noise by encoding logical information into additional quantum bits, and it plays a key role in building practical quantum computers. The XZZX surface code, with only one stabilizer generator on each face, demonstrates significant application potential under biased noise. However, the existing minimum-weight perfect matching (MWPM) algorithm has high computational complexity and lacks flexibility in large-scale systems. Therefore, this paper proposes a decoding method that combines graph neural networks (GNN) with multi-classifiers: the syndrome is transformed into an undirected graph and the features are aggregated by convolutional layers, providing a more efficient and accurate decoding strategy. In the experiments, we evaluated the performance of the XZZX code under different biased noise conditions (bias = 1, 20, 200) and different code distances (d = 3, 5, 7, 9, 11). The experimental results show that under low-bias noise (bias = 1), the GNN decoder achieves a threshold of 0.18386, an improvement of approximately 19.12% compared to the MWPM decoder. Under high-bias noise (bias = 200), the GNN decoder reaches a threshold of 0.40542, improving by approximately 20.76% and overcoming the limitations of the conventional decoder. These results demonstrate that the GNN decoding method exhibits superior performance and has broad application potential in the error correction of the XZZX code.
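As background for the graph-construction step the abstract mentions ("the syndrome is transformed into an undirected graph"), a toy sketch of turning violated-stabilizer coordinates into a weighted undirected graph is shown below. The Manhattan-distance edge weights are an illustrative choice (the standard proxy for the number of physical errors linking two defects), not the paper's exact feature set:

```python
from itertools import combinations

def syndrome_to_graph(defects):
    """One node per violated stabilizer (defect); an edge between every
    pair, weighted by Manhattan distance on the code lattice."""
    nodes = list(defects)
    edges = {}
    for a, b in combinations(range(len(nodes)), 2):
        (ra, ca), (rb, cb) = nodes[a], nodes[b]
        edges[(a, b)] = abs(ra - rb) + abs(ca - cb)
    return nodes, edges

# Three defects on a toy lattice:
nodes, edges = syndrome_to_graph([(0, 1), (2, 1), (2, 4)])
```

A matching decoder pairs up nodes to minimize total edge weight; a GNN decoder instead aggregates features over these nodes and edges.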
Constituted by BCH component codes and their ordered statistics decoding (OSD), the successive cancellation list (SCL) decoding of U-UV structural codes can provide competent error-correction performance in the short-to-medium length regime. However, this list decoding complexity becomes formidable as the decoding output list size increases, which is primarily incurred by the OSD. Addressing this challenge, this paper proposes low-complexity SCL decoding through reducing the complexity of component code decoding and pruning redundant SCL decoding paths. For the former, an efficient skipping rule is introduced for the OSD so that higher-order decoding can be skipped when it cannot provide a more likely codeword candidate. It is further extended to an OSD variant, the box-and-match algorithm (BMA), in facilitating the component code decoding. Moreover, through estimating the correlation distance lower bounds (CDLBs) of the component code decoding outputs, a path-pruning (PP)-SCL decoding is proposed to further facilitate the decoding of U-UV codes. In particular, its integration with the improved OSD and BMA is discussed. Simulation results show that significant complexity reduction can be achieved. Consequently, the U-UV codes can outperform the cyclic redundancy check (CRC)-polar codes with a similar decoding complexity.
Space laser communication (SLC) is an emerging technology to support high-throughput data transmissions in space networks. In this paper, to guarantee the reliability of high-speed SLC links, we aim at practical implementation of low-density parity-check (LDPC) decoding under resource-restricted space platforms. In particular, due to the supply restrictions and cost issues of high-speed on-board devices such as analog-to-digital converters (ADCs), the input of LDPC decoding is usually constrained to hard-decision channel output. To tackle this challenge, density-evolution-based theoretical analysis is first performed to identify the cause of performance degradation in the conventional binary-initialized iterative decoding (BIID) algorithm. Then, a computation-efficient decoding algorithm named multiary-initialized iterative decoding with early termination (MIID-ET) is proposed, which improves the error-correcting performance and computation efficiency by using a reliability-based initialization method and a threshold-based decoding termination rule. Finally, numerical simulations are conducted on example codes of rates 7/8 and 1/2 to evaluate the performance of different LDPC decoding algorithms, where the proposed MIID-ET outperforms the BIID with a coding gain of 0.38 dB and a variable node calculation saving of 37%. With this advantage, the proposed MIID-ET can notably reduce the LDPC decoder's hardware implementation complexity under the same bit error rate performance, which successfully doubles the total throughput to 10 Gbps on a single-chip FPGA.
Viruses circulating in small mammals possess the potential to infect humans. Tree shrews are a group of small mammals widely inhabiting forests and plantations, but studies on viruses in tree shrews are quite limited. Herein, viral metagenomic sequencing was employed to detect the virome in tissue and swab samples from seventy-six tree shrews that we collected in Yunnan Province. As a result, genomic fragments belonging to eighteen viral families were identified, thirteen of which contain mammalian viruses. Through polymerase chain reaction (PCR) and Sanger sequencing, twelve complete genomes were determined, including five parvoviruses, three torque teno viruses (TTVs), two adenoviruses, one pneumovirus, and one hepacivirus, together with three partial genomes, including two hepatitis E viruses and one paramyxovirus. Notably, the three TTVs, named TSTTV-HNU1, TSTTV-HNU2, and TSTTV-HNU3, may compose a new genus within the family Anelloviridae. Moreover, TSParvoV-HNU5, one of the tree shrew parvoviruses detected, was likely a recombinant of two murine viruses. Divergence time estimation further revealed the potential cross-species transmission history of the tree shrew pneumovirus TSPneV-HNU1. Our study provides a comprehensive exploration of viral diversity in wild tree shrews, significantly enhancing our understanding of their role as natural virus reservoirs.
Forests play a critical role in mitigating climate change by sequestering carbon, yet their responses to environmental shifts remain complex and multifaceted. This special issue, "Tree Rings, Forest Carbon Sink, and Climate Change," compiles 41 interdisciplinary studies exploring forest-climate interactions through dendrochronological and ecological approaches. It addresses climate reconstruction (e.g., temperature, precipitation, isotopes) using tree-ring proxies, species-specific and age-dependent growth responses to warming and drought, anatomical adaptations, and methodological innovations in isotope analysis and multi-proxy integration. Key findings reveal ENSO/AMO modulation of historical climates, elevation- and latitude-driven variability in tree resilience, contrasting carbon dynamics under stress, and projected habitat shifts for vulnerable species. The issue underscores forests' dual role as climate archives and carbon regulators, offering insights for adaptive management and nature-based climate solutions. Contributions bridge micro-scale physiological processes to macro-scale ecological modeling, advancing sustainable strategies amid global environmental challenges.
To improve the decoding performance of quantum error-correcting codes in asymmetric noise channels, a neural network-based decoding algorithm for bias-tailored quantum codes is proposed. The algorithm consists of a biased noise model, a neural belief propagation decoder, a convolutional optimization layer, and a multi-objective loss function. The biased noise model simulates asymmetric error generation, providing a training dataset for decoding. The neural network, leveraging dynamic weight learning and a multi-objective loss function, mitigates error degeneracy. Additionally, the convolutional optimization layer enhances early-stage convergence efficiency. Numerical results show that for bias-tailored quantum codes, our decoder performs much better than belief propagation (BP) with ordered statistics decoding (BP+OSD). Our decoder achieves an order-of-magnitude improvement in error suppression compared to higher-order BP+OSD. Furthermore, the decoding threshold of our decoder for surface codes reaches a high value of 20%.
This paper investigates the reliability of marine internal combustion engines using an integrated approach that combines Fault Tree Analysis (FTA) and Bayesian Networks (BN). FTA provides a structured, top-down method for identifying critical failure modes and their root causes, while BN introduces flexibility in probabilistic reasoning, enabling dynamic updates based on new evidence. This dual methodology overcomes the limitations of static FTA models, offering a comprehensive framework for system reliability analysis. Critical failures, including External Leakage (ELU), Failure to Start (FTS), and Overheating (OHE), were identified as key risks. By incorporating redundancy into high-risk components such as pumps and batteries, the likelihood of these failures was significantly reduced. For instance, redundant pumps reduced the probability of ELU by 31.88%, while additional batteries decreased the occurrence of FTS by 36.45%. The results underscore the practical benefits of combining FTA and BN for enhancing system reliability, particularly in maritime applications where operational safety and efficiency are critical. This research provides valuable insights for maintenance planning and highlights the importance of redundancy in critical systems, especially as the industry transitions toward more autonomous vessels.
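The redundancy effect described above follows from standard fault-tree algebra: an OR gate combines independent basic events as 1 − ∏(1 − p_i), and duplicating a component turns its basic event into an AND gate over identical copies with probability p². A small sketch with made-up probabilities (the paper's own failure data are not reproduced here):

```python
def or_gate(probs):
    """Top-event probability of an OR gate with independent basic events."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def with_redundancy(p, copies=2):
    """AND gate over identical independent copies: all must fail."""
    return p ** copies

# Illustrative numbers only: a leak caused by either a seal failure or
# a pump failure, then the pump is made redundant.
p_seal, p_pump = 0.02, 0.05
before = or_gate([p_seal, p_pump])                      # single pump
after = or_gate([p_seal, with_redundancy(p_pump)])      # duplicated pump
reduction = (before - after) / before                   # relative drop
```

With these toy inputs the duplicated pump cuts the top-event probability by roughly two thirds, the same qualitative effect as the 31.88% ELU reduction reported above.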
Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incurs high computational costs and offers limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since the token IDs of instructions are discrete and non-differentiable, we compute gradients in the continuous embedding space to evaluate the influence of each token. The most critical tokens are identified by calculating the L2 norm of their embedding gradients. We then establish a mapping between instructions and their corresponding tokens to aggregate token-level importance into instruction-level significance. To maximize adversarial impact, a sliding window algorithm selects the most influential contiguous segments for relocation, ensuring optimal perturbation with minimal length. This approach efficiently locates critical code regions without expensive search operations. The selected segments are relocated outside their original function boundaries via a jump mechanism, which preserves runtime control flow and functionality while introducing "deletion" effects in the static instruction sequence. Extensive experiments show that AIMA reduces similarity scores by up to 35.8% in state-of-the-art BCSD models. When incorporated into training data, it also enhances model robustness, achieving a 5.9% improvement in AUROC.
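Two of the steps above can be sketched in a few lines: aggregating token-level gradient norms into instruction-level significance, and picking the most influential contiguous segment with a sliding window. The scores and the token-to-instruction mapping below are hypothetical:

```python
def token_to_instruction_importance(token_scores, token_to_instr, n_instr):
    """Aggregate token-level scores (e.g., L2 norms of embedding
    gradients) into instruction-level significance by summation."""
    instr = [0.0] * n_instr
    for score, idx in zip(token_scores, token_to_instr):
        instr[idx] += score
    return instr

def best_window(scores, k):
    """Sliding window: start index and total score of the k contiguous
    instructions with the largest summed importance."""
    cur = sum(scores[:k])
    best, best_start = cur, 0
    for i in range(k, len(scores)):
        cur += scores[i] - scores[i - k]  # slide the window by one
        if cur > best:
            best, best_start = cur, i - k + 1
    return best_start, best

# Hypothetical gradient norms for 7 tokens spread over 6 instructions:
instr_scores = token_to_instruction_importance(
    [0.2, 0.1, 0.9, 0.4, 0.3, 0.8, 0.1],
    [0, 0, 1, 2, 3, 4, 5], 6)
start, total = best_window(instr_scores, 2)  # best 2-instruction segment
```

The chosen segment (here instructions 1-2) would then be the candidate for relocation outside the function body.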
In the context of modern software development, characterized by increasing complexity and compressed development cycles, traditional static vulnerability detection methods face prominent challenges, including high false positive rates and missed detections of complex logic, due to their over-reliance on rule templates. This paper proposes a Syntax-Aware Hierarchical Attention Network (SAHAN) model, which achieves high-precision vulnerability detection through grammar-rule-driven multi-granularity code slicing and hierarchical semantic fusion mechanisms. The SAHAN model first generates Syntax Independent Units (SIUs) by slicing the code based on the Abstract Syntax Tree (AST) and predefined grammar rules, retaining vulnerability-sensitive contexts. Then, through a hierarchical attention mechanism, the local syntax-aware layer encodes fine-grained patterns within SIUs, while the global semantic correlation layer captures vulnerability chains across SIUs, achieving synergistic modeling of syntax and semantics. Experiments show that on benchmark datasets such as QEMU, SAHAN improves detection performance by 4.8% to 13.1% on average compared to baseline models such as Devign and VulDeePecker.
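As a toy illustration of AST-driven code slicing, the snippet below cuts source code into per-function units using Python's `ast` module. This is a stand-in only: the paper's SIU grammar rules and target language are not specified in the abstract, and the function bodies here are invented:

```python
import ast

SRC = '''
def read_input(buf):
    n = len(buf)
    return buf[:n]

def copy_data(dst, src, size):
    for i in range(size):
        dst[i] = src[i]
'''

def slice_functions(source):
    """Cut source code into per-function units via the AST, recording
    each function's name and the top-level statement kinds it contains -
    a toy analogue of grammar-rule-driven slicing into units."""
    tree = ast.parse(source)
    units = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            units[node.name] = [type(stmt).__name__ for stmt in node.body]
    return units

units = slice_functions(SRC)
```

A real slicer would keep the vulnerability-sensitive statements themselves (calls, bounds checks, pointer arithmetic), not just their node kinds.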
This paper proposes a modification of the soft output Viterbi decoding algorithm (SOVA) which combines a convolutional code with Huffman coding. The idea is to extract the bit probability information from the Huffman coding and use it to compute the a priori source information, which can be used when the channel environment is bad. The suggested scheme does not require changes on the transmitter side. Compared with separate decoding systems, the gain in signal-to-noise ratio is about 0.5-1.0 dB with a limi...
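One simple form of the bit probability information extractable from a Huffman code is the prior probability that a stream bit equals 1, which deviates from 0.5 whenever the source is skewed; that deviation is precisely the residual redundancy an a-priori-aided SOVA can exploit. A sketch with a toy codebook (illustrative, not the paper's source model):

```python
def huffman_bit_prior(codebook, probs):
    """A priori probability that a bit in the Huffman-coded stream is 1:
    expected number of 1-bits per symbol divided by expected codeword
    length, both taken under the symbol distribution."""
    exp_ones = sum(p * code.count("1") for code, p in zip(codebook, probs))
    exp_len = sum(p * len(code) for code, p in zip(codebook, probs))
    return exp_ones / exp_len

# Toy 3-symbol source with codewords 0, 10, 11 and skewed probabilities:
p1 = huffman_bit_prior(["0", "10", "11"], [0.6, 0.25, 0.15])
```

Here p1 ≈ 0.393 rather than 0.5, so a decoder that biases its metric toward 0-bits gains information "for free" from the source statistics.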
This paper presents an efficient quadtree-based fractal image coding scheme in the wavelet transform domain, built on the wavelet-based theory of fractal image compression introduced by Davis. In the scheme, zerotrees of wavelet coefficients are used to reduce the number of domain blocks, which leads to a lower bit cost for representing the location information of fractal coding, and overall entropy-constrained optimization is performed for the decision trees as well as for the sets of scalar quantizers and self-quantizers of wavelet subtrees. Experimental results show that at low bit rates, the proposed scheme gives about 1 dB improvement in PSNR over the reported results.
To reduce the time required to complete the regeneration process of erasure codes, we propose a Tree-structured Parallel Regeneration (TPR) scheme for multiple data losses in distributed storage systems. Under the scheme, two algorithms are proposed for the construction of multiple regeneration trees, namely the edge-disjoint algorithm and the edge-sharing algorithm. The edge-disjoint algorithm constructs multiple independent trees; it is simple and appropriate for environments where newcomers and their providers are distributed over a large area and have few intersections. The edge-sharing algorithm constructs multiple trees that compete to utilize the bandwidth and makes better use of the bandwidth, although it needs to measure the available bandwidth and deal with bandwidth changes; it is therefore difficult to implement in practical systems. The parallel regeneration for multiple data losses in TPR primarily includes two optimizations: firstly, transferring the data through bandwidth-optimized paths in a pipelined manner; secondly, executing data regeneration over multiple trees in parallel. To evaluate the proposal, we implement an event-based simulator and make a detailed comparison with some popular regeneration methods. The quantitative comparison results show that the use of TPR with either the edge-disjoint or the edge-sharing algorithm reduces the regeneration time significantly.
Quasi-cyclic low-density parity-check (QC-LDPC) codes can be constructed conveniently by cyclic lifting of protographs. To eliminate short cycles in the Tanner graph and thus guarantee performance, an algorithm is first designed to enumerate the harmful short cycles in the protograph, and then a greedy algorithm is proposed to assign proper permutation shifts to the circulant permutation submatrices in the parity-check matrix after lifting. Compared with the existing deterministic edge swapping (DES) algorithms, the proposed greedy algorithm adds more constraints on the assignment of permutation shifts to improve performance. Simulation results verify that it outperforms DES in reducing short cycles. In addition, it is proved that the parity-check matrices of the cyclically lifted QC-LDPC codes can be transformed into block lower-triangular form when the lifting factor is a power of 2. Utilizing this property, the QC-LDPC codes can be encoded by preprocessing the base matrices, which reduces the encoding complexity to a large extent.
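Cyclic lifting itself is mechanical: each base-matrix entry s ≥ 0 expands to a Z × Z circulant permutation matrix (the identity cyclically shifted by s), and −1 expands to the Z × Z zero block. A minimal sketch with an invented 2 × 3 base matrix (the shift assignment is exactly what the greedy algorithm above optimizes):

```python
def lift(base, Z):
    """Expand a protograph base matrix into a QC-LDPC parity-check
    matrix: entry s >= 0 becomes the Z x Z circulant permutation matrix
    with a 1 at column (row + s) mod Z; entry -1 becomes the zero block."""
    rows, cols = len(base) * Z, len(base[0]) * Z
    H = [[0] * cols for _ in range(rows)]
    for bi, brow in enumerate(base):
        for bj, s in enumerate(brow):
            if s < 0:
                continue  # -1 marks an all-zero block
            for r in range(Z):
                H[bi * Z + r][bj * Z + (r + s) % Z] = 1
    return H

# Toy 2x3 base matrix with lifting factor Z = 4 (illustrative shifts):
H = lift([[0, 1, -1], [2, -1, 3]], 4)
```

Each row of H then has weight equal to the number of non-negative entries in its base row, so the protograph's degree profile is preserved by the lift.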
The enhanced variable rate codec (EVRC) is a standard for the "Speech Service Option 3 for Wideband Spread Spectrum Digital Systems," which has been employed in both IS-95 cellular systems and ANSI J-STD-008 PCS (personal communications systems). This paper concentrates on channel decoders that exploit the residual redundancy inherent in the EVRC bitstream. This residual redundancy is quantified by modeling the parameters as first-order Markov chains and computing the entropy rate based on the relative frequencies of transitions. Moreover, this residual redundancy can be exploited by an appropriately "tuned" channel decoder to provide substantial coding gain when compared with decoders that do not exploit it. Channel coding schemes include convolutional codes and iteratively decoded parallel concatenated convolutional "turbo" codes.
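The quantification step described above, the entropy rate of a first-order Markov chain estimated from relative transition frequencies, can be sketched directly; the transition counts below are hypothetical, not EVRC measurements:

```python
from math import log2

def markov_entropy_rate(transition_counts):
    """Empirical entropy rate (bits/symbol) of a first-order Markov
    chain: weight each state's conditional entropy by its empirical
    occupancy (row total / grand total)."""
    totals = [sum(row) for row in transition_counts]
    grand = sum(totals)
    h = 0.0
    for row, tot in zip(transition_counts, totals):
        if tot == 0:
            continue  # state never visited
        row_h = -sum((c / tot) * log2(c / tot) for c in row if c > 0)
        h += (tot / grand) * row_h
    return h

# Hypothetical counts for a sticky 2-state parameter track:
H_rate = markov_entropy_rate([[90, 10], [10, 90]])
```

Here H_rate ≈ 0.47 bits/symbol versus 1 bit for a memoryless uniform source; the gap is the residual redundancy a tuned channel decoder can exploit.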
Low-density parity-check (LDPC) codes are widely used due to their significant error-correction capability and linear decoding complexity. However, LDPC codes alone cannot satisfy the ultra-low bit error rate (BER) requirement of next-generation ultra-high-speed communications, due to the error floor phenomenon. According to the residual error characteristics of LDPC codes, we consider using high-rate Reed-Solomon (RS) codes as the outer codes to construct LDPC-RS product codes to eliminate the error floor, and propose a hybrid error-erasure-correction decoding algorithm for the outer code to exploit its erasure-correction capability effectively. Furthermore, the overall performance of the product codes is improved using iteration between the outer and inner codes. Simulation results validate that the BER of the product code with the proposed hybrid algorithm is lower than that of the product code with no erasure correction. Compared with other product codes using LDPC codes, the proposed LDPC-RS product code with the same code rate has much better performance and smaller rate loss, attributed to the maximum distance separable (MDS) property and significant erasure-correction capability of RS codes.
Funding: supported by the National Key Research and Development Program of China (Grant No. 2024YFB4504101), the National Natural Science Foundation of China (Grant No. 22303022), and the Anhui Province Innovation Plan for Science and Technology (Grant No. 202423r06050002).
Funding: supported by the National Natural Science Foundation of China (No. 12104141).
Funding: supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049), the Joint Fund of the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001), and the Key Research and Development Program of Shandong Province, China (Grant No. 2023CXGC010901).
Funding: supported by the National Natural Science Foundation of China (NSFC) under project ID 62071498 and the Guangdong Natural Science Foundation (GDNSF) under project ID 2024A1515010213.
Funding: supported by the National Key R&D Program of China (Grant No. 2022YFA1005000) and the National Natural Science Foundation of China (Grant Nos. 62101308 and 62025110).
Funding: funded by the National Natural Science Foundation of China (No. U2002218), the Science and Technology Innovation Program of Hunan Province (2024RC1028), and Hunan University (No. 521119400156).
Funding: Supported by the Outstanding Action Plan of Chinese Sci-tech Journals (Grant No. OAP-C-077).
Abstract: Forests play a critical role in mitigating climate change by sequestering carbon, yet their responses to environmental shifts remain complex and multifaceted. This special issue, "Tree Rings, Forest Carbon Sink, and Climate Change," compiles 41 interdisciplinary studies exploring forest-climate interactions through dendrochronological and ecological approaches. It addresses climate reconstruction (e.g., temperature, precipitation, isotopes) using tree-ring proxies, species-specific and age-dependent growth responses to warming and drought, anatomical adaptations, and methodological innovations in isotope analysis and multi-proxy integration. Key findings reveal ENSO/AMO modulation of historical climates, elevation- and latitude-driven variability in tree resilience, contrasting carbon dynamics under stress, and projected habitat shifts for vulnerable species. The issue underscores forests' dual role as climate archives and carbon regulators, offering insights for adaptive management and nature-based climate solutions. Contributions bridge micro-scale physiological processes to macro-scale ecological modeling, advancing sustainable strategies amid global environmental challenges.
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62371240, 61802175, 62401266, and 12201300), the National Key R&D Program of China (Grant No. 2022YFB3103800), the Natural Science Foundation of Jiangsu Province (Grant No. BK20241452), the Fundamental Research Funds for the Central Universities (Grant No. 30923011014), and the fund of the Laboratory for Advanced Computing and Intelligence Engineering (Grant No. 2023-LYJJ-01-009).
Abstract: To improve the decoding performance of quantum error-correcting codes in asymmetric noise channels, a neural-network-based decoding algorithm for bias-tailored quantum codes is proposed. The algorithm consists of a biased noise model, a neural belief propagation decoder, a convolutional optimization layer, and a multi-objective loss function. The biased noise model simulates asymmetric error generation, providing a training dataset for decoding. The neural network, leveraging dynamic weight learning and the multi-objective loss function, mitigates error degeneracy. Additionally, the convolutional optimization layer improves early-stage convergence efficiency. Numerical results show that for bias-tailored quantum codes, our decoder performs much better than belief propagation with ordered statistics decoding (BP+OSD), achieving an order-of-magnitude improvement in error suppression compared to higher-order BP+OSD. Furthermore, our decoder reaches a high decoding threshold of 20% for surface codes.
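The biased noise model mentioned above can be sketched with the standard Z-biased Pauli parameterization, where the bias eta is the ratio of dephasing to non-dephasing error rates. This is one common convention for bias-tailored codes, not necessarily the exact model used in the paper.

```python
import random

def sample_biased_pauli(n, p, eta, seed=0):
    """Sample an n-qubit Pauli error from a Z-biased noise model:
    total error rate p, bias eta = p_Z / (p_X + p_Y), so
    p_Z = p*eta/(1+eta) and p_X = p_Y = p/(2*(1+eta)).
    (A standard biased-noise parameterization; the paper's exact
    training-set generator may differ.)"""
    rng = random.Random(seed)
    p_z = p * eta / (1 + eta)
    p_xy = p / (2 * (1 + eta))
    error = []
    for _ in range(n):
        u = rng.random()
        if u < p_z:
            error.append('Z')
        elif u < p_z + p_xy:
            error.append('X')
        elif u < p_z + 2 * p_xy:
            error.append('Y')
        else:
            error.append('I')
    return error

# At bias eta = 10, dephasing (Z) errors dominate the sampled pattern.
err = sample_biased_pauli(1000, 0.1, 10.0)
```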
Funding: Supported by Istanbul Technical University (Project No. 45698) and through the "Young Researchers' Career Development Project-training of doctoral students" of the Croatian Science Foundation.
Abstract: This paper investigates the reliability of marine internal combustion engines using an integrated approach that combines Fault Tree Analysis (FTA) and Bayesian Networks (BN). FTA provides a structured, top-down method for identifying critical failure modes and their root causes, while BN introduces flexibility in probabilistic reasoning, enabling dynamic updates based on new evidence. This dual methodology overcomes the limitations of static FTA models, offering a comprehensive framework for system reliability analysis. Critical failures, including External Leakage (ELU), Failure to Start (FTS), and Overheating (OHE), were identified as key risks. By incorporating redundancy into high-risk components such as pumps and batteries, the likelihood of these failures was significantly reduced: for instance, redundant pumps reduced the probability of ELU by 31.88%, while additional batteries decreased the occurrence of FTS by 36.45%. The results underscore the practical benefit of combining FTA and BN for enhancing system reliability, particularly in maritime applications where operational safety and efficiency are critical. This research provides valuable insights for maintenance planning and highlights the importance of redundancy in critical systems, especially as the industry transitions toward more autonomous vessels.
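The effect of redundancy quoted above follows from basic fault-tree gate arithmetic. The sketch below shows the two standard gate probability formulas under an independence assumption; the numbers are illustrative toy values, not the paper's data, and the BN evidence-updating step is not shown.

```python
def and_gate(*probs):
    """AND gate: the top event occurs only if ALL inputs fail
    (parallel / redundant components). P = prod(p_i)."""
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    """OR gate: the top event occurs if ANY input fails
    (series components), assuming independence: 1 - prod(1 - p_i)."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# Toy example: one pump failing with probability 0.05 versus two
# redundant pumps, which must BOTH fail for the subsystem to fail.
single = 0.05
redundant = and_gate(0.05, 0.05)       # 0.0025
reduction = 1 - redundant / single     # 95% reduction in this toy case
```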
Funding: Supported by the Key Laboratory of Cyberspace Security, Ministry of Education, China.
Abstract: Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and thereby protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incurs high computational cost and offers limited perturbation diversity. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since the token IDs of instructions are discrete and non-differentiable, we compute gradients in the continuous embedding space to evaluate the influence of each token. The most critical tokens are identified by the L2 norm of their embedding gradients. We then establish a mapping between instructions and their corresponding tokens to aggregate token-level importance into instruction-level significance. To maximize adversarial impact, a sliding-window algorithm selects the most influential contiguous segments for relocation, ensuring optimal perturbation with minimal length. This approach efficiently locates critical code regions without expensive search operations. The selected segments are relocated outside their original function boundaries via a jump mechanism, which preserves runtime control flow and functionality while introducing "deletion" effects in the static instruction sequence. Extensive experiments show that AIMA reduces similarity scores by up to 35.8% on state-of-the-art BCSD models. When incorporated into training data, it also enhances model robustness, achieving a 5.9% improvement in AUROC.
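The selection pipeline described above (token-gradient L2 norms, aggregation to instructions, sliding window) can be sketched in a few lines. All names are illustrative and the gradients here are fabricated toy values; in the real method they would come from backpropagating the similarity loss to the embedding layer.

```python
import numpy as np

def top_window(grads, token_to_instr, n_instr, window):
    """Aggregate token-level embedding-gradient L2 norms into
    instruction-level scores, then pick the most influential contiguous
    window of instructions (simplified version of the selection step
    described in the abstract)."""
    token_scores = np.linalg.norm(grads, axis=1)   # L2 norm per token gradient
    instr_scores = np.zeros(n_instr)
    for t, i in enumerate(token_to_instr):         # sum token scores per instruction
        instr_scores[i] += token_scores[t]
    best_start, best_sum = 0, -1.0
    for s in range(n_instr - window + 1):          # slide a fixed-length window
        w = instr_scores[s:s + window].sum()
        if w > best_sum:
            best_start, best_sum = s, w
    return best_start, instr_scores

# 4 tokens mapped onto 3 instructions; toy 2-d embedding gradients.
grads = np.array([[3.0, 4.0], [0.0, 1.0], [1.0, 0.0], [6.0, 8.0]])
start, scores = top_window(grads, [0, 0, 1, 2], n_instr=3, window=2)
```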
Funding: Supported by the research start-up funds for invited doctors of Lanzhou University of Technology under Grant 14/062402.
Abstract: In modern software development, characterized by increasing complexity and compressed development cycles, traditional static vulnerability detection methods face prominent challenges, including high false positive rates and missed detection of complex logic, due to their over-reliance on rule templates. This paper proposes a Syntax-Aware Hierarchical Attention Network (SAHAN) model, which achieves high-precision vulnerability detection through grammar-rule-driven multi-granularity code slicing and hierarchical semantic fusion. SAHAN first generates Syntax Independent Units (SIUs) by slicing the code based on the Abstract Syntax Tree (AST) and predefined grammar rules, retaining vulnerability-sensitive contexts. Then, through a hierarchical attention mechanism, a local syntax-aware layer encodes fine-grained patterns within SIUs, while a global semantic correlation layer captures vulnerability chains across SIUs, achieving synergistic modeling of syntax and semantics. Experiments show that on benchmark datasets such as QEMU, SAHAN improves detection performance by 4.8% to 13.1% on average over baseline models such as Devign and VulDeePecker.
Abstract: This paper proposes a modification of the soft-output Viterbi algorithm (SOVA) that combines a convolutional code with Huffman coding. The idea is to extract bit probability information from the Huffman code and use it to compute a priori source information, which helps when the channel environment is bad. The suggested scheme requires no changes on the transmitter side. Compared with separate decoding systems, the gain in signal-to-noise ratio is about 0.5-1.0 dB with a limi...
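One plausible way to extract the bit probability information the abstract mentions is to build the Huffman code and compute, per bit position, the probability-weighted frequency of a "1" across codewords; this prior can then bias the SOVA metric. The construction below is a sketch under that assumption; the paper's exact formulation may differ.

```python
import heapq

def huffman_code(freqs):
    """Build a Huffman code for a {symbol: probability} dict.
    Heap entries are (probability, tie-breaker, {symbol: code-so-far})."""
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(freqs.items())]
    heapq.heapify(heap)
    cnt = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c0.items()}
        merged.update({s: "1" + c for s, c in c1.items()})
        heapq.heappush(heap, (p0 + p1, cnt, merged))
        cnt += 1
    return heap[0][2]

def bit_prior(freqs, code, pos):
    """A priori P(bit at position pos == 1), weighted by symbol
    probability among codewords long enough to reach that position."""
    num = den = 0.0
    for s, p in freqs.items():
        cw = code[s]
        if len(cw) > pos:
            den += p
            if cw[pos] == "1":
                num += p
    return num / den

freqs = {"a": 0.5, "b": 0.25, "c": 0.25}
code = huffman_code(freqs)       # optimal lengths: 1, 2, 2 bits
p1 = bit_prior(freqs, code, 0)   # prior on the first codeword bit
```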
Abstract: This paper presents an efficient quadtree-based fractal image coding scheme in the wavelet transform domain, based on the wavelet-based theory of fractal image compression introduced by Davis. In the scheme, zerotrees of wavelet coefficients are used to reduce the number of domain blocks, which lowers the bit cost required to represent the location information of fractal coding, and an overall entropy-constrained optimization is performed for the decision trees as well as for the sets of scalar quantizers and self-quantizers of wavelet subtrees. Experimental results show that at low bit rates the proposed scheme gives about 1 dB improvement in PSNR over previously reported results.
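The quadtree component can be illustrated by the basic recursive partitioning step: split a block into four quadrants unless it is already homogeneous. This sketch shows only the partitioning by a variance threshold (an assumed split criterion); the wavelet-domain range/domain matching is not shown.

```python
import numpy as np

def quadtree(block, thresh, min_size, x=0, y=0):
    """Recursively partition an image block: keep it as a leaf if its
    variance is below `thresh` or it has reached `min_size`, otherwise
    split into four quadrants. Returns leaf rectangles (row, col, h, w)."""
    h, w = block.shape
    if h <= min_size or block.var() <= thresh:
        return [(x, y, h, w)]
    h2, w2 = h // 2, w // 2
    leaves = []
    leaves += quadtree(block[:h2, :w2], thresh, min_size, x, y)
    leaves += quadtree(block[:h2, w2:], thresh, min_size, x, y + w2)
    leaves += quadtree(block[h2:, :w2], thresh, min_size, x + h2, y)
    leaves += quadtree(block[h2:, w2:], thresh, min_size, x + h2, y + w2)
    return leaves

# Flat 8x8 image except one active quadrant: splits once into 4 leaves.
img = np.zeros((8, 8))
img[4:, 4:] = 1.0
leaves = quadtree(img, thresh=1e-6, min_size=2)
```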
Funding: Supported by the National Grand Fundamental Research of China (973 Program) under Grant No. 2011CB302601, the National High Technology Research and Development of China (863 Program) under Grant No. 2013AA01A213, the National Natural Science Foundation of China under Grant No. 60873215, the Natural Science Foundation for Distinguished Young Scholars of Hunan Province under Grant No. S2010J5050, and the Specialized Research Fund for the Doctoral Program of Higher Education under Grant No. 20124307110015.
Abstract: To reduce the time required to complete the regeneration process of erasure codes, we propose a Tree-structured Parallel Regeneration (TPR) scheme for multiple data losses in distributed storage systems. Under this scheme, two algorithms are proposed for constructing multiple regeneration trees: the edge-disjoint algorithm and the edge-sharing algorithm. The edge-disjoint algorithm constructs multiple independent trees; it is simple and suited to environments where newcomers and their providers are distributed over a large area and have few intersections. The edge-sharing algorithm constructs multiple trees that compete for bandwidth and therefore utilize it better, although it must measure the available bandwidth and handle bandwidth changes, making it harder to implement in practical systems. The parallel regeneration of multiple data losses in TPR involves two main optimizations: first, transferring data through bandwidth-optimized paths in a pipelined manner; second, executing data regeneration over multiple trees in parallel. To evaluate the proposal, we implement an event-based simulator and make a detailed comparison with popular regeneration methods. The quantitative results show that TPR, with either the edge-disjoint or the edge-sharing algorithm, reduces the regeneration time significantly.
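The tree-construction step behind "bandwidth-optimized paths" can be sketched as a maximum-bandwidth spanning tree built with Prim's algorithm on negated weights. This is a simplified single-tree sketch with hypothetical node names; the paper's edge-disjoint algorithm would build several such trees with disjoint edge sets.

```python
import heapq

def max_bandwidth_tree(bw, root):
    """Greedily grow a regeneration tree from `root`, always attaching
    the unvisited node reachable over the highest-bandwidth link.
    `bw` is a dict {(u, v): bandwidth} for an undirected graph."""
    adj = {}
    for (u, v), b in bw.items():
        adj.setdefault(u, []).append((b, v))
        adj.setdefault(v, []).append((b, u))
    visited = {root}
    heap = [(-b, root, v) for b, v in adj[root]]   # max-heap via negation
    heapq.heapify(heap)
    tree = []
    while heap and len(visited) < len(adj):
        nb, u, v = heapq.heappop(heap)
        if v in visited:
            continue
        visited.add(v)
        tree.append((u, v, -nb))                   # keep edge and its bandwidth
        for b, w in adj[v]:
            if w not in visited:
                heapq.heappush(heap, (-b, v, w))
    return tree

# Toy topology: newcomer "n" and providers "p1", "p2" (names illustrative).
bw = {("n", "p1"): 10, ("n", "p2"): 3, ("p1", "p2"): 8}
tree = max_bandwidth_tree(bw, "n")
```

Note how the direct 3-unit link to `p2` is bypassed in favor of relaying through `p1`, which is exactly the benefit of tree-structured over star-structured regeneration.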
Funding: The National Key Technology R&D Program of China during the 12th Five-Year Plan Period (No. 2012BAH15B00).
Abstract: Quasi-cyclic low-density parity-check (QC-LDPC) codes can be constructed conveniently by cyclic lifting of protographs. To eliminate short cycles in the Tanner graph and thereby guarantee performance, an algorithm to enumerate the harmful short cycles in the protograph is first designed, and then a greedy algorithm is proposed to assign proper permutation shifts to the circulant permutation submatrices of the parity-check matrix after lifting. Compared with existing deterministic edge swapping (DES) algorithms, the proposed greedy algorithm adds more constraints in the assignment of permutation shifts to improve performance. Simulation results verify that it outperforms DES in reducing short cycles. In addition, it is proved that the parity-check matrices of the cyclically lifted QC-LDPC codes can be transformed into block lower-triangular form when the lifting factor is a power of 2. Utilizing this property, the QC-LDPC codes can be encoded by preprocessing the base matrices, which greatly reduces the encoding complexity.
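The cyclic-lifting step itself is standard and easy to sketch: each entry of the shift matrix expands to a cyclically shifted identity block, with -1 conventionally marking the all-zero block. The shift values below are arbitrary illustrations, not the ones the paper's greedy algorithm would choose.

```python
import numpy as np

def lift(shifts, Z):
    """Expand a protograph shift matrix into a binary QC-LDPC
    parity-check matrix: each entry s >= 0 becomes the Z x Z identity
    cyclically shifted by s columns; an entry of -1 becomes the
    all-zero Z x Z block."""
    m, n = len(shifts), len(shifts[0])
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            s = shifts[i][j]
            if s >= 0:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(
                    np.eye(Z, dtype=np.uint8), s, axis=1)
    return H

# A tiny 2x2 base matrix lifted with factor Z = 4.
H = lift([[0, 1], [2, -1]], Z=4)
```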
Abstract: The enhanced variable rate codec (EVRC) is the standard for "Speech Service Option 3 for Wideband Spread Spectrum Digital Systems," which has been employed in both IS-95 cellular systems and ANSI J-STD-008 PCS (personal communications systems). This paper concentrates on channel decoders that exploit the residual redundancy inherent in the EVRC bitstream. This residual redundancy is quantified by modeling the parameters as first-order Markov chains and computing the entropy rate from the relative frequencies of transitions. Moreover, this residual redundancy can be exploited by an appropriately "tuned" channel decoder to provide substantial coding gain compared with decoders that do not exploit it. The channel coding schemes include convolutional codes and iteratively decoded parallel concatenated convolutional "turbo" codes.
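The quantification step described above is the textbook entropy rate of a first-order Markov chain; the sketch below computes it for an assumed two-state example (the transition probabilities are illustrative, not EVRC parameter statistics).

```python
import math

def entropy_rate(T, pi):
    """Entropy rate of a first-order Markov chain, in bits/symbol:
    H = -sum_i pi_i * sum_j T[i][j] * log2(T[i][j]),
    where T is the transition matrix and pi the stationary distribution.
    For a b-bit parameter, the residual redundancy is then b - H."""
    h = 0.0
    for i, row in enumerate(T):
        for p in row:
            if p > 0:
                h -= pi[i] * p * math.log2(p)
    return h

# Symmetric 2-state chain: stay with probability 0.9, switch with 0.1.
T = [[0.9, 0.1], [0.1, 0.9]]
pi = [0.5, 0.5]
H = entropy_rate(T, pi)    # ~0.469 bits/symbol
redundancy = 1 - H         # ~0.531 bits of exploitable redundancy
```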
Funding: This work was supported in part by the National Natural Science Foundation of China (No. 61671324) and the Director's Funding of the Pilot National Laboratory for Marine Science and Technology (Qingdao) (No. QNLM201712).
Abstract: Low-density parity-check (LDPC) codes are widely used due to their significant error-correction capability and linear decoding complexity. However, because of the error-floor phenomenon, LDPC codes alone cannot satisfy the ultra-low bit error rate (BER) requirement of next-generation ultra-high-speed communications. Based on the residual error characteristics of LDPC codes, we use high-rate Reed-Solomon (RS) codes as outer codes to construct LDPC-RS product codes that eliminate the error floor, and we propose a hybrid error-erasure-correction decoding algorithm for the outer code to exploit its erasure-correction capability effectively. Furthermore, the overall performance of the product code is improved by iterating between the outer and inner codes. Simulation results validate that the BER of the product code with the proposed hybrid algorithm is lower than that of the product code without erasure correction. Compared with other product codes using LDPC codes, the proposed LDPC-RS product code with the same code rate performs much better and suffers a smaller rate loss, attributable to the maximum distance separable (MDS) property and significant erasure-correction capability of RS codes.
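The advantage of hybrid error-erasure decoding rests on the MDS bound: an RS(n, k) code corrects e errors and s erasures whenever 2e + s <= n - k, so each residual LDPC error flagged as an erasure costs half as much redundancy. The sketch below just evaluates this condition; RS(255, 239) is a common example, not necessarily the outer code used in the paper.

```python
def rs_decodable(n, k, errors, erasures):
    """An MDS Reed-Solomon (n, k) code corrects e errors and s erasures
    whenever 2*e + s <= n - k. This is why flagging unreliable positions
    as erasures stretches the correction capability of a high-rate
    outer code over the inner LDPC code's error floor."""
    return 2 * errors + erasures <= n - k

# RS(255, 239) has n - k = 16 redundancy symbols.
ok_errors_only = rs_decodable(255, 239, errors=8, erasures=0)   # 16 <= 16
ok_hybrid = rs_decodable(255, 239, errors=4, erasures=8)        # 16 <= 16
too_many = rs_decodable(255, 239, errors=8, erasures=1)         # 17 > 16
```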