Circumferentially non-uniform tip clearances induced by rotor eccentricity significantly affect the overall performance of axial compressors, particularly the stability margin. Currently, Computational Fluid Dynamics (CFD) plays a crucial role in the aerodynamic analysis of eccentric compressors. However, conventional full-annulus Unsteady Reynolds-Averaged Navier-Stokes (URANS) simulations are prohibitively expensive for routine design and analysis purposes. To address this issue, this paper presents a novel Fourier-based method, called the Time-Space Collocation (TSC) method, for efficient simulations of eccentric compressors. This method coherently treats temporal and spatial harmonics, making it well suited to the rotor eccentricity problem, as the perturbation waves induced by eccentricity are time-periodic with respect to the rotor and space-periodic with respect to the stator. Three numerical cases, including NASA Rotor 67, the original Stage 67, and Stage 67 with a reduced rotor-stator axial gap, were conducted to verify the effectiveness of the TSC method. The results indicate that, for the rotor eccentricity levels studied in this paper, the influence of weak rotor-stator interactions can be disregarded in the original Stage 67. In this situation, applying three harmonics can accurately capture both the performance variations and the non-uniformly distributed flowfields of eccentric compressors, while reducing run time by two orders of magnitude compared to full-annulus URANS simulations. However, in Stage 67 with a reduced rotor-stator axial gap, the results that include rotor-stator interactions align much more closely with the URANS results. Nevertheless, the TSC simulations can still achieve speed-ups of several dozen times. Overall, the TSC method shows promising potential for application within the engineering community.
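The spatial side of the TSC idea can be illustrated with a minimal sketch: a circumferentially non-uniform tip-clearance distribution is represented by its mean plus a few Fourier harmonics. This is only the harmonic-truncation step, not the paper's coupled time-space solver; the grid size and profile below are illustrative.

```python
import numpy as np

def truncate_harmonics(gap, n_harmonics):
    """Keep only the mean and the first n_harmonics circumferential
    Fourier harmonics of a tip-clearance distribution gap(theta).
    (Illustrates the spatial-harmonic representation behind the TSC
    method; the actual method couples these with temporal harmonics
    inside the flow solver.)"""
    coeffs = np.fft.rfft(gap)          # one-sided circumferential spectrum
    coeffs[n_harmonics + 1:] = 0.0     # keep DC + N harmonics, drop the rest
    return np.fft.irfft(coeffs, n=gap.size)

theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
# Eccentric rotor: clearance varies as mean minus eccentricity * cos(theta)
gap = 0.5 - 0.2 * np.cos(theta)
recon = truncate_harmonics(gap, 3)
# A pure first-harmonic (eccentricity) profile is captured exactly by N >= 1,
# consistent with the abstract's finding that three harmonics suffice.
assert np.allclose(recon, gap, atol=1e-12)
```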
Indian Railways has been the largest people-moving transport infrastructure in India. Over the years, the systems and trains have been upgraded, resulting in both better passenger amenities and reduced travel time. The newest addition is the Vande Bharat Express, a semi-high-speed train introduced in India in 2019. The train currently runs on 10 routes and has brought significant changes to India's railway network. This article explores the introduction of Vande Bharat Express trains in India and their effects on the country's interstation time-space shrinkage using cartographic techniques. Cartographic techniques such as stepwise multidimensional scaling and interpolation using the distance cartogram plugin in QGIS are mainly used for generating the time-space maps for various speeds. The limitations of these techniques and the methods to overcome those limitations are also explored in this article.
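The core of a time-space map is a scaling step that places stations so that map distance reflects travel time rather than geographic distance. Below is a sketch using classical multidimensional scaling on a symmetric travel-time matrix; the QGIS distance cartogram plugin and the article's stepwise MDS use their own algorithms, so this only illustrates the principle.

```python
import numpy as np

def time_space_mds(travel_time):
    """Classical MDS of a symmetric travel-time matrix: stations are
    embedded in 2D so that Euclidean distance approximates travel time."""
    T = np.asarray(travel_time, dtype=float)
    n = T.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (T ** 2) @ J              # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:2]            # two largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Three stations on a line with uniform speed: 1 h between neighbours.
T = np.array([[0, 1, 2],
              [1, 0, 1],
              [2, 1, 0]], float)
coords = time_space_mds(T)
d01 = np.linalg.norm(coords[0] - coords[1])
d02 = np.linalg.norm(coords[0] - coords[2])
# Exact line distances are reproduced exactly by classical MDS.
assert abs(d01 - 1.0) < 1e-8 and abs(d02 - 2.0) < 1e-8
```

A faster train shrinks entries of `T`, and the embedded stations move closer together, which is precisely the "time-space shrinkage" the maps visualize.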
This paper is devoted to investigating the spreading speed of a time-space periodic epidemic model with vital dynamics and standard incidence in discrete media. We establish the existence of the leftward and rightward spreading speeds for the infective individuals, which can be used to estimate how fast the disease spreads. To overcome the difficulty arising from the lack of a comparison principle for such time-space periodic non-monotone systems, our proof is mainly based on constructing a series of scalar time-space periodic equations, establishing the spreading speeds for these auxiliary equations, and using comparison methods. This may be the first work to study the spreading speed for time-space periodic non-monotone systems.

Funding: Natural Science Basic Research Program of Shanxi (Grant No. 2024JC-YBMS-025); Innovation Capability Support Program of Shanxi (Grant No. 2024RS-CXTD-88).
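The notion of a rightward spreading speed can be made concrete numerically: simulate a discrete-space model with a time-periodic coefficient and track how fast the infection front advances per time step. The toy SI-type lattice model below is only illustrative (the paper's model additionally has vital dynamics and standard incidence, and its speeds are obtained analytically); all parameter values are made up.

```python
import numpy as np

def rightward_speed(beta0=0.5, amp=0.2, period=10, steps=200, n=401):
    """Crude front-tracking estimate of the rightward spreading speed of
    a discrete-media SI-type model with time-periodic transmission."""
    I = np.zeros(n)
    I[n // 2] = 0.1                                # seed infection at center
    front = []
    for t in range(steps):
        beta = beta0 * (1 + amp * np.sin(2 * np.pi * t / period))
        # nearest-neighbour coupling on the lattice (discrete diffusion)
        lap = np.roll(I, 1) + np.roll(I, -1) - 2 * I
        I = np.clip(I + 0.2 * lap + beta * I * (1 - I), 0.0, 1.0)
        above = np.nonzero(I > 0.5)[0]
        front.append(above.max() if above.size else n // 2)
    # average front displacement per step over the second half of the run,
    # after the transient has died out
    f = np.array(front[steps // 2:])
    return (f[-1] - f[0]) / (f.size - 1)

c = rightward_speed()
assert 0.0 < c < 2.0   # a well-defined, finite rightward speed emerges
```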
Seismic anisotropy has been extensively acknowledged as a crucial element that influences wave propagation characteristics during wavefield simulation, inversion, and imaging. Transverse isotropy (TI) and orthorhombic anisotropy (OA) are two typical categories of anisotropic media in exploration geophysics. Compared with the elastic wave equations in TI and OA media, pseudo-acoustic wave equations (PWEs) based on the acoustic assumption can markedly reduce computational cost and complexity. However, the presently available PWEs may experience SV-wave contamination and instability when the anisotropic parameters do not satisfy the approximation condition. Exploiting pure-mode wave equations can effectively resolve these issues and generate pure P-wave events without any artifacts. To further improve computational accuracy and efficiency, we develop two novel pure qP-wave equations (PPEs) and illustrate the corresponding numerical solutions in the time-space domain for 3D tilted TI (TTI) and tilted OA (TOA) media. First, rational polynomials are adopted to estimate the exact pure qP-wave dispersion relations, which contain complicated pseudo-differential operators with irrational forms. The polynomial coefficients are produced by applying a linear optimization algorithm to minimize the difference between the expansion formula and the exact one. Then, the developed optimized PPEs are efficiently implemented using the finite-difference (FD) method in the time-space domain by introducing a scalar operator, which helps avoid the problems of spectral-based algorithms and other calculation burdens. The structures of the new equations are concise and the corresponding implementation processes are straightforward. Phase velocity analyses indicate that our proposed optimized equations lead to reliable approximation results. 3D synthetic examples demonstrate that our proposed FD-based PPEs can produce accurate and stable P-wave responses, and effectively describe the wavefield features in complicated TTI and TOA media.

Funding: National Key R&D Program of China (2021YFA0716902); National Natural Science Foundation of China (NSFC) under contract numbers 42374149 and 42004119; National Science and Technology Major Project (2024ZD1002907).
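The coefficient-fitting step described above, replacing an irrational operator with polynomial terms whose coefficients minimize the misfit to the exact relation, is a linear least-squares problem. The sketch below illustrates only that step on a stand-in irrational function; the paper's actual basis is a rational polynomial in the wavenumber components, and the sample range here is an arbitrary assumption.

```python
import numpy as np

def fit_poly_ls(f, xs, degree):
    """Least-squares polynomial coefficients approximating f on sample
    points xs -- the linear-optimization step used (in spirit) to replace
    an irrational pseudo-differential operator with polynomial terms."""
    A = np.vander(xs, degree + 1, increasing=True)   # columns [1, x, x^2, ...]
    coeffs, *_ = np.linalg.lstsq(A, f(xs), rcond=None)
    return coeffs

# Example: approximate sqrt(1 + x), the kind of irrational term that
# appears in exact qP dispersion relations (x standing in for a
# normalized anisotropy/wavenumber quantity).
xs = np.linspace(0.0, 0.5, 200)
c = fit_poly_ls(lambda x: np.sqrt(1.0 + x), xs, 3)
approx = np.vander(xs, 4, increasing=True) @ c
err = np.max(np.abs(approx - np.sqrt(1.0 + xs)))
assert err < 1e-4   # a cubic fit is already very accurate on this range
```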
To address the problem that the bit error rate (BER) of an asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication system is significantly affected by different turbulence intensities, a deep learning technique is applied to polar code decoding in the ACO-OFDM space optical communication system. This system realizes polar code decoding and signal demodulation without frequency conduction, with superior performance and robustness compared with the traditional decoder. Simulations under different turbulence intensities as well as different mapping orders show that a convolutional neural network (CNN) decoder trained under weak, medium, and strong turbulence atmospheric channels achieves a BER performance improvement of about two orders of magnitude (10^(2)) compared to the conventional decoder at 4-quadrature amplitude modulation (4QAM), and the BERs for both 16QAM and 64QAM fall between those of the conventional decoder.

Funding: National Natural Science Foundation of China (No. 12104141).
This paper introduces the failure modes that may occur in pressure vessels operating under high-temperature creep conditions. Considering the current state of engineering design, it identifies the technical bottleneck in China's current pressure vessel standards system when determining the allowable compressive stress under high-temperature creep conditions. On this basis, ASME Code Case 3029 is introduced, with a brief description of its scope of application, development history, background, and engineering significance. Taking an actual structure from an engineering design project as an example, the paper describes the application procedure of this method and the points requiring attention. Finally, in light of the practical needs of pressure vessel engineering design, an outlook is offered on the direction of future development or revision of China's standards system.
Quantum error correction is a technique that enhances a system's ability to combat noise by encoding logical information into additional quantum bits, and it plays a key role in building practical quantum computers. The XZZX surface code, with only one stabilizer generator on each face, demonstrates significant application potential under biased noise. However, the existing minimum weight perfect matching (MWPM) algorithm has high computational complexity and lacks flexibility in large-scale systems. Therefore, this paper proposes a decoding method that combines graph neural networks (GNN) with multi-classifiers: the syndrome is transformed into an undirected graph and the features are aggregated by convolutional layers, providing a more efficient and accurate decoding strategy. In the experiments, we evaluated the performance of the XZZX code under different biased noise conditions (bias = 1, 20, 200) and different code distances (d = 3, 5, 7, 9, 11). The experimental results show that under low-bias noise (bias = 1), the GNN decoder achieves a threshold of 0.18386, an improvement of approximately 19.12% compared to the MWPM decoder. Under high-bias noise (bias = 200), the GNN decoder reaches a threshold of 0.40542, improving by approximately 20.76% and overcoming the limitations of the conventional decoder. These results demonstrate that the GNN decoding method exhibits superior performance and has broad application potential in the error correction of the XZZX code.

Funding: Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049); Joint Fund of Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001); Key Research and Development Program of Shandong Province, China (Grant No. 2023CXGC010901).
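The first step of such a GNN decoder, turning a measured syndrome into an undirected graph, can be sketched simply: each flipped stabilizer becomes a node, and edges connect defects that are close on the lattice. The distance rule and threshold below are illustrative assumptions, not the paper's construction.

```python
def syndrome_to_graph(defects, max_dist=2):
    """Turn flipped-stabilizer coordinates into an undirected graph for a
    GNN decoder: one node per defect, weighted edges between nearby
    defects (a plausible input construction, not the paper's exact one)."""
    nodes = list(defects)
    edges = []
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            (r1, c1), (r2, c2) = nodes[i], nodes[j]
            d = abs(r1 - r2) + abs(c1 - c2)   # Manhattan distance on the lattice
            if d <= max_dist:
                edges.append((i, j, d))       # undirected edge with weight d
    return nodes, edges

# Two adjacent defects (likely a single error) plus one isolated defect.
nodes, edges = syndrome_to_graph([(0, 0), (0, 1), (3, 3)])
assert len(nodes) == 3 and edges == [(0, 1, 1)]
```

The GNN's convolutional layers then aggregate features over exactly these edges, which is why nearby defects, i.e. likely error chains, end up sharing information.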
Constructed from BCH component codes and their ordered statistics decoding (OSD), the successive cancellation list (SCL) decoding of U-UV structural codes can provide competent error-correction performance in the short-to-medium length regime. However, the list decoding complexity becomes formidable as the decoding output list size increases, which is primarily incurred by the OSD. Addressing this challenge, this paper proposes low-complexity SCL decoding through reducing the complexity of component code decoding and pruning redundant SCL decoding paths. For the former, an efficient skipping rule is introduced for the OSD so that higher-order decoding can be skipped when it cannot provide a more likely codeword candidate. It is further extended to the OSD variant, the box-and-match algorithm (BMA), to facilitate the component code decoding. Moreover, through estimating the correlation distance lower bounds (CDLBs) of the component code decoding outputs, a path-pruning (PP)-SCL decoding is proposed to further facilitate the decoding of U-UV codes. In particular, its integration with the improved OSD and BMA is discussed. Simulation results show that significant complexity reduction can be achieved. Consequently, the U-UV codes can outperform the cyclic redundancy check (CRC)-polar codes with a similar decoding complexity.
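The skipping idea can be sketched generically: order-i OSD reprocessing flips i positions of the most-reliable basis (MRB), so any order-i candidate pays at least the sum of the i smallest MRB reliabilities. If that lower bound already exceeds the metric of the best codeword found so far, order i and above cannot produce a better candidate. The paper's exact rule and metric may differ; this is only the generic principle.

```python
def skip_higher_order(reliabilities_mrb, best_metric, order):
    """Return True if order-`order` OSD reprocessing can be skipped:
    flipping `order` MRB bits costs at least the sum of the `order`
    smallest MRB reliabilities, so if that bound meets or exceeds the
    best metric found so far, no order-`order` candidate can win."""
    bound = sum(sorted(reliabilities_mrb)[:order])
    return bound >= best_metric

rel = [0.9, 1.4, 2.0, 3.1]                                  # |LLR|s of MRB positions
assert skip_higher_order(rel, best_metric=2.0, order=2)     # 0.9 + 1.4 >= 2.0: skip
assert not skip_higher_order(rel, best_metric=2.5, order=2) # 2.3 < 2.5: must run
```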
Space laser communication (SLC) is an emerging technology to support high-throughput data transmissions in space networks. In this paper, to guarantee the reliability of high-speed SLC links, we aim at a practical implementation of low-density parity-check (LDPC) decoding on resource-restricted space platforms. In particular, due to supply restrictions and the cost of high-speed on-board devices such as analog-to-digital converters (ADCs), the input of LDPC decoding will usually be constrained to hard-decision channel output. To tackle this challenge, density-evolution-based theoretical analysis is first performed to identify the cause of performance degradation in the conventional binary-initialized iterative decoding (BIID) algorithm. Then, a computation-efficient decoding algorithm named multiary-initialized iterative decoding with early termination (MIID-ET) is proposed, which improves the error-correcting performance and computation efficiency by using a reliability-based initialization method and a threshold-based decoding termination rule. Finally, numerical simulations are conducted on example codes of rates 7/8 and 1/2 to evaluate the performance of different LDPC decoding algorithms, where the proposed MIID-ET outperforms the BIID with a coding gain of 0.38 dB and a variable-node calculation saving of 37%. With this advantage, the proposed MIID-ET can notably reduce the LDPC decoder's hardware implementation complexity under the same bit error rate performance, which successfully doubles the total throughput to 10 Gbps on a single-chip FPGA.
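Two ingredients of the setting above, decoding from hard decisions only and terminating as soon as a valid codeword is found, can be demonstrated with a much simpler stand-in than MIID-ET: Gallager-style bit flipping with a zero-syndrome stopping rule. The (7,4) Hamming code below is just a tiny example matrix; the paper's codes, initialization, and threshold rule are different.

```python
import numpy as np

def bit_flip_decode(H, y, max_iter=20):
    """Hard-decision bit-flipping decoding with early termination: stop
    as soon as the syndrome is zero (a simple stand-in for the paper's
    MIID-ET, which instead initializes multiary soft values from hard
    decisions and terminates on a reliability threshold)."""
    x = y.copy()
    for it in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            return x, it                    # early termination: valid codeword
        votes = syndrome @ H                # unsatisfied checks touching each bit
        x = (x + (votes == votes.max())) % 2
    return x, max_iter

# (7,4) Hamming code parity-check matrix
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
codeword = np.zeros(7, dtype=int)
received = codeword.copy()
received[4] = 1                             # single hard-decision bit error
decoded, iters = bit_flip_decode(H, received)
assert (H @ decoded % 2 == 0).all() and iters <= 2
```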
To improve the decoding performance of quantum error-correcting codes in asymmetric noise channels, a neural network-based decoding algorithm for bias-tailored quantum codes is proposed. The algorithm consists of a biased noise model, a neural belief propagation decoder, a convolutional optimization layer, and a multi-objective loss function. The biased noise model simulates asymmetric error generation, providing a training dataset for decoding. The neural network, leveraging dynamic weight learning and a multi-objective loss function, mitigates error degeneracy. Additionally, the convolutional optimization layer enhances early-stage convergence efficiency. Numerical results show that for bias-tailored quantum codes, our decoder performs much better than belief propagation (BP) with ordered statistics decoding (BP+OSD). Our decoder achieves an order-of-magnitude improvement in error suppression compared to higher-order BP+OSD. Furthermore, the decoding threshold of our decoder for surface codes reaches a high threshold of 20%.
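A biased noise model of the kind used to generate such training data can be sketched with the standard parametrization for Z-biased noise, bias = p_Z / (p_X + p_Y) at total error probability p. This is the common convention for bias-tailored codes; the paper's training-set generator may add further structure.

```python
import random

def sample_biased_pauli(p, bias, rng):
    """Sample one single-qubit Pauli error from a Z-biased noise model:
    bias = p_Z / (p_X + p_Y), total error probability p. bias = 1 splits
    like depolarizing noise; bias >> 1 is dephasing-dominated."""
    p_z = p * bias / (bias + 1.0)
    p_x = p_y = p / (2.0 * (bias + 1.0))
    r = rng.random()
    if r < p_z:
        return "Z"
    if r < p_z + p_x:
        return "X"
    if r < p_z + p_x + p_y:
        return "Y"
    return "I"

counts = {"I": 0, "X": 0, "Y": 0, "Z": 0}
rng = random.Random(1)
for _ in range(100000):
    counts[sample_biased_pauli(0.1, 200, rng)] += 1
# At bias = 200, Z errors overwhelmingly dominate X and Y errors.
assert counts["Z"] > 50 * max(counts["X"], 1)
```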
Mobile communications are reaching into every aspect of our daily life, necessitating high-efficiency data transmission and support for diverse data types and communication scenarios. Polar codes have emerged as a promising solution due to their outstanding error-correction performance and low complexity. Unequal error protection (UEP) involves non-uniform error safeguarding for distinct data segments, achieving a fine balance between error resilience and resource allocation, ultimately enhancing system performance and efficiency. In this paper, we propose a novel class of UEP rateless polar codes. The codes are designed based on matrix extension of polar codes, and elegant mapping and duplication operations are designed to achieve the UEP property while preserving the overall performance of conventional polar codes. Superior UEP performance is attained without significant modifications to conventional polar codes, making it straightforward to maintain compatibility with existing polar codes. A theoretical analysis is conducted on the block error rate and throughput efficiency performance. To the best of our knowledge, this work provides the first theoretical performance analysis of UEP rateless polar codes. Simulation results show that the proposed codes significantly outperform existing polar coding schemes in both block error rate and throughput efficiency.
Compact size, high brightness, and a wide field of view (FOV) are key requirements for long-wave infrared imagers used in military surveillance or night navigation. However, to meet the imaging requirements of high resolution and wide FOV, infrared optical systems often adopt complex optical lens groups, which increase the size and weight of the optical system. In this paper, a strategy based on wavefront coding (WFC) is proposed to design a compact wide-FOV infrared imager. A cubic phase mask is inserted into the pupil plane of the infrared imager to correct the aberration. The simulated results show that the WFC infrared imager has good imaging quality over a wide FOV of ±16°. In addition, the WFC infrared imager achieves compactness with its 40 mm × 40 mm × 40 mm size. A fast focal ratio of 1 combined with an entrance pupil diameter of 25 mm ensures brightness. This work is of significance for designing compact wide-FOV infrared imagers.
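The effect of a cubic phase mask at the pupil plane can be sketched with a standard Fourier-optics model: the point spread function (PSF) is the squared magnitude of the Fourier transform of the pupil function multiplied by the mask phase phi(x, y) = alpha * (x^3 + y^3). The grid size and alpha below are illustrative values, not the paper's design parameters.

```python
import numpy as np

def psf_with_cubic_mask(n=128, alpha=20.0):
    """PSF of a circular pupil carrying a cubic phase mask
    phi(x, y) = alpha * (x^3 + y^3), the wavefront-coding element."""
    x = np.linspace(-1, 1, n)
    X, Y = np.meshgrid(x, x)
    pupil = (X**2 + Y**2 <= 1.0).astype(float)       # circular aperture
    phase = alpha * (X**3 + Y**3)                    # cubic phase mask
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()                           # normalize total energy

sharp = psf_with_cubic_mask(alpha=0.0)   # no mask: diffraction-limited spot
coded = psf_with_cubic_mask(alpha=20.0)  # mask spreads the PSF
# The mask lowers the PSF peak, trading raw sharpness for a PSF that is
# nearly defocus-invariant and can be restored digitally.
assert coded.max() < sharp.max()
```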
Differential pulse-position modulation (DPPM) can achieve a good compromise between power and bandwidth requirements. However, the output sequence has undetectable insertions and deletions. This paper proposes a successive cancellation (SC) decoding scheme based on the weighted Levenshtein distance (WLD) of polar codes for correcting insertions/deletions in DPPM systems. In this method, the WLD is used to calculate the transfer probabilities recursively to obtain likelihood ratios, and a low-complexity SC decoding method is built according to the error characteristics to match the DPPM system. Additionally, the proposed SC decoding scheme is extended to list decoding, which can further improve error-correction performance. Simulation results show that the proposed scheme can effectively correct insertions/deletions in the DPPM system, enhancing its reliability and performance.
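A weighted Levenshtein distance itself is a standard dynamic program: the minimum total cost of insertions, deletions, and substitutions turning one sequence into another. The sketch below shows the distance computation only; the decoder's recursive transfer-probability calculation builds on such a distance, and the weights here are arbitrary examples.

```python
def weighted_levenshtein(a, b, w_ins=1.0, w_del=1.0, w_sub=1.5):
    """Weighted Levenshtein distance between sequences a and b via the
    classic O(len(a)*len(b)) dynamic program."""
    m, n = len(a), len(b)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * w_del                 # delete all of a's prefix
    for j in range(1, n + 1):
        D[0][j] = j * w_ins                 # insert all of b's prefix
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = D[i-1][j-1] + (0.0 if a[i-1] == b[j-1] else w_sub)
            D[i][j] = min(D[i-1][j] + w_del,   # delete a[i-1]
                          D[i][j-1] + w_ins,   # insert b[j-1]
                          sub)                 # match or substitute
    return D[m][n]

assert weighted_levenshtein("1011", "1011") == 0.0
assert weighted_levenshtein("101", "1011") == 1.0   # one insertion
assert weighted_levenshtein("1011", "101") == 1.0   # one deletion
assert weighted_levenshtein("1011", "1001") == 1.5  # one substitution
```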
As artificial intelligence (AI) continues to expand exponentially, particularly with the emergence of generative pre-trained transformers (GPT) based on the transformer architecture, data processing has been revolutionized and significant improvements have been enabled in various applications. This document investigates security vulnerability detection in source code using a range of large language models (LLMs). Our primary objective is to evaluate the effectiveness of Static Application Security Testing (SAST) by applying various techniques such as prompt personas, structured outputs, and zero-shot prompting. The selected LLMs (CodeLlama 7B, DeepSeek Coder 7B, Gemini 1.5 Flash, Gemini 2.0 Flash, Mistral 7B Instruct, Phi 38b Mini 128K Instruct, Qwen 2.5 Coder, StartCoder 27B) are compared with, and combined with, Find Security Bugs. The evaluation method involves a selected dataset containing vulnerabilities, with the results providing insights for different scenarios according to software criticality (business critical, non-critical, minimum effort, best effort). In detail, the main objectives of this study are to investigate whether large language models outperform or exceed the capabilities of traditional static analysis tools, whether combining LLMs with SAST tools leads to an improvement, and whether local machine learning models on an ordinary computer can produce reliable results. Summarizing the most important conclusions of the research: while results improve with LLM size for business-critical software, the best results were obtained by SAST analysis. This differs in the "Non-Critical," "Best Effort," and "Minimum Effort" scenarios, where the combination of LLM (Gemini) + SAST obtained better results.
Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incur high computational costs and offer limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since the token IDs of instructions are discrete and non-differentiable, we compute gradients in the continuous embedding space to evaluate the influence of each token. The most critical tokens are identified by calculating the L2 norm of their embedding gradients. We then establish a mapping between instructions and their corresponding tokens to aggregate token-level importance into instruction-level significance. To maximize adversarial impact, a sliding-window algorithm selects the most influential contiguous segments for relocation, ensuring optimal perturbation with minimal length. This approach efficiently locates critical code regions without expensive search operations. The selected segments are relocated outside their original function boundaries via a jump mechanism, which preserves runtime control flow and functionality while introducing "deletion" effects in the static instruction sequence. Extensive experiments show that AIMA reduces similarity scores by up to 35.8% in state-of-the-art BCSD models. When incorporated into training data, it also enhances model robustness, achieving a 5.9% improvement in AUROC.
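The segment-selection step above, picking the most influential contiguous run of instructions once token gradients have been aggregated to instruction level, reduces to a maximum-sum sliding window. The sketch below shows only that step; the scores are made-up numbers standing in for aggregated embedding-gradient L2 norms.

```python
def best_window(instr_scores, window):
    """Return (start, total) of the contiguous run of `window`
    instructions with the largest total importance, via an O(n)
    sliding-window sum update."""
    cur = sum(instr_scores[:window])
    best_start, best_sum = 0, cur
    for s in range(1, len(instr_scores) - window + 1):
        # slide right: add the entering score, drop the leaving one
        cur += instr_scores[s + window - 1] - instr_scores[s - 1]
        if cur > best_sum:
            best_start, best_sum = s, cur
    return best_start, best_sum

scores = [0.1, 0.8, 0.9, 0.2, 0.05, 0.3]   # per-instruction importance
start, total = best_window(scores, 2)
assert start == 1 and abs(total - 1.7) < 1e-12
```

The chosen segment (here instructions 1-2) is what the attack would relocate outside the function body via a jump.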
The ultracold neutron (UCN) transport code MCUCN, designed initially for simulating UCN transport from a solid deuterium (SD_2) source and for neutron electric dipole moment experiments, could not accurately simulate UCN storage and transport in a superfluid ^(4)He (SFHe, He-II) source. This limitation arose from the absence of a ^(4)He upscattering mechanism and of ^(3)He absorption, and because the source energy distribution provided in MCUCN differs from that of an SFHe source. This study introduced enhancements to MCUCN to address these constraints, explicitly incorporating the ^(4)He upscattering effect, the absorption of ^(3)He, the loss caused by impurities on the converter wall, the UCN source energy distribution in SFHe, and the transmission through a negative optical potential. Additionally, a Python-based visualization code for intermediate states and results was developed. To validate these enhancements, we systematically compared the simulation results of the Lujan Center Mark3 UCN system obtained by MCUCN and the improved MCUCN code (iMCUCN) with UCNtransport simulations. We also compared the results of the SUN1 system simulated by MCUCN and iMCUCN with measurement results. The study demonstrates that iMCUCN effectively simulates the storage and transport of ultracold neutrons in He-II.
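The loss channels being added can be combined in the usual way: independent loss rates add, and the stored population decays exponentially with the total rate. The sketch below uses the often-quoted T^7 scaling for phonon upscattering in He-II; the prefactor `B_up` and the ^(3)He-capture scale are placeholder magnitudes, not the values implemented in iMCUCN.

```python
import math

def ucn_survival(t, temperature_K, x3he=1e-10,
                 tau_beta=880.0, B_up=0.008, cap_scale=1e7):
    """Survival fraction of stored UCN in superfluid He-II, combining
    beta decay, phonon upscattering (rate ~ B_up * T^7, a commonly quoted
    scaling), and 3He absorption (rate proportional to the 3He fraction).
    B_up and cap_scale are illustrative placeholders, not fitted values."""
    rate = 1.0 / tau_beta                 # free-neutron beta decay
    rate += B_up * temperature_K ** 7     # He-II phonon upscattering
    rate += x3he * cap_scale              # 3He capture, linear in purity
    return math.exp(-rate * t)

# Cooling from 1.2 K to 0.8 K suppresses upscattering by (1.2/0.8)^7 ~ 17x,
# so the stored fraction after 100 s improves markedly.
assert ucn_survival(100.0, 0.8) > ucn_survival(100.0, 1.2)
```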
Blind recognition of low-density parity-check (LDPC) codes has gradually attracted more attention with the development of military and civil communications. However, for parity-check matrices with relatively high row weights, the existing blind recognition algorithms based on a candidate set generally perform worse. In this paper, we propose a blind recognition method for LDPC codes, called the tangent-function-assisted least square (TLS) method, which improves recognition performance by constructing a new cost function. To characterize the degree of constraint between received vectors and parity-check vectors, a feature function based on the tangent function is constructed in the proposed algorithm. A cost function based on the least square method is also established according to the feature function values satisfying the parity-check relationship. Moreover, the minimum average value in TLS is obtained on the candidate set. Numerical analysis and simulation results show that the recognition performance of the TLS algorithm is consistent with theoretical results. Compared with existing algorithms, the proposed method possesses better recognition performance.
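The underlying signal such a method exploits can be sketched as follows: a true parity-check vector is satisfied by noisy received words far more often than chance (fraction f -> 1 under light noise, f -> 1/2 for an unrelated vector), and a tangent-based feature stretches that gap. The feature below is only loosely modeled on the abstract's description; the paper's exact feature and least-squares cost are not reproduced here.

```python
import numpy as np

def check_score(h, received, eps=1e-6):
    """Score a candidate parity-check vector h against hard-decision
    received words: f is the fraction of words satisfying h, and a
    tangent of the excess over 1/2 amplifies near-certain checks."""
    f = np.mean((received @ h % 2) == 0)
    f = min(f, 1.0 - eps)                        # keep tan() finite
    return np.tan(np.pi * max(f - 0.5, 0.0) / 2)

# Words from a code whose true check is x0 + x1 = 0 (mod 2); the last
# word carries a bit error that violates the check.
received = np.array([[0, 0, 1, 0, 1, 1],
                     [1, 1, 0, 1, 0, 0],
                     [1, 1, 1, 1, 1, 0],
                     [0, 0, 0, 1, 1, 1],
                     [1, 0, 1, 0, 0, 1]])
true_check = np.array([1, 1, 0, 0, 0, 0])        # satisfied by 4 of 5 words
other = np.array([0, 0, 1, 1, 0, 0])             # satisfied by only 1 of 5
assert check_score(true_check, received) > check_score(other, received)
```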
Large language models (LLMs) are increasingly applied in the field of code translation. However, existing evaluation methodologies suffer from two major limitations: (1) the high overlap between test data and pretraining corpora, which introduces significant bias in performance evaluation; and (2) mainstream metrics focus primarily on surface-level accuracy, failing to uncover the underlying factors that constrain model capabilities. To address these issues, this paper presents TCode (Translation-Oriented Code Evaluation benchmark), a complexity-controllable, contamination-free benchmark dataset for code translation, alongside a dedicated static feature sensitivity evaluation framework. The dataset is carefully designed to control complexity along multiple dimensions, including syntactic nesting and expression intricacy, enabling both broad coverage and fine-grained differentiation of sample difficulty. This design supports precise evaluation of model capabilities across a wide spectrum of translation challenges. The proposed evaluation framework introduces a correlation-driven analysis mechanism based on static program features, enabling predictive modeling of translation success from two perspectives: Code Form Complexity (e.g., code length and character density) and Semantic Modeling Complexity (e.g., syntactic depth, control-flow nesting, and type system complexity). Empirical evaluations across representative LLMs, including Qwen2.5-72B and Llama3.3-70B, demonstrate that even state-of-the-art models achieve over 80% compilation success on simple samples, but their accuracy drops sharply below 40% on complex cases. Further correlation analysis indicates that Semantic Modeling Complexity alone is correlated with up to 60% of the variance in translation success, with static program features exhibiting nonlinear threshold effects that highlight clear capability boundaries. This study departs from the traditional accuracy-centric evaluation paradigm and, for the first time, systematically characterizes the capabilities of large language models in translation tasks through the lens of program static features. The findings provide actionable insights for model refinement and training strategy development.
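Static features such as syntactic depth and control-flow nesting are straightforward to compute from a parse tree. The sketch below does so for Python with the stdlib `ast` module; TCode itself targets code translation pairs and its feature definitions may differ, so the names here are only our shorthand.

```python
import ast

def static_features(source):
    """Compute two stand-ins for the abstract's Semantic Modeling
    Complexity features (syntactic depth, control-flow nesting) plus a
    Code Form Complexity feature (code length) for Python source."""
    tree = ast.parse(source)

    def depth(node):
        # longest root-to-leaf path in the AST
        children = list(ast.iter_child_nodes(node))
        return 1 + max((depth(c) for c in children), default=0)

    control = (ast.If, ast.For, ast.While, ast.Try, ast.With)

    def nesting(node, cur=0):
        # deepest stack of enclosing control-flow constructs
        inc = cur + isinstance(node, control)
        return max([inc] + [nesting(c, inc) for c in ast.iter_child_nodes(node)])

    return {"syntactic_depth": depth(tree),
            "control_flow_nesting": nesting(tree),
            "code_length": len(source)}

simple = static_features("x = 1")
nested = static_features("for i in range(3):\n    if i:\n        x = i")
assert nested["control_flow_nesting"] == 2 > simple["control_flow_nesting"]
assert nested["syntactic_depth"] > simple["syntactic_depth"]
```

Correlating such features with per-sample translation success is then an ordinary regression problem, which is the shape of the framework's "correlation-driven analysis".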
Abstract: Seismic anisotropy has been extensively acknowledged as a crucial factor influencing wave propagation characteristics during wavefield simulation, inversion, and imaging. Transverse isotropy (TI) and orthorhombic anisotropy (OA) are two typical categories of anisotropic media in exploration geophysics. Compared with the elastic wave equations in TI and OA media, pseudo-acoustic wave equations (PWEs) based on the acoustic assumption can markedly reduce computational cost and complexity. However, the presently available PWEs may suffer SV-wave contamination and instability when the anisotropic parameters cannot satisfy the approximation condition. Exploiting pure-mode wave equations can effectively resolve these issues and generate pure P-wave events without artifacts. To further improve computational accuracy and efficiency, we develop two novel pure qP-wave equations (PPEs) and illustrate the corresponding numerical solutions in the time-space domain for 3D tilted TI (TTI) and tilted OA (TOA) media. First, rational polynomials are adopted to approximate the exact pure qP-wave dispersion relations, which contain complicated pseudo-differential operators with irrational forms. The polynomial coefficients are produced by applying a linear optimization algorithm that minimizes the difference between the expansion formula and the exact one. Then, the optimized PPEs are efficiently implemented using the finite-difference (FD) method in the time-space domain by introducing a scalar operator, which avoids the drawbacks of spectral-based algorithms and other computational burdens. The structures of the new equations are concise, and the corresponding implementation processes are straightforward. Phase velocity analyses indicate that the proposed optimized equations yield reliable approximation results. 3D synthetic examples demonstrate that the proposed FD-based PPEs can produce accurate and stable P-wave responses and effectively describe the wavefield features in complicated TTI and TOA media.
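The linear-optimization step described above (fitting polynomial coefficients to an irrational dispersion term) can be sketched with an ordinary least-squares fit. The target function `sqrt(1 + x)` is a stand-in for the paper's more involved pseudo-differential operator, and the sample interval is an assumption chosen only for illustration.

```python
import numpy as np

def fit_poly_coeffs(exact, degree, xs):
    """Least-squares polynomial fit to an irrational target function.

    Illustrates the linear-optimization step described in the abstract:
    the irrational part of the dispersion relation is replaced by a
    polynomial whose coefficients minimize the misfit over sample points.
    """
    # Vandermonde design matrix: columns are x^0, x^1, ..., x^degree
    A = np.vander(xs, degree + 1, increasing=True)
    coeffs, *_ = np.linalg.lstsq(A, exact(xs), rcond=None)
    return coeffs

# Stand-in irrational target (the actual qP relation has a different form)
xs = np.linspace(0.0, 0.3, 200)
coeffs = fit_poly_coeffs(lambda x: np.sqrt(1.0 + x), degree=3, xs=xs)
approx = np.polyval(coeffs[::-1], xs)
max_err = np.max(np.abs(approx - np.sqrt(1.0 + xs)))
```

Because the objective is linear in the coefficients, the fit reduces to a single `lstsq` solve, which is the appeal of the approach over nonlinear optimization.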
Funding: Supported by the National Natural Science Foundation of China (No. 12104141).
Abstract: Aiming at the problem that the bit error rate (BER) of asymmetrically clipped optical orthogonal frequency division multiplexing (ACO-OFDM) space optical communication systems is significantly affected by different turbulence intensities, a deep learning technique is applied to polarization code decoding in the ACO-OFDM space optical communication system. The system realizes polarization code decoding and signal demodulation without frequency conduction, with superior performance and robustness compared with the traditional decoder. Simulations under different turbulence intensities as well as different mapping orders show that the convolutional neural network (CNN) decoder trained under weak, medium, and strong turbulence atmospheric channels achieves a performance improvement of about two orders of magnitude over the conventional decoder at 4-quadrature amplitude modulation (4QAM), and the BERs for both 16QAM and 64QAM lie between those of the conventional decoder.
Abstract: This paper introduces the failure modes that may occur in pressure vessels operating under high-temperature creep conditions. In light of current engineering design practice, it identifies the technical bottleneck in China's current pressure vessel standard system when determining the allowable compressive stress under high-temperature creep conditions. On this basis, ASME Code Case 3029 is introduced, with a brief description of its scope of application, development history, background, and engineering significance. Taking an actual structure from an engineering design project as an example, the paper describes the procedure for applying this method and the points requiring attention. Finally, combined with the practical needs of pressure vessel engineering design, prospects are offered for the next steps in formulating or revising China's standard system.
Funding: Supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049); the Joint Fund of the Natural Science Foundation of Shandong Province, China (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001); and the Key Research and Development Program of Shandong Province, China (Grant No. 2023CXGC010901).
Abstract: Quantum error correction is a technique that enhances a system's ability to combat noise by encoding logical information into additional quantum bits, and it plays a key role in building practical quantum computers. The XZZX surface code, with only one stabilizer generator on each face, demonstrates significant application potential under biased noise. However, the existing minimum weight perfect matching (MWPM) algorithm has high computational complexity and lacks flexibility in large-scale systems. This paper therefore proposes a decoding method that combines graph neural networks (GNNs) with multi-classifiers: the syndrome is transformed into an undirected graph, and its features are aggregated by convolutional layers, providing a more efficient and accurate decoding strategy. In the experiments, we evaluated the performance of the XZZX code under different biased noise conditions (bias = 1, 20, 200) and different code distances (d = 3, 5, 7, 9, 11). The results show that under low-bias noise (bias = 1) the GNN decoder achieves a threshold of 0.18386, an improvement of approximately 19.12% over the MWPM decoder. Under high-bias noise (bias = 200) the GNN decoder reaches a threshold of 0.40542, an improvement of approximately 20.76%, overcoming the limitations of the conventional decoder. These results demonstrate that the GNN decoding method exhibits superior performance and has broad application potential for error correction of the XZZX code.
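The first step of the pipeline above, turning a syndrome into an undirected graph, can be sketched without any neural-network machinery. The lattice layout, Manhattan edge weights, and complete-graph connectivity here are illustrative assumptions, not the paper's exact encoding.

```python
import itertools

def syndrome_to_graph(syndrome):
    """Convert a 2D syndrome bitmap into an undirected graph.

    Nodes are the coordinates of flagged stabilizers; every pair of
    defects is joined by an edge weighted by Manhattan distance, a common
    input representation for graph-based decoders (the layout and edge
    features are illustrative, not the paper's exact choice).
    """
    defects = [(r, c) for r, row in enumerate(syndrome)
               for c, bit in enumerate(row) if bit]
    edges = {}
    for a, b in itertools.combinations(defects, 2):
        edges[(a, b)] = abs(a[0] - b[0]) + abs(a[1] - b[1])
    return defects, edges

# A 4x4 syndrome with two flagged stabilizers
syn = [[0, 0, 0, 0],
       [0, 1, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 1, 0]]
nodes, edges = syndrome_to_graph(syn)
```

A GNN decoder would then run message passing over this graph, whereas MWPM would solve a matching problem on the same edge weights; the shared graph construction is what makes the two approaches directly comparable.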
Funding: Supported by the National Natural Science Foundation of China (NSFC) under project ID 62071498 and the Guangdong National Science Foundation (GDNSF) under project ID 2024A1515010213.
Abstract: Constituted by BCH component codes and their ordered statistics decoding (OSD), the successive cancellation list (SCL) decoding of U-UV structural codes can provide competent error-correction performance in the short-to-medium blocklength regime. However, the list decoding complexity becomes formidable as the decoding output list size increases, which is primarily incurred by the OSD. Addressing this challenge, this paper proposes low-complexity SCL decoding that reduces the complexity of component code decoding and prunes redundant SCL decoding paths. For the former, an efficient skipping rule is introduced for the OSD so that higher-order decoding can be skipped when it cannot provide a more likely codeword candidate. The rule is further extended to an OSD variant, the box-and-match algorithm (BMA), to facilitate component code decoding. Moreover, by estimating the correlation distance lower bounds (CDLBs) of the component code decoding outputs, a path-pruning (PP)-SCL decoding is proposed to further facilitate the decoding of U-UV codes; in particular, its integration with the improved OSD and BMA is discussed. Simulation results show that significant complexity reduction can be achieved. Consequently, the U-UV codes can outperform cyclic redundancy check (CRC)-polar codes with similar decoding complexity.
Funding: Supported by the National Key R&D Program of China (Grant No. 2022YFA1005000) and the National Natural Science Foundation of China (Grant Nos. 62101308 and 62025110).
Abstract: Space laser communication (SLC) is an emerging technology for high-throughput data transmission in space networks. In this paper, to guarantee the reliability of high-speed SLC links, we aim at practical implementation of low-density parity-check (LDPC) decoding on resource-restricted space platforms. In particular, due to the supply restrictions and cost of high-speed on-board devices such as analog-to-digital converters (ADCs), the input of LDPC decoding is usually constrained to hard-decision channel output. To tackle this challenge, density-evolution-based theoretical analysis is first performed to identify the cause of performance degradation in the conventional binary-initialized iterative decoding (BIID) algorithm. Then, a computation-efficient decoding algorithm named multiary-initialized iterative decoding with early termination (MIID-ET) is proposed, which improves error-correcting performance and computation efficiency through a reliability-based initialization method and a threshold-based decoding termination rule. Finally, numerical simulations are conducted on example codes of rates 7/8 and 1/2 to evaluate different LDPC decoding algorithms; the proposed MIID-ET outperforms the BIID with a coding gain of 0.38 dB and a variable-node calculation saving of 37%. With this advantage, the proposed MIID-ET can notably reduce the hardware implementation complexity of the LDPC decoder at the same bit error rate performance, which doubles the total throughput to 10 Gbps on a single-chip FPGA.
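Two ingredients the abstract builds on, iterating on hard-decision input and terminating as soon as the syndrome is satisfied, can be illustrated with a minimal Gallager-style bit-flipping decoder. This is emphatically not the paper's MIID-ET algorithm (which uses multiary reliability-based initialization); it is a textbook sketch of hard-decision decoding with early termination.

```python
import numpy as np

def bit_flip_decode(H, y, max_iters=20):
    """Minimal hard-decision bit-flipping decoder with early termination.

    Not the paper's MIID-ET; a sketch of two ideas it builds on:
    iterating on hard-decision channel output and stopping as soon as
    the syndrome H x^T = 0 (mod 2) is satisfied.
    """
    x = y.copy()
    for it in range(max_iters):
        syndrome = H.dot(x) % 2
        if not syndrome.any():          # early termination: valid codeword
            return x, it
        # flip the bit(s) involved in the most unsatisfied checks
        counts = H.T.dot(syndrome)
        x = (x + (counts == counts.max())) % 2
    return x, max_iters

# (7,4) Hamming parity-check matrix as a toy stand-in for an LDPC code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
codeword = np.zeros(7, dtype=int)
received = codeword.copy()
received[2] = 1                         # single hard-decision bit error
decoded, iters = bit_flip_decode(H, received)
```

The early-termination check is what saves variable-node computation when the channel is good, which is the same lever the MIID-ET termination rule exploits.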
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 62371240, 61802175, 62401266, and 12201300); the National Key R&D Program of China (Grant No. 2022YFB3103800); the Natural Science Foundation of Jiangsu Province (Grant No. BK20241452); the Fundamental Research Funds for the Central Universities (Grant No. 30923011014); and the fund of the Laboratory for Advanced Computing and Intelligence Engineering (Grant No. 2023-LYJJ-01-009).
Abstract: To improve the decoding performance of quantum error-correcting codes in asymmetric noise channels, a neural network-based decoding algorithm for bias-tailored quantum codes is proposed. The algorithm consists of a biased noise model, a neural belief propagation decoder, a convolutional optimization layer, and a multi-objective loss function. The biased noise model simulates asymmetric error generation, providing a training dataset for decoding. The neural network, leveraging dynamic weight learning and the multi-objective loss function, mitigates error degeneracy, and the convolutional optimization layer enhances early-stage convergence efficiency. Numerical results show that for bias-tailored quantum codes, the proposed decoder performs much better than belief propagation with ordered statistics decoding (BP+OSD), achieving an order-of-magnitude improvement in error suppression compared with higher-order BP+OSD. Furthermore, the decoding threshold of the proposed decoder for surface codes reaches a high value of 20%.
Funding: Supported by the National Natural Science Foundation of China (No. 62301008); the China Postdoctoral Science Foundation (No. 2022M720272); and the New Cornerstone Science Foundation through the XPLORER PRIZE.
Abstract: Mobile communications now reach every aspect of daily life, necessitating high-efficiency data transmission and support for diverse data types and communication scenarios. Polar codes have emerged as a promising solution due to their outstanding error-correction performance and low complexity. Unequal error protection (UEP) provides non-uniform error safeguarding for distinct data segments, achieving a fine balance between error resilience and resource allocation and ultimately enhancing system performance and efficiency. In this paper, we propose a novel class of UEP rateless polar codes. The codes are designed based on a matrix extension of polar codes, and elegant mapping and duplication operations are designed to achieve the UEP property while preserving the overall performance of conventional polar codes. Superior UEP performance is attained without significant modifications to conventional polar codes, making the scheme straightforwardly compatible with existing polar codes. A theoretical analysis of the block error rate and throughput efficiency is conducted; to the best of our knowledge, this work provides the first theoretical performance analysis of UEP rateless polar codes. Simulation results show that the proposed codes significantly outperform existing polar coding schemes in both block error rate and throughput efficiency.
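As background for the "matrix extension of polar codes" mentioned above, the standard polar transform x = u · F^⊗n over GF(2), with kernel F = [[1,0],[1,1]], can be sketched recursively. This is the well-known base construction only; the paper's extended matrix and its mapping/duplication operations are not reproduced here.

```python
import numpy as np

def polar_transform(u):
    """Standard polar encoding x = u . F^{(x)n} over GF(2), F = [[1,0],[1,1]].

    Uses the butterfly recursion: for u = [u1, u2],
    x = [T(u1 XOR u2), T(u2)], which equals u times kron(F, F^{(x)(n-1)}).
    Background only; the paper's matrix extension is not shown.
    """
    n = len(u)
    if n == 1:
        return u.copy()
    top = polar_transform((u[: n // 2] + u[n // 2:]) % 2)
    bot = polar_transform(u[n // 2:])
    return np.concatenate([top, bot])

u = np.array([1, 0, 1, 1])
x = polar_transform(u)
```

The recursion runs in O(N log N) operations, which is the complexity advantage over a direct N x N matrix multiplication and one reason polar codes remain attractive at scale.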
Abstract: Compact size, high brightness, and a wide field of view (FOV) are key requirements for long-wave infrared imagers used in military surveillance and night navigation. However, to meet the imaging requirements of high resolution and wide FOV, infrared optical systems often adopt complex optical lens groups, which increase the size and weight of the system. In this paper, a strategy based on wavefront coding (WFC) is proposed to design a compact wide-FOV infrared imager. A cubic phase mask is inserted into the pupil plane of the imager to correct the aberration. Simulated results show that the WFC infrared imager has good imaging quality over a wide FOV of ±16°. In addition, the imager achieves compactness with its 40 mm × 40 mm × 40 mm size, and a fast focal ratio of 1 combined with an entrance pupil diameter of 25 mm ensures brightness. This work is of significance for designing compact wide-FOV infrared imagers.
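The cubic phase mask mentioned above has the classic wavefront-coding profile φ(x, y) = α(x³ + y³) over the normalized pupil. A minimal sketch of the phase surface follows; the strength α and grid size are placeholder values, since the design value is chosen during optical optimization and is not given in the abstract.

```python
import numpy as np

def cubic_phase_mask(n, alpha):
    """Cubic phase profile phi(x, y) = alpha * (x^3 + y^3) on an n x n pupil.

    The cubic mask is the classic wavefront-coding element; alpha (a
    placeholder value here) trades extended depth of field against
    decoding noise and is fixed during optical design.
    """
    x = np.linspace(-1.0, 1.0, n)       # normalized pupil coordinates
    X, Y = np.meshgrid(x, x)
    return alpha * (X**3 + Y**3)

phi = cubic_phase_mask(5, alpha=10.0)
```

The profile is point-antisymmetric about the pupil center (φ(x, y) = -φ(-x, -y)), which is what makes the resulting point spread function nearly invariant to defocus before digital decoding.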
Funding: Supported by the National Natural Science Foundation of China (No. 61801327).
Abstract: Differential pulse-position modulation (DPPM) can achieve a good compromise between power and bandwidth requirements. However, its output sequence suffers undetectable insertions and deletions. This paper proposes a successive cancellation (SC) decoding scheme for polar codes based on the weighted Levenshtein distance (WLD) to correct insertions/deletions in DPPM systems. In this method, the WLD is used to calculate the transfer probabilities recursively to obtain likelihood ratios, and a low-complexity SC decoding method is built according to the error characteristics to match the DPPM system. Additionally, the proposed SC decoding scheme is extended to list decoding, which further improves error-correction performance. Simulation results show that the proposed scheme can effectively correct insertions/deletions in the DPPM system, enhancing its reliability and performance.
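The weighted Levenshtein distance at the core of the scheme above is a standard dynamic program in which insertions, deletions, and substitutions carry separate costs. The weights below are illustrative; the paper derives them from the DPPM error characteristics to turn distances into transfer probabilities.

```python
def weighted_levenshtein(a, b, w_ins=1.0, w_del=1.0, w_sub=1.0):
    """Weighted Levenshtein distance via dynamic programming.

    The per-edit weights are illustrative placeholders; the paper derives
    them from the DPPM insertion/deletion statistics so the distance can
    feed recursive transfer-probability computation in SC decoding.
    """
    m, n = len(a), len(b)
    # D[i][j] = minimum cost of editing a[:i] into b[:j]
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * w_del
    for j in range(1, n + 1):
        D[0][j] = j * w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else w_sub
            D[i][j] = min(D[i - 1][j] + w_del,      # delete a[i-1]
                          D[i][j - 1] + w_ins,      # insert b[j-1]
                          D[i - 1][j - 1] + sub)    # match / substitute
    return D[m][n]

# one cheap deletion turns "10110" into "1010"
d = weighted_levenshtein("10110", "1010", w_ins=2.0, w_del=0.5)
```

Making deletions cheaper than insertions, as in the example call, biases the decoder toward the error type the channel actually produces.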
Abstract: Artificial intelligence (AI) continues to expand rapidly, particularly with the emergence of generative pre-trained transformers (GPTs) based on the transformer architecture, which have revolutionized data processing and enabled significant improvements in various applications. This study investigates security vulnerability detection in source code using a range of large language models (LLMs). Our primary objective is to evaluate their effectiveness relative to Static Application Security Testing (SAST) by applying techniques such as prompt personas, structured outputs, and zero-shot prompting. The selected LLMs (CodeLlama 7B, DeepSeek Coder 7B, Gemini 1.5 Flash, Gemini 2.0 Flash, Mistral 7B Instruct, Phi 3 8b Mini 128K Instruct, Qwen 2.5 Coder, StartCoder 27B) are compared with, and combined with, Find Security Bugs. The evaluation uses a selected dataset containing vulnerabilities, and the results provide insights for different scenarios according to software criticality (business critical, non-critical, minimum effort, best effort). In detail, the main objectives are to investigate whether large language models outperform traditional static analysis tools, whether combining LLMs with SAST tools leads to an improvement, and whether local machine learning models on an ordinary computer produce reliable results. Summarizing the most important conclusions: while results improve with LLM size, for business-critical software the best results were obtained by SAST analysis. This differs in the "non-critical," "best effort," and "minimum effort" scenarios, where the combination of an LLM (Gemini) with SAST obtained better results.
Funding: Supported by the Key Laboratory of Cyberspace Security, Ministry of Education, China.
Abstract: Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and thereby protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incurs high computational cost and offers limited perturbation diversity. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since the token IDs of instructions are discrete and non-differentiable, we compute gradients in the continuous embedding space to evaluate the influence of each token; the most critical tokens are identified by the L2 norm of their embedding gradients. We then establish a mapping between instructions and their corresponding tokens to aggregate token-level importance into instruction-level significance. To maximize adversarial impact, a sliding-window algorithm selects the most influential contiguous segments for relocation, ensuring optimal perturbation with minimal length. This approach efficiently locates critical code regions without expensive search operations. The selected segments are relocated outside their original function boundaries via a jump mechanism, which preserves runtime control flow and functionality while introducing "deletion" effects in the static instruction sequence. Extensive experiments show that AIMA reduces similarity scores by up to 35.8% in state-of-the-art BCSD models. When incorporated into training data, it also enhances model robustness, achieving a 5.9% improvement in AUROC.
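Two steps described above, ranking tokens by the L2 norm of their embedding gradients and selecting the most influential contiguous segment with a sliding window, can be sketched with plain arrays. The gradient values below are synthetic stand-ins, since real embedding gradients require backpropagating through the victim BCSD model.

```python
import numpy as np

def rank_tokens(embed_grads):
    """Token importance as the L2 norm of each token's embedding gradient."""
    return np.linalg.norm(embed_grads, axis=1)

def best_window(importance, width):
    """Sliding window: contiguous segment with the largest total importance.

    Mirrors the segment-selection step from the abstract; in AIMA the
    selected segment is then relocated outside the function via a jump.
    """
    sums = [importance[i:i + width].sum()
            for i in range(len(importance) - width + 1)]
    start = int(np.argmax(sums))
    return start, start + width

# synthetic gradients for 6 tokens in a 4-dim embedding space
grads = np.array([[0.1, 0.0, 0.0, 0.0],
                  [0.2, 0.1, 0.0, 0.0],
                  [1.0, 1.0, 0.5, 0.0],
                  [0.9, 0.8, 0.2, 0.1],
                  [0.1, 0.0, 0.1, 0.0],
                  [0.0, 0.0, 0.0, 0.05]])
imp = rank_tokens(grads)
start, end = best_window(imp, width=2)   # picks the two high-gradient tokens
```

Scanning windows over precomputed norms costs O(N) per width, which is why the abstract can claim segment location "without expensive search operations."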
Funding: Supported by the National Key R&D Program of China (No. 2024YFE0110001); the National Natural Science Foundation of China (U1932219); and the Mobility Programme endorsed by the Joint Committee of the Sino-German Center (M0728).
Abstract: The ultracold neutron (UCN) transport code MCUCN, designed initially for simulating UCN transport from a solid deuterium (SD₂) source and for neutron electric dipole moment experiments, could not accurately simulate UCN storage and transport in a superfluid ⁴He (SFHe, He-II) source. This limitation arose from the absence of a ⁴He upscattering mechanism and of ³He absorption, and the source energy distribution provided in MCUCN differs from that of an SFHe source. This study introduced enhancements to MCUCN to address these constraints, explicitly incorporating the ⁴He upscattering effect, ³He absorption, the loss caused by impurities on the converter wall, the UCN source energy distribution in SFHe, and transmission through a negative optical potential. Additionally, a Python-based visualization code for intermediate states and results was developed. To validate these enhancements, we systematically compared simulations of the Lujan Center Mark3 UCN system obtained with MCUCN and the improved MCUCN code (iMCUCN) against UCNtransport simulations, and compared the results of the SUN1 system simulated by MCUCN and iMCUCN with measurements. The study demonstrates that iMCUCN effectively simulates the storage and transport of ultracold neutrons in He-II.
Funding: Supported by the Fundamental Research Funds for the Central Universities under Grant 3072025YC0802; the National Natural Science Foundation of China under Grant 62001138; and the Heilongjiang Provincial Natural Science Foundation of China under Grant LH2021F009.
Abstract: Blind recognition of low-density parity-check (LDPC) codes has gradually attracted more attention with the development of military and civil communications. However, for parity-check matrices with relatively high row weights, existing blind recognition algorithms based on a candidate set generally perform poorly. In this paper, we propose a blind recognition method for LDPC codes, called the tangent-function-assisted least squares (TLS) method, which improves recognition performance by constructing a new cost function. To characterize the degree of constraint between received vectors and parity-check vectors, a feature function based on the tangent function is constructed. A cost function based on the least squares method is then established from the feature function values satisfying the parity-check relationship, and the minimum average value in TLS is obtained over the candidate set. Numerical analysis and simulation results show that the recognition performance of the TLS algorithm is consistent with theoretical results, and the proposed method outperforms existing algorithms.
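The parity-check relationship underlying the TLS cost function can be sketched directly: a candidate vector h belongs to the dual code when received words c satisfy c · hᵀ = 0 (mod 2), so noiseless codewords satisfy a true check at rate 1 while a random vector is satisfied only about half the time. The tangent-based feature and least-squares cost built on top of this are the paper's contribution and are not reproduced; the toy even-parity code below is an assumption for illustration.

```python
import numpy as np

def satisfaction_rate(received, h):
    """Fraction of received words orthogonal (mod 2) to candidate check h.

    The TLS method builds its tangent-function feature and least-squares
    cost on this parity-check relationship; only the raw check is shown.
    """
    return float(np.mean(received.dot(h) % 2 == 0))

rng = np.random.default_rng(0)
# toy code: append an even-parity bit, so h_true = all-ones is a valid check
msgs = rng.integers(0, 2, size=(200, 5))
codewords = np.hstack([msgs, msgs.sum(axis=1, keepdims=True) % 2])
h_true = np.ones(6, dtype=int)
h_rand = np.array([1, 0, 1, 0, 0, 0])
rate_true = satisfaction_rate(codewords, h_true)   # 1.0 without noise
rate_rand = satisfaction_rate(codewords, h_rand)   # near 0.5
```

With channel noise the gap between the two rates shrinks, which is exactly why a sharper feature function (here, tangent-based) helps separate true checks from the rest of the candidate set.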
Abstract: Large language models (LLMs) are increasingly applied in the field of code translation. However, existing evaluation methodologies suffer from two major limitations: (1) the high overlap between test data and pretraining corpora, which introduces significant bias into performance evaluation; and (2) mainstream metrics focus primarily on surface-level accuracy, failing to uncover the underlying factors that constrain model capabilities. To address these issues, this paper presents TCode (Translation-Oriented Code Evaluation benchmark), a complexity-controllable, contamination-free benchmark dataset for code translation, alongside a dedicated static-feature sensitivity evaluation framework. The dataset is carefully designed to control complexity along multiple dimensions, including syntactic nesting and expression intricacy, enabling both broad coverage and fine-grained differentiation of sample difficulty. This design supports precise evaluation of model capabilities across a wide spectrum of translation challenges. The proposed evaluation framework introduces a correlation-driven analysis mechanism based on static program features, enabling predictive modeling of translation success from two perspectives: Code Form Complexity (e.g., code length and character density) and Semantic Modeling Complexity (e.g., syntactic depth, control-flow nesting, and type-system complexity). Empirical evaluations across representative LLMs, including Qwen2.5-72B and Llama3.3-70B, demonstrate that even state-of-the-art models achieve over 80% compilation success on simple samples, but their accuracy drops sharply below 40% on complex cases. Further correlation analysis indicates that Semantic Modeling Complexity alone is correlated with up to 60% of the variance in translation success, with static program features exhibiting nonlinear threshold effects that highlight clear capability boundaries. This study departs from the traditional accuracy-centric evaluation paradigm and, for the first time, systematically characterizes the capabilities of large language models in translation tasks through the lens of static program features. The findings provide actionable insights for model refinement and training-strategy development.
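Two of the static features named above, one from each family, can be sketched in a few lines: maximum nesting depth (a Semantic Modeling Complexity proxy) and non-whitespace character density (a Code Form Complexity proxy). The benchmark's actual feature set is richer; these simplified definitions are illustrative assumptions.

```python
def nesting_depth(code):
    """Maximum bracket nesting depth, a simplified stand-in for the
    benchmark's 'Semantic Modeling Complexity' features (the real
    framework parses syntax trees rather than counting brackets)."""
    depth = best = 0
    for ch in code:
        if ch in "([{":
            depth += 1
            best = max(best, depth)
        elif ch in ")]}":
            depth -= 1
    return best

def char_density(code):
    """Non-whitespace character density, a 'Code Form Complexity' proxy."""
    stripped = sum(1 for ch in code if not ch.isspace())
    return stripped / max(len(code), 1)

snippet = "if (a) { f(g(x)); }"
```

Computing such features over a corpus and correlating them with per-sample translation success is the correlation-driven analysis the framework formalizes.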