Abstract: Monogenic binary coding (MBC) is known to be effective for local feature extraction, while sparse or collaborative representation based classification (CRC) has shown promising results in robust face recognition. In this paper, a novel face recognition algorithm fusing MBC and CRC, named M-CRC, is proposed, in which the dimensionality problem is resolved by a projection matrix. The proposed algorithm is evaluated on benchmark face databases, including AR, PolyU-NIR and CAS-PEAL. The results indicate a significant increase in performance compared with state-of-the-art face recognition methods.
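As a rough illustration of the collaborative-representation stage only, and not of the authors' M-CRC fusion with monogenic binary codes, the sketch below codes a query over all training samples with a ridge-regularized least-squares solution and assigns the class with the smallest normalized reconstruction residual; the function name, regularization weight and toy data are hypothetical.

```python
import numpy as np

def crc_classify(D, labels, y, lam=1e-3):
    """Collaborative representation based classification (regularized least squares).

    D      : d x n matrix whose columns are training feature vectors
    labels : length-n array of class labels, one per column of D
    y      : d-dimensional query feature vector
    lam    : ridge regularization weight
    """
    # Code the query over ALL training samples: x = (D^T D + lam*I)^{-1} D^T y
    x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

    best_class, best_residual = None, np.inf
    for c in np.unique(labels):
        idx = labels == c
        # Class-wise reconstruction residual, normalized by coefficient energy
        r = np.linalg.norm(y - D[:, idx] @ x[idx]) / (np.linalg.norm(x[idx]) + 1e-12)
        if r < best_residual:
            best_class, best_residual = c, r
    return best_class

# toy usage: 3 classes, 5 samples each, 64-dimensional random features
rng = np.random.default_rng(0)
labels = np.repeat(np.arange(3), 5)
D = rng.normal(size=(64, 15))
print(crc_classify(D, labels, D[:, 7]))  # query is a training sample of class 1
```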
Abstract: Multi-firmware comparison techniques can improve efficiency when auditing firmware images in bulk. However, the problem of matching functions across multiple firmware images has not been studied before. This paper proposes a multi-firmware comparison method based on evolutionary algorithms and trusted base points. We first model multi-firmware comparison as a multi-sequence matching problem. Then, we propose a fitness function and a population generation method based on trusted base points. Finally, we apply an evolutionary algorithm to find the optimal result. At the same time, we design the similarity between matching results as an evaluation metric to measure the effect of multi-firmware comparison. The experiments show that the proposed method outperforms Bindiff and the string-based method. Specifically, the similarity between the matching results of the proposed method and those of Bindiff is 61%, and the similarity between the matching results of the proposed method and those of the string-based method is 62.8%. By sampling and manual verification, the accuracy of the matching results of the proposed method is about 66.4%.
Abstract: Most solutions for detecting buffer overflows are based on source code. But the requirement for source code is not always practical, especially for commercial software. A new approach was presented to statically detect potential buffer overflow vulnerabilities in the binary code of software. The binary code was translated into assembly code without losing the information of string operation functions. A feature code abstract graph was constructed to generate more accurate constraint statements, and the assembly code was analyzed using integer range constraints. After obtaining an elementary report on suspicious code where buffer overflows may happen, a control-flow-sensitive analysis using the program dependence graph was performed to decrease the false positive rate. A prototype was implemented, which demonstrates the feasibility and efficiency of the new approach.
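To make the integer-range idea concrete, here is a minimal sketch of interval tracking for a single buffer write, assuming a toy abstract domain with only addition; it is not the paper's constraint generator, and the class and function names are invented for illustration.

```python
class Interval:
    """Integer range [lo, hi] over-approximating the values an index may take."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def add(self, other):
        # Abstract addition: add the bounds component-wise
        return Interval(self.lo + other.lo, self.hi + other.hi)

def may_overflow(index, buf_size):
    """Report a potential out-of-bounds write if the range can leave [0, buf_size)."""
    return index.hi >= buf_size or index.lo < 0

# toy example: i in [0, 10] and offset in [0, 6] indexing a 16-byte buffer
i, off = Interval(0, 10), Interval(0, 6)
print(may_overflow(i.add(off), buf_size=16))  # True: the write may reach offset 16
```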
Funding: Supported by the Ministerial Level Advanced Research Foundation (SP240012).
Abstract: A flexible field programmable gate array (FPGA) based radar signal processor is presented. The radar signal processor mainly consists of five functional modules: radar system timer, binary phase coded pulse compression (PC), moving target detection (MTD), constant false alarm rate (CFAR) and target dot processing. Preliminary target dot information is obtained in the PC, MTD and CFAR modules, and a Nios II CPU is used for target dot combination and removal of false sidelobe targets. The system-on-programmable-chip (SOPC) technique is adopted in the system, in which SDRAM is used to cache data. Finally, an FPGA-based binary phase coded radar signal processor is realized and simulation results are given.
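For background on the PC module, binary phase coded pulse compression amounts to correlating the received echo with the phase code; the sketch below does this in software with a 13-bit Barker code and synthetic data, purely as an illustration and unrelated to the authors' FPGA implementation.

```python
import numpy as np

# 13-bit Barker sequence used as the binary phase code (+1/-1 chips)
BARKER13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

def pulse_compress(echo, code=BARKER13):
    """Binary phase coded pulse compression: correlate the echo with the code."""
    return np.correlate(echo, code, mode="full")

# synthetic echo: the code delayed by 20 samples plus a little noise
rng = np.random.default_rng(1)
echo = np.zeros(100)
echo[20:20 + len(BARKER13)] += BARKER13
echo += 0.1 * rng.normal(size=100)

out = pulse_compress(echo)
print(int(np.argmax(np.abs(out))) - (len(BARKER13) - 1))  # ~20, the target delay
```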
Funding: Supported by the National Natural Science Foundation of China under Grant No. 61501064 and the Sichuan Provincial Science and Technology Project under Grant No. 2016GZ0122.
Abstract: Erasure codes over binary fields, which need only AND and XOR operations in encoding and decoding and therefore have high computational efficiency, are widely used in various fields of information technology. A matrix decoding method is proposed in this paper. The method is a universal data reconstruction scheme for erasure codes over binary fields. Besides a pre-judgment of whether errors can be recovered, the method can rebuild sectors of lost data on a fault-tolerant storage system constructed from erasure codes when disk errors occur. The data reconstruction process of the new method has simple and clear steps, so it is easy to implement in software. Moreover, it can easily be applied to non-binary fields, so the method is expected to find extensive application in the future.
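The binary-field arithmetic the abstract relies on reduces to bitwise XOR; the minimal single-parity sketch below rebuilds one lost block by XOR-ing the survivors. It only illustrates the general principle, not the paper's matrix decoding method.

```python
from functools import reduce

def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks (all arithmetic is over GF(2))."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# encode: one parity block protects the data blocks against a single loss
data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_blocks(data)

# decode: XOR-ing all surviving blocks (data + parity) rebuilds the missing one
recovered = xor_blocks([data[0], data[2], parity])
print(recovered == data[1])  # True
```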
Abstract: Cyclic codes form an important class of codes. They have a very interesting algebraic structure. Furthermore, many important codes, such as binary Hamming codes, Golay codes and BCH codes, are equivalent to cyclic codes. Minimal codewords in linear codes are widely used in constructing decoding algorithms and in studying linear secret sharing schemes. In this paper, we show that in the binary cyclic codes considered all codewords are minimal, except 0 and 1. We then obtain a result on the number of minimal codewords in these binary cyclic codes.
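As a concrete handle on the definition, a nonzero codeword is minimal when no other nonzero codeword has support strictly contained in its own; the brute-force check below, run on the cyclic [7,4] Hamming code for illustration, merely tests this definition and is not the paper's counting argument.

```python
import itertools
import numpy as np

def codewords(G):
    """All codewords of the small binary linear code generated by the rows of G."""
    k = G.shape[0]
    return [tuple(np.mod(np.array(m) @ G, 2)) for m in itertools.product((0, 1), repeat=k)]

def is_minimal(c, words):
    """A nonzero codeword is minimal if no other nonzero codeword's support
    is strictly contained in its support."""
    supp_c = {i for i, bit in enumerate(c) if bit}
    for w in words:
        if w == c or not any(w):
            continue
        supp_w = {i for i, bit in enumerate(w) if bit}
        if supp_w < supp_c:  # strict subset
            return False
    return True

# cyclic [7,4] Hamming code, generator polynomial g(x) = 1 + x + x^3
G = np.array([[1, 1, 0, 1, 0, 0, 0],
              [0, 1, 1, 0, 1, 0, 0],
              [0, 0, 1, 1, 0, 1, 0],
              [0, 0, 0, 1, 1, 0, 1]])
words = codewords(G)
# counts the minimal codewords (here every nonzero word except the all-ones word)
print(sum(is_minimal(c, words) for c in words if any(c)))
```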
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. 62476258, 62076228, and 62325604.
Abstract: Binary multi-view clustering has attracted intense attention from researchers due to its efficiency in handling large-scale datasets. However, previous clustering approaches suffer from at least two limitations. First, they ignore correlations among the features of the original data. As a result, the geometric consistency of the data is not preserved in the to-be-learnt binary representation space. Second, redundant and noisy features mixed into the original data inevitably limit the ultimate clustering performance. In light of this, we propose a novel discriminative binary multi-view clustering (DBMVC) method to address these issues. Specifically, the proposed DBMVC first maps the original data onto the Hamming space to obtain the corresponding binary codes, which effectively reduces the computational complexity and storage costs of the following steps. To enable our method to select useful features from the original data and obtain a discriminative representation, a sparsity-inducing norm is used to constrain the feature projection matrix. In addition, a graph regularization term is further introduced to preserve the local manifold structure of the learned binary representation. Finally, an alternating iterative optimization algorithm is designed to solve the optimization problems of the objective function. Comprehensive experiments on six large-scale multi-view datasets validate that the proposed DBMVC markedly outperforms other state-of-the-art methods in terms of effectiveness and efficiency.
Abstract: Binary analysis, as an important foundational technology, provides support for numerous applications in the fields of software engineering and security research. With the continuous expansion of software scale and the complex evolution of software architecture, binary analysis technology is facing new challenges. To break through existing bottlenecks, researchers have applied artificial intelligence (AI) technology to the understanding and analysis of binary code. The core lies in characterizing binary code, i.e., how to use intelligent methods to generate representation vectors containing semantic information for binary code and apply them to multiple downstream tasks of binary analysis. In this paper, we provide a comprehensive survey of recent advances in binary code representation technology and introduce the workflow of existing research in two parts, i.e., binary code feature selection methods and binary code feature embedding methods. The feature selection part mainly covers two aspects: the definition and classification of features, and feature construction. First, the abstract definition and classification of features are systematically explained; second, the process of constructing specific representations of features is introduced in detail. In the feature embedding part, the embedding methods are classified into four categories according to the intelligent semantic understanding models they use, i.e., their usage of text-embedding and graph-embedding models. Finally, we summarize the overall development of existing research and provide prospects for some potential research directions related to binary code representation technology.
Abstract: Binary Code Similarity Detection (BCSD) is vital for vulnerability discovery, malware detection, and software security, especially when source code is unavailable. Yet, it faces challenges from semantic loss, recompilation variations, and obfuscation. Recent advances in artificial intelligence, particularly natural language processing (NLP), graph representation learning (GRL), and large language models (LLMs), have markedly improved accuracy, enabling better recognition of code variants and deeper semantic understanding. This paper presents a comprehensive review of 82 studies published between 1975 and 2025, systematically tracing the historical evolution of BCSD and analyzing the progressive incorporation of artificial intelligence (AI) techniques. Particular emphasis is placed on the role of LLMs, which have recently emerged as transformative tools in advancing semantic representation and enhancing detection performance. The review is organized around five central research questions: (1) the chronological development and milestones of BCSD; (2) the construction of AI-driven technical roadmaps that chart methodological transitions; (3) the design and implementation of general analytical workflows for binary code analysis; (4) the applicability, strengths, and limitations of LLMs in capturing semantic and structural features of binary code; and (5) the persistent challenges and promising directions for future investigation. By synthesizing insights across these dimensions, the study demonstrates how LLMs reshape the landscape of binary code analysis, offering unprecedented opportunities to improve accuracy, scalability, and adaptability in real-world scenarios. This review not only bridges a critical gap in the existing literature but also provides a forward-looking perspective, serving as a valuable reference for researchers and practitioners aiming to advance AI-powered BCSD methodologies and applications.
Funding: Supported by the Key Laboratory of Cyberspace Security, Ministry of Education, China.
Abstract: Transformer-based models have significantly advanced binary code similarity detection (BCSD) by leveraging their semantic encoding capabilities for efficient function matching across diverse compilation settings. Although adversarial examples can strategically undermine the accuracy of BCSD models and protect critical code, existing techniques predominantly depend on inserting artificial instructions, which incur high computational costs and offer limited diversity of perturbations. To address these limitations, we propose AIMA, a novel gradient-guided assembly instruction relocation method. Our method decouples the detection model into tokenization, embedding, and encoding layers to enable efficient gradient computation. Since token IDs of instructions are discrete and nondifferentiable, we compute gradients in the continuous embedding space to evaluate the influence of each token. The most critical tokens are identified by calculating the L2 norm of their embedding gradients. We then establish a mapping between instructions and their corresponding tokens to aggregate token-level importance into instruction-level significance. To maximize adversarial impact, a sliding window algorithm selects the most influential contiguous segments for relocation, ensuring optimal perturbation with minimal length. This approach efficiently locates critical code regions without expensive search operations. The selected segments are relocated outside their original function boundaries via a jump mechanism, which preserves runtime control flow and functionality while introducing "deletion" effects in the static instruction sequence. Extensive experiments show that AIMA reduces similarity scores by up to 35.8% in state-of-the-art BCSD models. When incorporated into training data, it also enhances model robustness, achieving a 5.9% improvement in AUROC.
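In the spirit of the gradient-guided selection step, the sketch below scores tokens by the L2 norm of their embedding gradients and picks the highest-scoring contiguous window; the gradients are random placeholders standing in for values backpropagated through a real BCSD model, and the window length and function names are assumptions.

```python
import numpy as np

def token_importance(embedding_grads):
    """Importance of each token = L2 norm of the gradient w.r.t. its embedding."""
    return np.linalg.norm(embedding_grads, axis=1)

def most_influential_window(importance, window):
    """Sliding window: start/end of the contiguous span with the largest summed score."""
    sums = np.convolve(importance, np.ones(window), mode="valid")
    start = int(np.argmax(sums))
    return start, start + window

# placeholder gradients for 50 instruction tokens with 128-dim embeddings;
# in the real setting these would come from backpropagating the similarity loss
rng = np.random.default_rng(2)
grads = rng.normal(size=(50, 128))
scores = token_importance(grads)
print(most_influential_window(scores, window=5))
```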
Funding: National Natural Science Foundation of China under Grant Nos. 61973037 and 61673066.
Abstract: The pulse Doppler (PD) fuze is widely used on the current battlefield. However, with the threat of repeater jamming, especially digital radio frequency memory technology, the deficiency of a traditional PD fuze in anti-repeater jamming becomes increasingly apparent. Therefore, a repeater jamming suppression method for a PD fuze based on identity (ID) recognition and chaotic encryption is proposed. Every fuze has its own ID, which is encrypted with different chaotic binary sequences in every pulse period of the transmitted signal. The thumbtack-shaped ambiguity function shows good resolution and distance cutoff characteristics. The anti-repeater jamming ability is analyzed in detail, and the results at different signal-to-noise ratios (SNR) show that the proposed method possesses a strong anti-repeater jamming ability and good range resolution. Furthermore, the anti-repeater jamming ability is influenced by the processing gain, bit error rate (BER) and correlation function. The simulation results validate the theoretical analysis and show that the proposed method can significantly improve the anti-repeater jamming ability of a PD fuze.
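As a toy illustration of encrypting a fuze ID with a chaotic binary sequence, the sketch below derives a keystream from a logistic map and XORs it with the ID bits; the map, its parameters and the 8-bit ID are assumptions made for illustration rather than the paper's actual chaotic system.

```python
def logistic_keystream(length, x0=0.37, r=3.99):
    """Binary keystream from the chaotic logistic map x <- r*x*(1-x), thresholded at 0.5."""
    bits, x = [], x0
    for _ in range(length):
        x = r * x * (1.0 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def xor_encrypt(id_bits, key_bits):
    """Encrypt (and, by symmetry, decrypt) the fuze ID by XOR with the keystream."""
    return [a ^ b for a, b in zip(id_bits, key_bits)]

fuze_id = [1, 0, 1, 1, 0, 0, 1, 0]           # illustrative 8-bit ID
key = logistic_keystream(len(fuze_id))        # a fresh keystream per pulse period
cipher = xor_encrypt(fuze_id, key)
print(cipher, xor_encrypt(cipher, key) == fuze_id)  # round-trips back to the ID
```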
Funding: Supported by the National Natural Science Foundation of China (Nos. 61640412 and 61762052), the Natural Science Foundation of Jiangxi Province (No. 20192BAB207021), and the Science and Technology Research Projects of the Jiangxi Province Education Department (Nos. GJJ170633 and GJJ170632).
Abstract: Remote sensing image registration is still a challenging task owing to the significant influence of nonlinear differences between remote sensing images. To solve this problem, this paper proposes a novel approach to feature-based remote sensing image registration. There are two key contributions: 1) we put forward an improved strategy of composite nonlinear diffusion filtering according to the scale factors in multi-scale space, and 2) we design a multi-scale pyramid space with gradually decreasing resolution. In addition, a binary code string serves as the feature descriptor to improve matching efficiency. Extensive experiments on different categories of remote sensing image datasets are performed for feature extraction and feature registration. The experimental results demonstrate the superiority of the proposed scheme compared with other classical algorithms in terms of correct matching ratio, accuracy and computational efficiency.
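Matching binary code strings reduces to Hamming distance, which can be computed with an XOR followed by a population count; the brute-force matcher below, with descriptors packed as Python integers and an arbitrary distance threshold, is illustrative only.

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors packed as Python integers."""
    return bin(a ^ b).count("1")

def match_descriptors(desc_a, desc_b, max_dist=40):
    """Brute-force nearest-neighbour matching of binary descriptors by Hamming distance."""
    matches = []
    for i, da in enumerate(desc_a):
        j, d = min(((j, hamming(da, db)) for j, db in enumerate(desc_b)),
                   key=lambda t: t[1])
        if d <= max_dist:
            matches.append((i, j, d))
    return matches

# toy 16-bit descriptors; real binary code strings are typically much longer
a = [0b1010101010101010, 0b1111000011110000]
b = [0b1010101010101110, 0b0000111100001111]
print(match_descriptors(a, b, max_dist=4))  # [(0, 0, 1)]
```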
Abstract: This paper describes a multi-threat real-time separation system for a broadband anti-radiation missile seeker. It presents a method that uses a dual-port memory as a comparator to perform real-time hardware separation on PF and PW and to determine the time-of-arrival (TOA) using a sequential difference histogram (SDIF). The method has been applied in practice and has achieved good results.
Funding: Project supported by the National Natural Science Foundation of China (No. 61100074) and the Fundamental Research Funds for the Central Universities, China (No. 2013QNA5008).
Abstract: Context-based adaptive binary arithmetic coding (CABAC) is the major entropy-coding algorithm employed in H.264/AVC. In this paper, we present a new VLSI architecture design for an H.264/AVC CABAC decoder, which optimizes both the decode-decision and decode-bypass engines for high throughput and improves context model allocation for efficient external memory access. Based on the fact that the most probable symbol (MPS) branch is much simpler than the least probable symbol (LPS) branch, a newly organized decode-decision engine consisting of two serially concatenated MPS branches and one LPS branch is proposed to achieve better parallelism at lower timing-path cost. A look-ahead context index (ctxIdx) calculation mechanism is designed to provide the context model for the second MPS branch. A head-zero detector is proposed to improve the performance of the decode-bypass engine according to UEGk encoding features. In addition, to lower the frequency of memory access, we reorganize the context models in external memory and use three circular buffers to cache the context models, neighboring information, and bit stream, respectively. A pre-fetching mechanism with a prediction scheme is adopted to load the corresponding content into a circular buffer to hide external memory latency. Experimental results show that our design can operate at 250 MHz with a 20.71k gate count in SMIC18 silicon technology, and that it achieves an average data decoding rate of 1.5 bins/cycle.
Abstract: Optimization of the mapping rule of bit-interleaved Turbo coded modulation with 16 quadrature amplitude modulation (QAM) is investigated based on the different impacts of the various encoded bit sequences on Turbo decoding performance. Furthermore, a bit-interleaved in-phase and quadrature-phase (I-Q) Turbo coded modulation scheme is designed, analogous to I-Q trellis coded modulation (TCM). Through performance evaluation and analysis, it can be seen that the novel mapping rule outperforms the traditional one, while the I-Q Turbo coded modulation cannot achieve the expected performance. Therefore, there is no obvious advantage in using the I-Q method in bit-interleaved Turbo coded modulation.
Funding: Supported by the National Natural Science Foundation of China (No. 61076021), the National Basic Research Program of China (No. 2009CB320903), and the China Postdoctoral Science Foundation (No. 2012M511364).
Abstract: An adaptive pipelining scheme for an H.264/AVC context-based adaptive binary arithmetic coding (CABAC) decoder for high definition (HD) applications is proposed to solve data hazard problems arising from the data dependencies in the CABAC decoding process. An efficiency model of the CABAC decoding pipeline is derived from the analysis of a common pipeline. Based on that, several adaptive strategies are provided. The pipelining scheme with these strategies can adapt to different types of syntax elements (SEs), and the pipeline does not stall during the decoding process when these strategies are adopted. In addition, the proposed decoder fully supports the H.264/AVC High 4:2:2 profile, and the experimental results show that the efficiency of the decoder is much higher than that of other architectures with one engine. Taking both performance and cost into consideration, our design makes a good tradeoff compared with other work and is sufficient for HD real-time decoding.
Abstract: An evolutionary algorithm is applied to distillation separation sequence synthesis problems, which suffer from combinatorial explosion. The binary tree data structure is used to describe the distillation separation sequence and is directly applied as the coding method. Genetic operators that completely prohibit illegal offspring are designed using graph-theoretic methods. Crossover operators based on a single parent or on two parents are designed. The example shows that the evolutionary algorithm with the two-parent genetic operation searches a smaller fraction of the search space on average, whereas the evolutionary algorithm with the single-parent genetic operation achieves a higher rate of successful minimizations.
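To show why a binary tree is a natural coding of a sharp separation sequence, the toy sketch below encodes each sequence as nested tuples and enumerates all sequences for a four-component feed; it illustrates only the representation, not the paper's genetic operators.

```python
def separation_sequences(components):
    """Enumerate all sharp separation sequences for an ordered component list,
    each encoded as a binary tree of nested tuples (overhead_cut, bottoms_cut)."""
    if len(components) == 1:
        return [components[0]]
    trees = []
    for cut in range(1, len(components)):
        for top in separation_sequences(components[:cut]):
            for bottom in separation_sequences(components[cut:]):
                trees.append((top, bottom))
    return trees

seqs = separation_sequences(list("ABCD"))
print(len(seqs))   # 5 distinct sequences for a 4-component feed (Catalan number C3)
print(seqs[0])     # ('A', ('B', ('C', 'D'))): the direct-sequence flowsheet
```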
Abstract: The self-orthogonality condition with respect to the symplectic inner product is analyzed for the binary code generated by [B1 I B2 B3], where the Bi are binary n × n matrices and I is an identity matrix. By using the binary codes generated by [B1 I B2 B2B1^T], asymptotically good [[2n, n]] additive quantum codes are obtained.
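For reference, the binary symplectic inner product behind the self-orthogonality condition is conventionally written as follows for vectors split into two halves; this is the standard textbook form and is not tied to the paper's specific generator-matrix layout.

```latex
\[
  \bigl\langle (a \mid b),\, (a' \mid b') \bigr\rangle_{s}
  \;=\; a \cdot b' + a' \cdot b \pmod{2},
  \qquad a,\, b,\, a',\, b' \in \mathbb{F}_2^{\,n},
\]
```

A binary code of length 2n is symplectic self-orthogonal when this product vanishes for every pair of its codewords.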
Funding: The work was supported in part by Harbin Normal University's Natural Science Fund (KM2006-20 and KM2005-14), an Educational Department Science and Technology Project (1151112), a Postdoctoral Fund (LRB-KY01043), and a Science and Technology Project of Heilongjiang Province.
Abstract: A new description of additive quantum codes is presented, and a new way to construct good quantum codes [[n, k, d]] is given by using classical binary codes with specific properties in F2^3n. We show several consequences and examples of good quantum codes obtained by using our new description of additive quantum codes.
Abstract: In order to overcome the poor local search ability of the genetic algorithm, which makes the basic genetic algorithm time-consuming and weakens its search ability in the late evolutionary stage, we use Gray coding instead of binary coding at the start of encoding, and we replace the original single-point crossover with multi-point crossover. Finally, experiments show that the improved genetic algorithm not only has a strong search capability but also has effectively improved stability.
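For reference, reflected Gray coding, in which adjacent integers differ in exactly one bit, is commonly cited as the reason the encoding helps local search; the standard conversion below is a generic sketch, not the paper's full encoding pipeline.

```python
def binary_to_gray(b):
    """Convert an integer's binary representation to its reflected Gray code."""
    return b ^ (b >> 1)

def gray_to_binary(g):
    """Invert the Gray coding by cumulative XOR of the progressively shifted value."""
    b, shift = g, 1
    while (g >> shift) > 0:
        b ^= g >> shift
        shift += 1
    return b

# round-trip check and the 3-bit Gray sequence: neighbours differ in one bit
for v in range(8):
    assert gray_to_binary(binary_to_gray(v)) == v
print([format(binary_to_gray(v), "03b") for v in range(8)])
```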