In this paper, a 3-D wavelet-fractal coder was used to compress hyperspectral remote sensing images; it combines 3-D improved set partitioning in hierarchical trees (SPIHT) coding with 3-D fractal coding. The hyperspectral image data cube was first transformed by a 3-D wavelet, and 3-D fractal compression coding was applied to the lowest-frequency subband. The remaining coefficients of the higher-frequency subbands were encoded by 3-D improved SPIHT. We used block sets instead of hierarchical trees to enhance SPIHT's flexibility. The classical eight affine transformations of 2-D fractal image compression were generalized to nineteen for 3-D fractal image compression. The new compression method was tested in MATLAB. The experimental results indicate that high compression ratios can be gained with acceptable information loss.
The layered decoding algorithm has been widely used in the implementation of Low Density Parity Check (LDPC) decoders, due to its high convergence speed. However, the pipeline operation of the layered decoder may introduce memory access conflicts, which heavily deteriorate the decoder throughput. To essentially deal with the issue of memory access conflicts, …
The progressive edge-growth (PEG) algorithm is a general method for constructing short low-density parity-check (LDPC) codes; it is a greedy method that places each edge so as to keep girths large. To improve the performance of LDPC codes, many improved PEG (IPEG) algorithms employ multiple metrics to select surviving edges in turn. In this paper, an edge metric (EM) based on the message-passing algorithm (MPA) is introduced into the PEG algorithm; the proposed EM-constrained PEG (EM-PEG) algorithm mainly considers the independence of messages passed from different nodes in the Tanner graph. Numerical results show that the EM-PEG algorithm brings larger bit error rate (BER) performance gains to LDPC codes than the traditional PEG algorithm and the recently proposed multi-edge multi-metric constrained PEG algorithm (MM-PEGA). In addition, the multi-edge EM-constrained PEG (M-EM-PEG) algorithm, which adopts multi-edge EM, may further improve the BER performance.
Test vector compression is a key technique for reducing IC test time and cost, given the explosion of system-on-chip (SoC) test data in recent years. To effectively reduce the bandwidth requirement between the automatic test equipment (ATE) and the circuit under test (CUT), a novel VSPTIDR (variable shifting prefix-tail identifier reverse) code for test stimulus data compression is designed. The encoding scheme is defined and analyzed in detail, and the decoder is presented and discussed. When the probability of 0 bits in the test set is greater than 0.92, the compression ratio of the VSPTIDR code is better than that of the frequency-directed run-length (FDR) code, as shown by theoretical analysis and experiments, and the on-chip area overhead of the VSPTIDR decoder is about 15.75% less than that of the FDR decoder.
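For context, the FDR baseline encodes each run of 0s terminated by a 1 as an equal-length prefix/tail pair: group k (a prefix of k−1 ones and a 0, plus a k-bit tail) covers run lengths 2^k−2 through 2^(k+1)−3. The sketch below is written from the published description of FDR, not from the VSPTIDR paper itself, and ignores a trailing run that lacks a terminating 1:

```python
def fdr_encode(bits):
    """FDR-encode a binary string: each run of 0s terminated by a 1
    becomes a prefix/tail codeword from group k, where group k covers
    run lengths 2**k - 2 .. 2**(k+1) - 3."""
    out = []
    run = 0
    for b in bits:
        if b == '0':
            run += 1
            continue
        k = 1
        while run > 2 ** (k + 1) - 3:
            k += 1
        prefix = '1' * (k - 1) + '0'              # k-1 ones, then a 0
        tail = format(run - (2 ** k - 2), '0{}b'.format(k))
        out.append(prefix + tail)
        run = 0
    return ''.join(out)

data = '0' * 30 + '1' + '0' * 5 + '1'             # a 0-dominated test cube
code = fdr_encode(data)
print(len(data), len(code))                       # 37 input bits -> 14 code bits
```

The heavier the 0-bias of the test set, the longer the runs and the better such run-length codes do, which is why the comparison above is stated in terms of the probability of 0 bits.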
Many classical encoding algorithms for vector quantization (VQ) of image compression that can obtain the globally optimal solution have computational complexity O(N). A pure quantum VQ encoding algorithm with probability of success near 100% has been proposed, which performs approximately 45√N operations. In this paper, a hybrid quantum VQ encoding algorithm combining the classical method and the quantum algorithm is presented. The number of its operations is less than √N for most images, and it is more efficient than the pure quantum algorithm.
This paper presents a new test data compression/decompression method for SoC testing, called hybrid run-length codes. The method gives a full analysis of the factors that influence test parameters: compression ratio, test application time, and area overhead. To improve the compression ratio, the new method is based on variable-to-variable run-length codes, and a novel algorithm is proposed to reorder the test vectors and fill the unspecified bits in the pre-processing step. With a novel on-chip decoder, low test application time and low area overhead are obtained by the hybrid run-length codes. Finally, an experimental comparison on ISCAS 89 benchmark circuits validates the proposed method.
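The pre-processing step described above, reordering vectors and filling unspecified bits, can be illustrated with a toy heuristic. The greedy nearest-neighbor ordering and the X→0 fill below are illustrative stand-ins, not the paper's exact algorithm:

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def greedy_reorder(vectors):
    # Greedy nearest-neighbor ordering: always append the remaining
    # vector closest (in Hamming distance) to the last one chosen,
    # so successive vectors differ in few positions.
    remaining = list(vectors)
    order = [remaining.pop(0)]
    while remaining:
        nxt = min(remaining, key=lambda v: hamming(order[-1], v))
        remaining.remove(nxt)
        order.append(nxt)
    return order

vecs = ["0X01", "0000", "1X11", "0001"]
filled = [v.replace("X", "0") for v in vecs]      # map don't-cares to 0
print(greedy_reorder(filled))
```

Both steps serve the same goal: longer and more regular runs, which variable-to-variable run-length codes exploit.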
This paper presents a description and performance evaluation of a new bit-level, lossless, adaptive, and asymmetric data compression scheme based on the adaptive character wordlength (ACW(n)) algorithm. The proposed scheme enhances the compression ratio of the ACW(n) algorithm by dividing the binary sequence into a number of subsequences (s), each satisfying the condition that the number of decimal values (d) of the n-bit characters is equal to or less than 256. Therefore, the new scheme is referred to as ACW(n, s), where n is the adaptive character wordlength and s is the number of subsequences. The new scheme was used to compress a number of text files from standard corpora. The obtained results demonstrate that the ACW(n, s) scheme achieves a higher compression ratio than many widely used compression algorithms and competitive performance compared with state-of-the-art compression tools.
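The partitioning rule can be sketched directly from the abstract: scan the n-bit characters and cut a new subsequence whenever admitting the next character would push the count of distinct characters past 256. A hedged sketch — `split_acw` and its greedy cut rule are our own naming and construction, not the paper's:

```python
def split_acw(bits, n, max_chars=256):
    """Greedily split a binary string into subsequences in which the
    number of distinct n-bit characters never exceeds max_chars."""
    chunks = [bits[i:i + n] for i in range(0, len(bits), n)]
    subseqs, current, seen = [], [], set()
    for c in chunks:
        if c not in seen and len(seen) == max_chars:
            subseqs.append(''.join(current))      # close subsequence, open next
            current, seen = [], set()
        seen.add(c)
        current.append(c)
    if current:
        subseqs.append(''.join(current))
    return subseqs

# Toy alphabet bound of 2 instead of 256, with n = 2:
print(split_acw('00011011', 2, max_chars=2))
```

With the real bound of 256, each subsequence's characters fit a one-byte alphabet regardless of n, which is what lets n grow adaptively.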
Through a series of studies on arithmetic coding and arithmetic encryption, a novel joint image compression-encryption algorithm based on adaptive arithmetic coding is proposed. The contexts produced in the process of image compression are modified by keys in order to achieve joint image compression and encryption. Combined with the bit-plane coding technique, the discrete wavelet transform coefficients at different resolutions can be encrypted with different keys, so that resolution-selective encryption is realized to meet different application needs. Zero-tree coding is improved, and adaptive arithmetic coding is introduced. The proposed joint compression-encryption algorithm is then simulated. The simulation results show that, as long as the parameters are selected appropriately, the compression efficiency of the proposed algorithm is basically identical to that of the original image compression algorithm, and its security is better than that of the joint encryption algorithm based on interval splitting.
Permeability is a vital property of rock mass, which is highly affected by tectonic stress and human engineering activities. Comprehensive monitoring of pore pressure and flow rate distributions inside the rock mass is very important for elucidating the permeability evolution mechanisms; this is difficult to realize in the laboratory but easy to achieve in numerical simulations. Therefore, the particle flow code (PFC), a discrete element method, is used to simulate the permeability behavior of rock materials in this study. Owing to the limitations of the existing solid-fluid coupling algorithm in PFC, an improved flow-coupling algorithm is presented to better reflect the preferential flow in rock fractures. A comparative analysis of the original and improved algorithms in simulating rock permeability evolution during triaxial compression shows that the improved algorithm better describes the experimental phenomena. Furthermore, the evolution of pore pressure and flow rate distributions during the flow process is analyzed using the improved algorithm. It is concluded that during steady flow in the fractured specimen, both pore pressure and flow rate preferentially transmit through the fractures rather than the rock matrix. Based on the results, fractures are divided into three types: I) fractures linked to both the inlet and outlet, II) fractures linked only to the inlet, and III) fractures linked only to the outlet. Type I fractures are always the preferential propagation path for both pore pressure and flow rate. For type II fractures, the pore pressure increases and then becomes steady, whereas the flow rate first increases, begins to decrease after the flow reaches the end of the fracture, and finally vanishes. There is no obvious pore pressure or flow rate concentration within type III fractures.
Vector quantization (VQ) is an important data compression method. The key step of VQ encoding is finding the closest vector among N vectors for a feature vector. Many classical linear search algorithms take O(N) steps of distance computation between two vectors. A quantum VQ iteration and a corresponding quantum VQ encoding algorithm that takes O(√N) steps are presented in this paper. The unitary operation of distance computation can be performed on a number of vectors simultaneously because a quantum state exists in a superposition of states. The quantum VQ iteration comprises three oracles; by contrast, many quantum algorithms, such as Shor's factorization algorithm and Grover's algorithm, have only one oracle. An entangled state is generated and used; by contrast, the state in Grover's algorithm is not entangled. The quantum VQ iteration is a rotation over a subspace; by contrast, the Grover iteration is a rotation over the global space. The quantum VQ iteration extends the Grover iteration to more complex searches that require more oracles. The method of the quantum VQ iteration is universal.
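The √N scaling above is the hallmark of Grover-style amplitude amplification: the optimal number of rotations to find one marked item among N is about (π/4)√N, versus roughly N probes for classical linear search. A quick numeric illustration of the generic Grover count (not of the three-oracle VQ iteration itself):

```python
import math

def grover_iterations(N):
    # Optimal number of Grover rotations to amplify one marked item
    # out of N: about (pi/4) * sqrt(N), rounded to the nearest integer.
    return round(math.pi / 4 * math.sqrt(N))

for N in (256, 4096, 65536):
    # Linear search would take on the order of N probes instead.
    print(N, grover_iterations(N))
```

Already at N = 65536 codebook vectors, the quadratic speedup cuts the search from tens of thousands of probes to a couple of hundred rotations.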
This paper describes a new interleaver construction technique for turbo codes. The technique searches for as many pseudo-random interleaving patterns as possible under a given condition using genetic algorithms (GAs). The new interleavers retain the advantages of S-random interleavers, and this construction technique reduces the time taken to generate pseudo-random interleaving patterns under a given condition. The results obtained indicate that the new interleavers yield performance equal to or better than that of S-random interleavers. Compared to the S-random interleaver, this design requires a lower level of computational complexity.
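For reference, the S-random property the new interleavers are compared against can be checked mechanically: indices within S of each other must map to outputs at least S apart (one common formulation; variants differ in strict versus non-strict inequalities). The sketch below pairs the checker with a naive rejection-sampling constructor, far simpler than the GA search described above:

```python
import random

def is_s_random(perm, S):
    # Indices within S of each other must map to outputs at least
    # S apart (one common formulation of the S-random condition).
    n = len(perm)
    return all(abs(perm[i] - perm[j]) >= S
               for i in range(n)
               for j in range(max(0, i - S), i))

def s_random_interleaver(n, S, tries=100, seed=0):
    # Naive rejection construction: shuffle until the condition holds.
    rng = random.Random(seed)
    perm = list(range(n))
    for _ in range(tries):
        rng.shuffle(perm)
        if is_s_random(perm, S):
            return perm
    return None                                   # no luck within budget

print(s_random_interleaver(16, 2))
```

Rejection sampling degrades badly as S approaches √(n/2), which is exactly the regime where guided searches such as the GA above pay off.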
For quantum sparse graph codes with stabilizer formalism, the unavoidable girth-four cycles in their Tanner graphs greatly degrade the iterative decoding performance of the standard belief-propagation (BP) algorithm. In this paper, we present a jointly-check iterative algorithm suitable for decoding quantum sparse graph codes efficiently. Numerical simulations show that this modified method outperforms the standard BP algorithm with an obvious performance improvement.
A real-time data compression wireless sensor network based on the Lempel-Ziv-Welch (LZW) encoding algorithm is designed for the increasing data volume of terminal nodes when using ZigBee for long-distance wireless communication. The system consists of a terminal node, a router, a coordinator, and an upper computer. The terminal node is responsible for storing and sending the collected data after it has been compressed with the LZW algorithm; the router is responsible for relaying data in the wireless network; the coordinator is responsible for sending the received data to the upper computer. For network functionality, the development and configuration of the CC2530 chips on the terminal, router, and coordinator nodes are completed using the Z-stack protocol stack, and the network is successfully organized. Simulation analysis and test verification show that the system realizes wireless acquisition and storage of remote data and reduces the network occupancy rate through data compression, which has practical value and application prospects.
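The LZW algorithm running on the terminal node is the textbook dictionary coder: emit the code of the longest already-seen prefix, then extend the dictionary by one entry. A compact sketch:

```python
def lzw_compress(data: bytes):
    """Textbook LZW: emit the dictionary code of the longest known
    prefix, then add that prefix extended by one byte to the table."""
    table = {bytes([i]): i for i in range(256)}   # codes 0-255 = single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = len(table)                # next free code
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

print(lzw_compress(b"ABABABA"))                   # [65, 66, 256, 258]
```

Seven input bytes become four codes because "AB" and "ABA" are reused from the dictionary; the decoder rebuilds the same table on the fly, so no dictionary needs to be transmitted over the ZigBee link.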
Fountain codes are considered a promising coding technique for underwater acoustic communication (UAC), which is challenged by the unique propagation features of the underwater acoustic channel and the harsh marine environment. Luby transform (LT) codes are the first codes to fully realize the digital fountain concept. However, in conventional LT encoding/decoding algorithms, stopping sets occur and terminate the decoding due to the imperfect coverage (IC) of input symbols and short cycles in the generator matrix. Thus, the recovery probability is reduced, high coding overhead is required, and decoding delay is increased. These issues are disadvantages when applying LT codes in underwater acoustic communication. To solve them, novel encoding/decoding algorithms are proposed. First, a doping and non-uniform selecting (DNS) encoding algorithm is proposed to address the IC problem and the generation of short cycles, which reduces the probability that stopping sets occur during decoding. Second, a hybrid on-the-fly Gaussian elimination and belief propagation (OFG-BP) decoding algorithm is designed to reduce the decoding delay and efficiently utilize the information in stopping sets. Comparisons via Monte Carlo simulation confirm that the proposed schemes achieve better overall decoding performance than conventional schemes.
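A single LT encoding step, as referenced above, draws a degree d from a degree distribution, picks d input symbols, and XORs them. The sketch below uses uniform neighbor selection; the paper's DNS scheme deliberately biases this choice to fix imperfect coverage, which we do not reproduce here:

```python
import random

def lt_encode_symbol(source, degree_dist, rng):
    """One LT-coded symbol: draw a degree d from degree_dist
    (a list of (degree, probability) pairs), pick d distinct input
    symbols uniformly at random, and XOR them together."""
    degrees, probs = zip(*degree_dist)
    d = rng.choices(degrees, weights=probs)[0]
    neighbors = rng.sample(range(len(source)), d)
    value = 0
    for i in neighbors:
        value ^= source[i]
    return neighbors, value

rng = random.Random(1)
src = [0b1010, 0b0111, 0b1100, 0b0001]            # toy input symbols
dist = [(1, 0.1), (2, 0.5), (3, 0.4)]             # toy degree distribution
print(lt_encode_symbol(src, dist, rng))
```

An input symbol that is never sampled by any coded symbol is exactly the imperfect-coverage failure mode: no amount of extra coded symbols that miss it can recover it.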
A layered compression algorithm is presented that delivers spatially scalable encoded bit streams for a remote video monitoring system. The complexity of the algorithm is modest, and it is well suited to real-time implementation. Based on the layered compression algorithm, a codec system model is established. High-speed video compression can be realized with parallel data compression in this codec system. For image reconstruction, a prediction method using the two nearest pixels is presented.
This paper reviews recent progress in the field of electrocardiogram (ECG) compression and compares the efficiency of several compression algorithms. By experimenting on 500 cases of ECG signals from the ECG database of China, it obtained numerical indexes for each algorithm. Then, using the automatic diagnostic program developed by Shanghai Zhongshan Hospital, it also obtained the parameters of the signals reconstructed by the linear approximation distance threshold (LADT), wavelet transform (WT), differential pulse code modulation (DPCM), and discrete cosine transform (DCT) algorithms. The results show that when the percent root-mean-square difference (PRD) index is less than 2.5%, the diagnostic agreement ratio is more than 90%; that the PRD index cannot fully capture the loss of significant clinical information; and that the wavelet algorithm outperforms the other methods at the same compression ratio (CR). The statistical results for the parameters of the various methods, together with the clinical diagnostic results, are of value and originality in the field of ECG compression research.
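The PRD figure used above has a standard (though not unique; some variants subtract the signal mean) definition: the root of the squared reconstruction error normalized by the signal energy, times 100. A minimal sketch:

```python
import math

def prd(original, reconstructed):
    # Percent root-mean-square difference between an ECG record and
    # its reconstruction; the review reports that below 2.5% the
    # diagnostic agreement ratio stayed above 90%.
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

x  = [10.0, 12.0, 9.0, 11.0]                      # toy samples, not real ECG
xr = [10.1, 11.9, 9.1, 11.0]
print(round(prd(x, xr), 3))
```

Because PRD is a global energy ratio, a reconstruction can score well while smearing a clinically critical local feature such as a QRS notch, which is the limitation the review points out.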
The HT-7 superconducting tokamak at the Institute of Plasma Physics of the Chinese Academy of Sciences is an experimental device for fusion research in China. The main task of the HT-7 data acquisition system is to acquire, store, analyze, and index the data, whose volume reaches hundreds of megabytes. Beyond hardware and software support, providing sufficient capacity for data storage, processing, and transfer is an even more important problem, and the key technology for dealing with it is the data compression algorithm. In this paper, the data format in HT-7 is introduced first, and then the data compression algorithm LZO, a portable lossless data compression algorithm written in ANSI C, is analyzed. This compression algorithm, which fits well with data acquisition and distribution in nuclear fusion experiments, offers fairly fast compression and extremely fast decompression. Finally, a performance evaluation of the LZO application in HT-7 is given.
Recent developments in computer vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
It is known that Block Turbo Codes (BTC) can be nearly optimally decoded by the Chase-II algorithm, in which the Least Reliable Bits (LRBs) are chosen empirically to keep the size of the test patterns (sequences) relatively small and to reduce the decoding complexity. While there are also adaptive techniques in which the decoder's LRBs adapt to an external parameter of the decoder such as the SNR (Signal-to-Noise Ratio) level, a novel adaptive algorithm for BTC based on the statistics of an internal variable of the decoder itself is proposed in this paper. Unlike previously reported results, it collects statistics of the multiplicity of the candidate sequences, i.e., the number of identical candidate sequences with the same minimum squared Euclidean distance resulting from the decoding of test sequences. Monte Carlo simulations show that the proposed adaptive algorithm incurs only about 0.02 dB coding loss while its average complexity is about 42% less than that of Pyndiah's iterative decoding algorithm using a fixed LRBs parameter.
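Chase-II's test-pattern generation is simple to state: take the hard-decision word, locate the p least reliable positions, and flip every subset of them, giving 2^p candidate sequences to re-decode. A sketch of that step (the adaptive selection proposed in the paper is not shown):

```python
from itertools import product

def chase2_patterns(hard_bits, reliabilities, p):
    """Chase-II test set: flip every combination of the p least
    reliable bit positions of the hard decision (2**p patterns)."""
    lrbs = sorted(range(len(hard_bits)),
                  key=lambda i: reliabilities[i])[:p]
    patterns = []
    for flips in product([0, 1], repeat=p):
        pat = list(hard_bits)
        for pos, f in zip(lrbs, flips):
            pat[pos] ^= f
        patterns.append(pat)
    return patterns

hard = [1, 0, 1, 1, 0]
rel  = [0.9, 0.1, 0.8, 0.2, 0.7]                  # |LLR|-style reliabilities
tests = chase2_patterns(hard, rel, 2)
print(len(tests))                                 # 4 test patterns
```

Since the candidate count grows as 2^p, keeping p small is exactly the complexity lever the adaptive algorithm above tunes from decoder-internal statistics.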
Based on mirror blocks, a totally coded algorithm (TCA) for switched-current (SI) network analysis in the frequency domain is presented. The algorithm is simple, practical, and suitable for any switched-current network. It provides a basis for the analysis and design of switched-current networks.
Funding: National Natural Science Foundation of China (No. 60975084)
Funding: the National Natural Science Foundation of China and the National Key Basic Research Program of China. The authors would like to thank all project partners for their valuable contributions and feedback.
文摘Abstract: The layered decoding algorithm has been widely used in the implementation of Low Density Parity Check (LDPC) decoders, due to its high convergence speed. However, the pipeline operation of the layered decoder may introduce memory access conflicts, which heavily deteriorates the decoder throughput. To essentially deal with the issue of memory access conflicts,
文摘The progressive edge-growth(PEG)al-gorithm is a general method to construct short low-density parity-check(LDPC)codes and it is a greedy method to place each edge with large girths.In order to improve the performance of LDPC codes,many im-proved PEG(IPEG)algorithms employ multi metrics to select surviving edges in turn.In this paper,the pro-posed edges metric(EM)based on message-passing algorithm(MPA)is introduced to PEG algorithm and the proposed EM constrained PEG(EM-PEG)algo-rithm mainly considers the independence of message passing from different nodes in Tanner graph.The numerical results show that our EM-PEG algorithm brings better bit error rate(BER)performance gains to LDPC codes than the traditional PEG algorithm and the powerful multi-edge multi-metric constrained PEG algorithm(MM-PEGA)proposed recently.In ad-dition,the multi-edge EM constrained PEG(M-EM-PEG)algorithm which adopts multi-edge EM may fur-ther improve the BER performance.
Funding: supported by the Shenzhen Government R&D Project under Grant No. JC200903160361A
Funding: supported by the Natural Science Foundation of Hainan Province, China (Grant No. 613155)
Funding: Project (BK20150005) supported by the Natural Science Foundation of Jiangsu Province for Distinguished Young Scholars, China; Project (2015XKZD05) supported by the Fundamental Research Funds for the Central Universities, China
Funding: Supported by the National Natural Science Foundation of China (60372057) and the Key Open Laboratory on Information Science and Engineering of Railway Transportation Ministry, Beijing Jiaotong University of China (KLISAE-0103).
Abstract: This paper describes a new interleaver construction technique for turbo codes. The technique uses genetic algorithms (GAs) to search for as many pseudo-random interleaving patterns as possible under a given constraint. The new interleavers retain the advantages of S-random interleavers, and the construction technique reduces the time taken to generate pseudo-random interleaving patterns under that constraint. The results indicate that the new interleavers perform as well as or better than S-random interleavers, while requiring lower computational complexity than the S-random design.
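The S-random property that these interleavers build on can be illustrated with a plain greedy search (the paper's GA-based search is not reproduced here); the interleaver length and spreading factor below are illustrative.

```python
import random

def s_random_interleaver(n, s, max_tries=1000):
    """Greedy construction of an S-random permutation of length n.

    S-random property: any two input positions within distance s must
    map to output values more than s apart.  The greedy pass checks a
    candidate only against the last s placed values, which covers
    exactly the positions within distance s of the current one.
    """
    for _ in range(max_tries):
        pool = list(range(n))
        random.shuffle(pool)
        perm, ok = [], True
        for _ in range(n):
            for j, cand in enumerate(pool):
                if all(abs(cand - p) > s for p in perm[-s:]):
                    perm.append(pool.pop(j))
                    break
            else:
                ok = False  # dead end; restart with a new shuffle
                break
        if ok:
            return perm
    return None

random.seed(7)  # fixed seed so the illustration is repeatable
perm = s_random_interleaver(32, 3)
```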
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 60972046) and a grant from the National Defense Pre-Research Foundation of China.
Abstract: For quantum sparse graph codes with the stabilizer formalism, the unavoidable girth-four cycles in their Tanner graphs greatly degrade the iterative decoding performance of the standard belief-propagation (BP) algorithm. This paper presents a jointly-check iterative algorithm suitable for decoding quantum sparse graph codes efficiently. Numerical simulations show that the modified method clearly outperforms the standard BP algorithm.
Abstract: A real-time data compression wireless sensor network based on the Lempel-Ziv-Welch (LZW) encoding algorithm is designed to cope with the increasing data volume of terminal nodes when using ZigBee for long-distance wireless communication. The system consists of a terminal node, a router, a coordinator, and an upper computer. The terminal node stores and sends the collected data after compressing it with the LZW algorithm; the router relays data within the wireless network; and the coordinator sends the received data to the upper computer. The network functions are realized by developing and configuring the CC2530 chips on the terminal, router, and coordinator nodes with the Z-Stack protocol stack, and the network is successfully formed. Simulation analysis and test verification show that the system achieves wireless acquisition and storage of remote data and reduces network occupancy through data compression, demonstrating practical value and application prospects.
Funding: Supported by the National Natural Science Foundation of China (61371099) and the Fundamental Research Funds for the Central Universities of China (HEUCF150812/150810).
Abstract: Fountain codes are considered a promising coding technique for underwater acoustic communication (UAC), which is challenged by the unique propagation features of the underwater acoustic channel and the harsh marine environment. Luby transform (LT) codes are the first codes to fully realize the digital fountain concept. However, in conventional LT encoding/decoding algorithms, the imperfect coverage (IC) of input symbols and short cycles in the generator matrix cause stopping sets that terminate the decoding. As a result, the recovery probability is reduced, high coding overhead is required, and the decoding delay increases; these issues are disadvantages when applying LT codes in underwater acoustic communication. To address them, novel encoding/decoding algorithms are proposed. First, a doping and non-uniform selection (DNS) encoding algorithm is proposed to solve the IC problem and the generation of short cycles, reducing the probability that stopping sets occur during decoding. Second, a hybrid on-the-fly Gaussian elimination and belief propagation (OFG-BP) decoding algorithm is designed to reduce the decoding delay and efficiently utilize the information in stopping sets. Monte Carlo simulations confirm that the proposed schemes achieve better overall decoding performance than conventional schemes.
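The conventional LT encoding step that the DNS scheme modifies can be sketched as follows, assuming a toy degree distribution and uniform neighbor selection; the source symbols and distribution are illustrative, not the paper's parameters.

```python
import random

def lt_encode_symbol(source, degree_dist, rng):
    """Generate one LT-coded symbol: sample a degree d from the degree
    distribution, pick d distinct input symbols uniformly at random,
    and XOR them.  (The DNS scheme in the abstract replaces this
    uniform selection to avoid imperfect coverage and short cycles.)
    """
    degrees, probs = zip(*degree_dist)
    d = rng.choices(degrees, weights=probs)[0]
    neighbors = rng.sample(range(len(source)), d)  # d distinct indices
    value = 0
    for i in neighbors:
        value ^= source[i]
    return neighbors, value

rng = random.Random(1)
src = [0x3A, 0x5C, 0x7E, 0x12]            # illustrative input symbols
dist = [(1, 0.25), (2, 0.5), (3, 0.25)]   # toy degree distribution
nbrs, val = lt_encode_symbol(src, dist, rng)
```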
Abstract: A layered compression algorithm is presented that delivers spatially scalable encoded bit streams for a remote video monitoring system. The complexity of the algorithm is modest, and it is well suited to real-time implementation. A codec system model is established based on the layered compression algorithm; high-speed video compression can be realized in this codec system through parallel data compression. For image reconstruction, a prediction method using the two nearest pixels is presented.
Abstract: This paper reviews recent progress in electrocardiogram (ECG) compression and compares the efficiency of several compression algorithms. Experiments on 500 ECG signal cases from the ECG database of China yielded numerical indexes for each algorithm. Using the automatic diagnostic program developed by Shanghai Zhongshan Hospital, parameters of the reconstructed signals were also obtained for the linear approximation distance threshold (LADT), wavelet transform (WT), differential pulse code modulation (DPCM), and discrete cosine transform (DCT) algorithms. The results show that when the percent root mean square difference (PRD) is less than 2.5%, the diagnostic agreement ratio exceeds 90%; that the PRD index alone cannot fully reflect the loss of clinically significant information; and that the wavelet algorithm outperforms the other methods at the same compression ratio (CR). The statistical comparison of the parameters of the various methods against clinical diagnostic results is of value and originality in the field of ECG compression research.
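The PRD fidelity index referred to above can be computed as follows; this is one common (non-mean-subtracted) variant of the formula, and the sample traces are illustrative.

```python
import math

def prd(original, reconstructed):
    """Percent root-mean-square difference between an original ECG
    trace and its reconstruction: 100 * sqrt(sum((x - x_hat)^2) / sum(x^2))."""
    num = sum((o - r) ** 2 for o, r in zip(original, reconstructed))
    den = sum(o ** 2 for o in original)
    return 100.0 * math.sqrt(num / den)

# Illustrative samples: a tiny "signal" and a slightly distorted copy.
x = [1.0, 2.0, 3.0, 4.0]
x_hat = [1.0, 2.1, 2.9, 4.0]
```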
Funding: The project was supported by the Mega-Science Engineering Project of the Chinese Academy of Sciences.
Abstract: The HT-7 superconducting tokamak at the Institute of Plasma Physics, Chinese Academy of Sciences, is an experimental device for fusion research in China. The main task of the HT-7 data acquisition system is to acquire, store, analyze, and index the data, whose volume reaches hundreds of millions of bytes. Beyond hardware and software support, the large demands on data storage, processing, and transfer pose an even more important problem, and the key technology for addressing it is the data compression algorithm. This paper first introduces the data format in HT-7 and then analyzes the data compression algorithm LZO, a portable lossless data compression algorithm implemented in ANSI C. This algorithm, which fits well with data acquisition and distribution in nuclear fusion experiments, offers fast compression and extremely fast decompression. Finally, a performance evaluation of the LZO application in HT-7 is given.
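The kind of evaluation described (compression ratio, compression and decompression times, lossless round trip) can be sketched as follows; since LZO itself is a C library, Python's standard zlib stands in here purely to illustrate the measurement procedure, not LZO's performance.

```python
import time
import zlib

def evaluate(data: bytes):
    """Measure compression ratio and timings for one data block.

    zlib at level=1 is used as a stand-in for a fast compressor such
    as LZO; the round-trip check confirms the compression is lossless.
    """
    t0 = time.perf_counter()
    packed = zlib.compress(data, level=1)   # fastest zlib setting
    t_compress = time.perf_counter() - t0

    t0 = time.perf_counter()
    restored = zlib.decompress(packed)
    t_decompress = time.perf_counter() - t0

    assert restored == data                 # lossless round trip
    return len(data) / len(packed), t_compress, t_decompress

# Illustrative, highly repetitive "channel" payload.
ratio, t_c, t_d = evaluate(b"channel-01 " * 4096)
```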
Funding: Supported by Prince Sattam bin Abdulaziz University, Project Number PSAU/2023/R/1444.
Abstract: Recent developments in computer vision have presented novel opportunities to tackle complex healthcare issues, particularly in lung disease diagnosis. One promising avenue involves chest X-rays, which are commonly used in radiology. To fully exploit their potential, researchers have suggested using deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that uses an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, it incorporates transfer learning: a CNN model pre-trained on a vast dataset of chest X-ray images is fine-tuned for the specific task of detecting COVID-19. This method can reduce the amount of labeled data required and enhance overall model performance. We have validated our method through a series of experiments against state-of-the-art architectures.
Funding: Supported by grants from the National Natural Science Foundation of China and by NUAA research funding.
Abstract: It is known that block turbo codes (BTC) can be decoded nearly optimally by the Chase-II algorithm, in which the least reliable bits (LRBs) are chosen empirically to keep the number of test patterns (sequences) relatively small and to reduce decoding complexity. While other adaptive techniques exist, in which the decoder's LRBs adapt to an external parameter such as the signal-to-noise ratio (SNR), this paper proposes a novel adaptive algorithm for BTC based on the statistics of an internal variable of the decoder itself. Unlike previously reported approaches, it collects statistics on the multiplicity of the candidate sequences, i.e., the number of identical candidate sequences with the same minimum squared Euclidean distance resulting from the decoding of the test sequences. Monte Carlo simulations show that the proposed adaptive algorithm incurs only about 0.02 dB of coding loss while reducing the average complexity by about 42% compared with Pyndiah's iterative decoding algorithm using a fixed LRB parameter.
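The Chase-II preprocessing step shared by the fixed and adaptive schemes, selecting the p least reliable bit positions and enumerating the 2^p test patterns, can be sketched as follows; the LLR values and the choice of p are illustrative, and the sign convention (negative LLR means a hard-decision 1) is an assumption.

```python
from itertools import product

def chase2_test_patterns(llrs, p):
    """Chase-II preprocessing: find the p least reliable positions
    (smallest |LLR|) and build the 2^p test patterns that flip every
    subset of them.  (The adaptive scheme in the abstract tunes p from
    internal decoder statistics; here p is fixed.)
    """
    hard = [1 if l < 0 else 0 for l in llrs]       # assumed sign convention
    lrbs = sorted(range(len(llrs)), key=lambda i: abs(llrs[i]))[:p]
    patterns = []
    for flips in product([0, 1], repeat=p):        # all 2^p flip subsets
        pat = hard[:]
        for pos, f in zip(lrbs, flips):
            pat[pos] ^= f
        patterns.append(pat)
    return lrbs, patterns

llrs = [2.3, -0.1, 1.8, 0.4, -3.0]  # illustrative soft values
lrbs, patterns = chase2_test_patterns(llrs, 2)
```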
Abstract: Based on mirror blocks, a totally coded algorithm (TCA) for switched-current (SI) network analysis in the frequency domain is presented. The algorithm is simple, practical, and suitable for any switched-current network, and it provides a basis for the analysis and design of switched-current networks.