Journal Articles
9 articles found
1. Soft Decoding Scheme of Convolution Code Combined with Huffman Coding
Authors: Guo Dongliang, Chen Xiaoqiang, Wu Lenan. Journal of Southeast University (English Edition), EI CAS, 2002, Issue 3, pp. 208-211 (4 pages)
This paper proposes a modification of the soft output Viterbi decoding algorithm (SOVA) which combines convolution code with Huffman coding. The idea is to extract the bit probability information from the Huffman coding and use it to compute the a priori source information which can be used when the channel environment is bad. The suggested scheme does not require changes on the transmitter side. Compared with separate decoding systems, the gain in signal to noise ratio is about 0.5-1.0 dB with a limi…
Keywords: soft output Viterbi decoding; a priori information; Huffman coding; convolution code
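The core idea in the abstract above, extracting a priori bit probabilities from the Huffman code, can be illustrated in a few lines. The sketch below is a generic Python illustration (the function names and example frequencies are mine, not the authors'): it builds a Huffman table greedily, then computes the prior probability that a codeword begins with a 0 bit, the kind of source statistic a soft-output decoder can fold in when the channel is bad.

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman code table {symbol: bitstring} from frequencies."""
    # Each heap entry: [weight, tie-breaker, tuple of symbols in the subtree]
    heap = [[w, i, (sym,)] for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    codes = {sym: "" for sym in freqs}
    tie = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # lighter subtree gets prefix bit 0
        hi = heapq.heappop(heap)   # heavier subtree gets prefix bit 1
        for sym in lo[2]:
            codes[sym] = "0" + codes[sym]
        for sym in hi[2]:
            codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, [lo[0] + hi[0], tie, lo[2] + hi[2]])
        tie += 1
    return codes

def first_bit_prior(freqs, codes):
    """P(first bit of a codeword = 0): usable as a priori source
    information by a soft-output channel decoder."""
    total = sum(freqs.values())
    return sum(w for s, w in freqs.items() if codes[s][0] == "0") / total
```

With frequencies {a: 5, b: 2, c: 1, d: 1}, the most frequent symbol gets a one-bit code and the prior that a codeword starts with 0 comes out skewed away from the uninformative 1/2; that skew is the redundancy a modified SOVA can exploit.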
2. JOINT SOURCE-CHANNEL DECODING OF HUFFMAN CODES WITH LDPC CODES (cited by: 1)
Authors: Mei Zhonghui, Wu Lenan. Journal of Electronics (China), 2006, Issue 6, pp. 806-809 (4 pages)
In this paper, we present a Joint Source-Channel Decoding algorithm (JSCD) for Low-Density Parity Check (LDPC) codes by modifying the Sum-Product Algorithm (SPA) to account for the source redundancy, which results from the neighbouring Huffman coded bits. Simulations demonstrate that in the presence of source redundancy, the proposed algorithm gives better performance than the Separate Source and Channel Decoding algorithm (SSCD).
Keywords: Low-Density Parity Check (LDPC) codes; Variable Length Codes (VLC); Huffman code; Sum-Product Algorithm (SPA); Joint Source-Channel Decoding (JSCD)
3. Interpolation-Based Reversible Data Hiding in Encrypted Audio with Scalable Embedding Capacity
Authors: Yuan-Yu Tsai, Alfrindo Lin, Wen-Ting Jao, Yi-Hui Chen. Computers, Materials & Continua, 2025, Issue 7, pp. 681-697 (17 pages)
With the rapid expansion of multimedia data, protecting digital information has become increasingly critical. Reversible data hiding offers an effective solution by allowing sensitive information to be embedded in multimedia files while enabling full recovery of the original data after extraction. Audio, as a vital medium in communication, entertainment, and information sharing, demands the same level of security as images. However, embedding data in encrypted audio poses unique challenges due to the trade-offs between security, data integrity, and embedding capacity. This paper presents a novel interpolation-based reversible data hiding algorithm for encrypted audio that achieves scalable embedding capacity. By increasing sample density through interpolation, embedding opportunities are significantly enhanced while maintaining encryption throughout the process. The method further integrates multiple most significant bit (multi-MSB) prediction and Huffman coding to optimize compression and embedding efficiency. Experimental results on standard audio datasets demonstrate the proposed algorithm's ability to embed up to 12.47 bits per sample with over 9.26 bits per sample available for pure embedding capacity, while preserving full reversibility. These results confirm the method's suitability for secure applications that demand high embedding capacity and perfect reconstruction of original audio. This work advances reversible data hiding in encrypted audio by offering a secure, efficient, and fully reversible data hiding framework.
Keywords: reversible data hiding; encrypted audio; interpolation; sampling; multi-MSB prediction; Huffman coding
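The interpolation step that creates the embedding room can be sketched generically. The toy code below is a simplification under my own assumptions (no encryption stage, no multi-MSB prediction or Huffman compression; the function names are illustrative): it doubles the sample density by mean interpolation and hides payload bits in the LSBs of the interpolated samples, leaving every original sample intact so recovery is exact.

```python
def interpolate_expand(samples):
    """Double the sample density by inserting the rounded mean between
    each pair of neighbours; the new samples are the embedding slots."""
    out = []
    for a, b in zip(samples, samples[1:]):
        out.extend([a, (a + b) // 2])
    out.append(samples[-1])
    return out

def embed_bits(expanded, bits):
    """Hide one bit in the LSB of every interpolated (odd-index) sample.
    Original samples sit at even indices and are untouched."""
    out = list(expanded)
    for i, b in zip(range(1, len(out), 2), bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_bits(stego, n):
    """Read the hidden bits back from the interpolated positions."""
    return [stego[i] & 1 for i in range(1, len(stego), 2)][:n]

def recover_original(stego):
    """Reversibility: drop the interpolated samples."""
    return stego[::2]
```

This is what makes the capacity scalable: every interpolation pass adds one embeddable sample per original pair, at the cost of a larger cover signal.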
4. Huffman-Code-Based Ternary Tree Transformation
Authors: Qing-Song Li, Huan-Yu Liu, Qingchun Wang, Yu-Chun Wu, Guo-Ping Guo. Chinese Physics Letters, 2025, Issue 10, pp. 1-12 (12 pages)
Using a quantum computer to simulate fermionic systems requires fermion-to-qubit transformations. Usually, a lower Pauli weight of a transformation means shallower quantum circuits. Therefore, most existing transformations aim for lower Pauli weight. However, in some cases, the circuit depth depends not only on the Pauli weight but also on the coefficients of the Hamiltonian terms. In order to characterize the circuit depth of these algorithms, we propose a new metric called weighted Pauli weight, which depends on the Pauli weight and the coefficients of the Hamiltonian terms. To achieve a smaller weighted Pauli weight, we introduce a novel transformation, the Huffman-code-based ternary tree (HTT) transformation, which is built upon the classical Huffman code and tailored to different Hamiltonians. We tested various molecular Hamiltonians and the results show that the weighted Pauli weight of the HTT transformation is smaller than that of commonly used mappings. At the same time, the HTT transformation also maintains a relatively small Pauli weight. The mapping we designed reduces the circuit depth of certain Hamiltonian simulation algorithms, facilitating faster simulation of fermionic systems.
Keywords: quantum computer; weighted Pauli weight; Huffman-code-based ternary tree transformation; fermionic system simulation; fermion-to-qubit transformation; circuit depth; Hamiltonian terms
5. A Bit-level Text Compression Scheme Based on the ACW Algorithm
Authors: Hussein Al-Bahadili, Shakir M. Hussain. International Journal of Automation and Computing, EI, 2010, Issue 1, pp. 123-131 (9 pages)
This paper presents a description and performance evaluation of a new bit-level, lossless, adaptive, and asymmetric data compression scheme that is based on the adaptive character wordlength (ACW(n)) algorithm. The proposed scheme enhances the compression ratio of the ACW(n) algorithm by dividing the binary sequence into a number of subsequences (s), each of them satisfying the condition that the number of decimal values (d) of the n-bit length characters is equal to or less than 256. Therefore, the new scheme is referred to as ACW(n, s), where n is the adaptive character wordlength and s is the number of subsequences. The new scheme was used to compress a number of text files from standard corpora. The obtained results demonstrate that the ACW(n, s) scheme achieves a higher compression ratio than many widely used compression algorithms and achieves competitive performance compared with state-of-the-art compression tools.
Keywords: data compression; bit-level text compression; ACW(n) algorithm; Huffman coding; adaptive coding
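The ACW(n, s) condition, that each subsequence contains at most 256 distinct n-bit character values, suggests a greedy split. The sketch below is one illustrative reading of the abstract, not the authors' implementation: cut the character stream whenever the next character would push the current subsequence past 256 distinct values.

```python
def acw_subsequences(bitstring, n):
    """Split a bit string into n-bit characters, then into subsequences
    that each contain at most 256 distinct character values."""
    chars = [bitstring[i:i + n] for i in range(0, len(bitstring), n)]
    subseqs, current, seen = [], [], set()
    for c in chars:
        if c not in seen and len(seen) == 256:
            subseqs.append(current)        # close the full subsequence
            current, seen = [], set()
        seen.add(c)
        current.append(c)
    if current:
        subseqs.append(current)
    return subseqs
```

Each subsequence can then be entropy-coded with a byte-sized alphabet regardless of how large n is, which is the point of the 256-value bound.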
6. Reversible Data Hiding in Encrypted Images Based on Adaptive Prediction and Labeling
Authors: Jiaohua Qin, Zhibin He, Xuyu Xiang, Neal N. Xiong. Computers, Materials & Continua, SCIE EI, 2022, Issue 11, pp. 3613-3628 (16 pages)
Recently, reversible data hiding in encrypted images (RDHEI) based on pixel prediction has been a hot topic. However, existing schemes still employ pixel predictors that ignore pixel changes in the diagonal direction during prediction, and their pixel labeling schemes are inflexible. To solve these problems, this paper proposes reversible data hiding in encrypted images based on adaptive prediction and labeling. First, we design an adaptive gradient prediction (AGP), which uses eight adjacent pixels and combines four scanning methods (horizontal, vertical, and two diagonal directions) for prediction. AGP can adaptively adjust the weights of the linear prediction model according to the edge attributes of the pixel, which improves the prediction ability of the predictor for complex images. At the same time, we adopt an adaptive Huffman coding labeling scheme, which can adaptively generate Huffman codes for labeling according to different images, effectively improving the scheme's embedding performance on the dataset. The experimental results show that the algorithm has a higher embedding rate. The embedding rate on the test image Jetplane is 4.2102 bpp, and the average embedding rate on the image dataset Bossbase is 3.8625 bpp.
Keywords: reversible data hiding; adaptive gradient prediction; Huffman coding; embedding capacity
7. Dynamic Reconfigurable Structure with Rate Distortion Optimization
Authors: Lin Jiang, Xueting Zhang, Rui Shan, Xiaoyan Xie, Xinchuang Liu, Feilong He. Journal of Harbin Institute of Technology (New Series), EI CAS, 2020, Issue 6, pp. 35-47 (13 pages)
The Rate Distortion Optimization (RDO) algorithm in High Efficiency Video Coding (HEVC) has many iterations and a large number of calculations. In order to decrease the calculation time and meet the requirements of fast switching between RDO algorithms of different scales, an RDO dynamic reconfigurable structure is proposed. First, the Quantization Parameter (QP) and bit rate values were loaded through an H-tree Configurable Network (HCN), and the execution status of the array was detected in real time. When a switching request of the RDO algorithm was detected, the corresponding configuration information was delivered. This self-reconfiguration implementation method improves the flexibility and utilization of the hardware. Experimental results show that when the control bit width was increased by only 31.25%, the designed configuration network could increase the number of controllable processing units by 32 times, and the execution cycle was 50% lower than that of the same type of design. Compared with previous RDO algorithms, the RDO algorithm implemented on the reconfigurable array based on the configuration network had an average operating frequency increase of 12.5% and an area reduction of 56.4%.
Keywords: dynamic reconfiguration; rate distortion optimization; Huffman-coding-like context switch; video processing
8. Implementation of an Efficient Light Weight Security Algorithm for Energy-Constrained Wireless Sensor Nodes
Authors: A. Saravanaselvan, B. Paramasivan. Circuits and Systems, 2016, Issue 9, pp. 2234-2241 (8 pages)
In-network data aggregation is severely affected by information-in-transit attacks. This is an important problem since wireless sensor networks (WSN) are highly vulnerable to node compromises under this attack. As a result, a large error arises in the aggregate computed at the base station due to false sub-aggregate values contributed by compromised nodes. Falsified event messages forwarded through intermediate nodes also waste their limited energy. Since wireless sensor nodes are battery operated, they have low computational power and energy. In view of this, the algorithms designed for wireless sensor nodes should extend the lifetime, use less computation, and enhance security so as to extend the network lifetime. This article presents a Vernam cipher cryptographic technique based data compression algorithm using the Huffman source coding scheme in order to enhance the security and lifetime of energy-constrained wireless sensor nodes. In addition, this scheme is evaluated on different processor-based sensor node implementations and the results are compared against other existing schemes. In particular, we present a secure lightweight algorithm for wireless sensor nodes which consumes less energy for its operation. Using this, the entropy improvement is achieved to a great extent.
Keywords: in-network data aggregation; security attacks; Vernam cipher cryptographic technique; Huffman source coding; entropy
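The Vernam cipher half of the scheme is attractive for constrained nodes because encryption and decryption are the same single-XOR-per-byte operation. The sketch below is my illustration, omitting the Huffman compression stage the paper pairs with it: the cipher itself, plus a Shannon-entropy gauge of the kind used to measure the claimed entropy improvement.

```python
import math
from collections import Counter

def vernam(data: bytes, key: bytes) -> bytes:
    """Vernam cipher: XOR each data byte with the key byte at the same
    index. XOR is self-inverse, so the same call encrypts and decrypts."""
    if len(key) < len(data):
        raise ValueError("key must be at least as long as the data")
    return bytes(d ^ k for d, k in zip(data, key))

def entropy_bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())
```

Compressing with a Huffman source coder first shrinks the number of bytes to encrypt, and with a suitably random key the ciphertext's byte distribution approaches maximum entropy.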
9. A Hybrid Compression Method for Compound Power Quality Disturbance Signals in Active Distribution Networks (cited by: 1)
Authors: Xiangui Xiao, Kaicheng Li, Chen Zhao. Journal of Modern Power Systems and Clean Energy, SCIE EI CSCD, 2023, Issue 6, pp. 1902-1911 (10 pages)
In the compression of massive compound power quality disturbance (PQD) signals in active distribution networks, the compression ratio (CR) and reconstruction error (RE) act as a pair of contradictory indicators, and traditional compression algorithms have difficulty simultaneously satisfying a high CR and a low RE. To improve the CR and reduce the RE, a hybrid compression method that combines a strong tracking Kalman filter (STKF), sparse decomposition, Huffman coding, and run-length coding is proposed in this study. This study first uses a sparse decomposition algorithm based on a joint dictionary to separate the transient component (TC) and the steady-state component (SSC) in the PQD. The TC is then compressed by wavelet analysis and by Huffman and run-length coding algorithms. For the SSC, values that are greater than a threshold are reserved, and the compression is finally completed. In addition, the threshold of the wavelet depends on the fading factor of the STKF to obtain a high CR. Experimental results on real-life signals measured by fault recorders in a dynamic simulation laboratory show that the CR of the proposed method reaches as high as 50 and the RE is approximately 1.6%, which are better than those of competing methods. These results also demonstrate the immunity of the proposed method to interference from Gaussian noise and to the sampling frequency.
Keywords: signal compression; power quality disturbance; Huffman coding; run-length coding; wavelet analysis; sparse decomposition
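Run-length coding earns its place in this hybrid pipeline because thresholded wavelet coefficients contain long zero runs. A minimal encoder/decoder pair, as an illustrative sketch rather than the paper's implementation:

```python
def rle_encode(values):
    """Collapse consecutive repeats into [value, count] runs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1               # extend the current run
        else:
            runs.append([v, 1])            # start a new run
    return runs

def rle_decode(runs):
    """Exact inverse of rle_encode: expand each run back out."""
    return [v for v, n in runs for _ in range(n)]
```

On sparse coefficient streams dominated by zeros, the run list is far shorter than the input, and the run values/counts can then be Huffman-coded for a further gain.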