Journal Articles
19 articles found
1. Quantitative Comparative Study of the Performance of Lossless Compression Methods Based on a Text Data Model
Authors: Namogo Silué, Sié Ouattara, Mouhamadou Dosso, Alain Clément. Open Journal of Applied Sciences, 2024, Issue 7, pp. 1944-1962 (19 pages).
Data compression plays a key role in optimizing the use of memory storage space and also reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited with lossy compression techniques for images and videos, generally using a mixed approach. To achieve our intended objective, which is to study the performance of lossless compression methods, we first carried out a literature review, a summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on relevant data that we constructed according to a deliberate model, the results show that these methods, listed in order of performance, are very satisfactory: LZW, arithmetic coding, Tunstall's algorithm, and BWT + RLE. Likewise, it appears that, on the one hand, the performance of certain techniques relative to others is strongly linked to the sequencing and/or recurrence of symbols that make up the message, and, on the other hand, to the cumulative time of encoding and decoding.
Keywords: arithmetic coding, BWT, compression ratio, comparative study, compression techniques, Shannon-Fano, Huffman, lossless compression, LZW, performance, redundancy, RLE, text data, Tunstall
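As a companion to this entry, a minimal sketch (in Python rather than the authors' Matlab) of the kind of experiment described: run-length encoding a deliberately repetitive text and reporting the compression ratio. The pattern string, the rough byte-size estimate and the rle_encode/rle_decode helpers are illustrative assumptions, not the paper's code.

```python
# Illustrative only: a tiny run-length encoder applied to a repetitive text,
# reporting the compression ratio as original size / encoded size.
def rle_encode(text: str) -> list[tuple[str, int]]:
    runs = []
    for ch in text:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def rle_decode(runs: list[tuple[str, int]]) -> str:
    return "".join(ch * n for ch, n in runs)

if __name__ == "__main__":
    message = "AAAABBBCCD" * 1000               # deliberately repetitive pattern
    runs = rle_encode(message)
    encoded_size = 2 * len(runs)                 # assume 1 byte per symbol + 1 per run length
    ratio = len(message) / encoded_size
    assert rle_decode(runs) == message           # lossless round trip
    print(f"original: {len(message)} B, encoded: {encoded_size} B, ratio: {ratio:.1f}")
```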
2. PC-bzip2: a phase-space continuity-enhanced lossless compression algorithm for light-field microscopy data
Authors: Changqing Su, Zihan Lin, You Zhou, Shuai Wang, Yuhan Gao, Chenggang Yan, Bo Xiong. Advanced Photonics Nexus, 2024, Issue 3, pp. 40-49 (10 pages).
Light-field fluorescence microscopy (LFM) is a powerful, elegant and compact method for long-term high-speed imaging of complex biological systems, such as neuron activities and rapid movements of organelles. LFM experiments typically generate terabytes of image data and require a substantial amount of storage space. Some lossy compression algorithms have been proposed recently with good compression performance. However, since the specimen usually only tolerates low-power-density illumination for long-term imaging with low phototoxicity, the image signal-to-noise ratio (SNR) is relatively low, which causes the loss of useful position or intensity information when such lossy compression algorithms are used. Here, we propose a phase-space continuity-enhanced bzip2 (PC-bzip2) lossless compression method for LFM data as a high-efficiency, open-source tool that combines GPU-based fast entropy judgment with multicore-CPU-based high-speed lossless compression. Our proposed method achieves almost 10% compression-ratio improvement while retaining high-speed compression, compared with the original bzip2. We evaluated our method on fluorescence bead data and fluorescence-stained cell data with different SNRs. Moreover, by introducing temporal continuity, our method shows a superior compression ratio on time-series data of zebrafish blood vessels.
Keywords: light-field microscopy, lossless compression, phase space, entropy judgment
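The paper pairs a GPU-based entropy judgment with multicore bzip2; the sketch below illustrates the underlying idea in plain Python, storing near-random blocks as-is and bzip2-compressing the rest. The block_entropy helper, the 7.5-bit threshold and the synthetic frame are assumptions for illustration, not the PC-bzip2 implementation.

```python
import bz2
import numpy as np

def block_entropy(block: np.ndarray) -> float:
    """Shannon entropy (bits/symbol) of a uint8 block."""
    counts = np.bincount(block.ravel(), minlength=256)
    p = counts[counts > 0] / block.size
    return float(-(p * np.log2(p)).sum())

def compress_block(block: np.ndarray, entropy_threshold: float = 7.5) -> bytes:
    """Skip bzip2 for near-random blocks; compress the rest losslessly."""
    raw = block.tobytes()
    if block_entropy(block) >= entropy_threshold:
        return b"R" + raw                     # stored: entropy too high to benefit
    return b"C" + bz2.compress(raw, 9)        # compressed losslessly

# Example: a low-SNR-like frame (smooth signal plus mild noise).
rng = np.random.default_rng(0)
ramp = np.linspace(0, 50, 256 * 256).reshape(256, 256)
frame = np.clip(ramp + rng.normal(0, 2, (256, 256)), 0, 255).astype(np.uint8)
out = compress_block(frame)
print(len(frame.tobytes()), "->", len(out), "bytes")
```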
3. A LOSSLESS COMPRESSION ALGORITHM OF REMOTE SENSING IMAGE FOR SPACE APPLICATIONS (Cited by 3)
Authors: Sui Yuping, Yang Chengyu, Liu Yanjun, Wang Jun, Wei Zhonghui, He Xin. Journal of Electronics (China), 2008, Issue 5, pp. 647-651 (5 pages).
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, which includes an integer wavelet transform and the Rice entropy coder. By analyzing the probability distribution of integer wavelet transform coefficients and the characteristics of the Rice entropy coder, a divide-and-rule strategy is applied to the high-frequency sub-bands and the low-frequency sub-band. The high-frequency sub-bands are coded directly by the Rice entropy coder, while the low-frequency coefficients are predicted before coding. The role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average compression ratio (CR) of our approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so it is well suited to lossless compression applications in space.
Keywords: remote sensing image, lossless compression, Rice entropy coder, integer discrete wavelet transform (DWT)
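The high-frequency sub-bands here are coded with a Rice entropy coder; below is a minimal sketch of Rice (power-of-two Golomb) coding of mapped residuals, with a crude per-block choice of the parameter k. The zigzag mapping and the k-selection rule are generic textbook choices, not necessarily those used in the paper.

```python
def zigzag_map(x: int) -> int:
    """Map signed residuals to non-negative integers (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...)."""
    return (x << 1) if x >= 0 else ((-x << 1) - 1)

def rice_encode(values, k: int) -> str:
    """Rice code with parameter k: unary-coded quotient, then k remainder bits."""
    out = []
    for v in values:
        u = zigzag_map(v)
        q, r = u >> k, u & ((1 << k) - 1)
        out.append("1" * q + "0" + (format(r, f"0{k}b") if k else ""))
    return "".join(out)

residuals = [0, -1, 2, 0, 1, -3, 0, 0, 5, -2]
mean_mag = sum(zigzag_map(v) for v in residuals) / len(residuals)
k = max(0, int(mean_mag).bit_length() - 1)      # crude per-block parameter choice
code = rice_encode(residuals, k)
print(f"k={k}, {len(code)} bits for {len(residuals)} residuals")
```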
4. A Global-Scale Image Lossless Compression Method Based on QTM Pixels (Cited by 1)
Authors: SUN Wen-bin, ZHAO Xue-sheng. Journal of China University of Mining and Technology, 2006, Issue 4, pp. 466-469 (4 pages).
In this paper, a new predictive model, adapted to QTM (Quaternary Triangular Mesh) pixel compression, is introduced. Our approach starts with the principles of the proposed predictive models based on available QTM neighbor pixels. An algorithm for ascertaining available QTM neighbors is also proposed. Then, a method for reducing space complexity in the procedure of predicting QTM pixel values is presented. Next, a structure for storing compressed QTM pixels is proposed. Finally, an experiment comparing the compression ratio of this method with other methods is carried out using three wave bands of 1-km-resolution NOAA images over China. The results indicate that: 1) the compression method performs better than the others, such as run-length coding, arithmetic coding and Huffman coding; 2) the average size of the compressed three-wave-band data based on the neighbor-QTM-pixel predictive model is 31.58% of the original space requirement and 67.5% of that of arithmetic coding without a predictive model.
Keywords: Quaternary Triangular Mesh, lossless compression, predictive model, image entropy
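To illustrate the predict-then-code idea behind the QTM neighbor model, a sketch on an ordinary raster grid follows: each pixel is predicted from the mean of its already-decoded neighbors and only the residual would be entropy-coded. The causal-neighbor set used here is a hypothetical stand-in for the QTM adjacency the paper actually derives.

```python
import numpy as np

def predict_residuals(img: np.ndarray) -> np.ndarray:
    """Predict each pixel from its already-decoded (causal) neighbors and
    return prediction residuals, which are cheaper to entropy-code."""
    h, w = img.shape
    res = np.zeros_like(img, dtype=np.int32)
    for y in range(h):
        for x in range(w):
            neighbors = []
            if x > 0:            neighbors.append(int(img[y, x - 1]))      # west
            if y > 0:            neighbors.append(int(img[y - 1, x]))      # north
            if x > 0 and y > 0:  neighbors.append(int(img[y - 1, x - 1]))  # north-west
            pred = int(round(sum(neighbors) / len(neighbors))) if neighbors else 0
            res[y, x] = int(img[y, x]) - pred
    return res

img = np.tile(np.arange(16, dtype=np.uint8), (16, 1))   # smooth toy image
res = predict_residuals(img)
print("residual range:", res.min(), "to", res.max())
```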
5. Novel Lossless Compression Method Based on the Fourier Transform to Approximate the Kolmogorov Complexity of Elementary Cellular Automata
Authors: Mohammed Terry-Jack. Journal of Software Engineering and Applications, 2022, Issue 10, pp. 359-383 (25 pages).
We propose a novel lossless compression algorithm, based on the 2D discrete fast Fourier transform, to approximate the algorithmic (Kolmogorov) complexity of elementary cellular automata. Fast Fourier transforms are widely used in image compression, but their lossy nature excludes them as viable candidates for Kolmogorov complexity approximations. For the first time, we present a way to adapt Fourier transforms for lossless image compression. The proposed method has a very strong Pearson correlation with existing complexity metrics, and we further establish its consistency as a complexity metric by confirming that its measurements never exceed the complexity of nothingness and randomness (representing the lower and upper limits of complexity). Surprisingly, many of the other methods tested fail this simple sanity check. A final symmetry-based test also demonstrates our method's superiority over existing lossless compression metrics. All complexity metrics tested, as well as the code used to generate and augment the original dataset, can be found in our GitHub repository: ECA complexity metrics.
Keywords: fast Fourier transform, lossless compression, elementary cellular automata, algorithmic information theory, Kolmogorov complexity
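The paper's own metric adapts the Fourier transform for lossless coding; the sketch below shows only the standard baseline such work is compared against, approximating Kolmogorov complexity by the losslessly compressed size (here zlib) of an elementary-cellular-automaton space-time diagram. Rules 0, 110 and 30 are chosen to span "nothingness", structure and apparent randomness; all other choices are illustrative.

```python
import zlib
import numpy as np

def eca_spacetime(rule: int, width: int = 128, steps: int = 128) -> np.ndarray:
    """Evolve an elementary cellular automaton from a single seed cell."""
    table = [(rule >> i) & 1 for i in range(8)]
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1
    grid = [row]
    for _ in range(steps - 1):
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = np.array([table[(l << 2) | (c << 1) | r]
                        for l, c, r in zip(left, row, right)], dtype=np.uint8)
        grid.append(row)
    return np.array(grid)

def compressed_size(grid: np.ndarray) -> int:
    """Proxy for Kolmogorov complexity: bytes after lossless compression."""
    return len(zlib.compress(np.packbits(grid).tobytes(), 9))

for rule in (0, 110, 30):   # nothingness, structured, random-looking
    print(f"rule {rule}: {compressed_size(eca_spacetime(rule))} bytes")
```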
6. Four-dimensional matrix Walsh transform for lossless compression of color video (Cited by 3)
Authors: LI Yu, CHEN He-xin, SANG Ai-jun, FENG Hua. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2010, Issue 3, pp. 123-128 (6 pages).
This article presents a coding method for the lossless compression of color video. In the proposed method, a four-dimensional matrix Walsh transform (4D-M-Walsh-T) is used for color video coding. The whole of the n frames of a color video sequence is divided into "3D blocks" whose components are the image width (row component), the image height (column component), the image width (vertical component) of the color video sequence, and the adjacency (depth component) of the n frames (Y, U or V) of the video sequence. Similar to the 2D Walsh transform, the 4D-M-Walsh-T consists of 4D sub-matrices, and the size of each sub-matrix is n. The method can fully exploit correlations for lossless coding and reduce the redundancy of color video, such as that between adjacent pixels within one frame or across different frames at the same time. Experimental results show that the proposed method achieves a higher lossless compression ratio (CR) for color video sequences.
Keywords: 4D-M-Walsh-T, lossless compression, compression ratio
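A sketch of the 1D building block of this approach: an integer ±1 Walsh-Hadamard matrix built by the Sylvester recursion, applied to a short signal and inverted exactly. Extending this to the paper's four-dimensional block transform over n frames is not attempted here; the signal and block size are illustrative.

```python
import numpy as np

def walsh_hadamard(n: int) -> np.ndarray:
    """Hadamard matrix of order n (n must be a power of two), integer entries +/-1."""
    h = np.array([[1]], dtype=np.int64)
    while h.shape[0] < n:
        h = np.block([[h, h], [h, -h]])
    return h

# Transform a length-8 signal; since H @ H = n * I, the forward/inverse pair
# is exact over the integers once the final division by n is applied.
h = walsh_hadamard(8)
x = np.array([3, 3, 3, 3, 7, 7, 7, 7], dtype=np.int64)
coeffs = h @ x
recovered = (h @ coeffs) // 8
print("coefficients:", coeffs.tolist())
print("recovered:   ", recovered.tolist())
```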
7. Fast lossless color image compression method using perceptron
Authors: Jia Kebin, Zhang Yanhua, Zhuang Xinyue. Journal of Systems Engineering and Electronics (SCIE, EI, CSCD), 2004, Issue 2, pp. 190-196 (7 pages).
The technique of lossless image compression plays an important role in high-quality image transmission and storage. At present, both the compression ratio and the processing speed must be considered in a real-time multimedia system. A novel lossless compression algorithm is investigated. A low-complexity predictive model is proposed that uses the correlation between pixels and between color components. Meanwhile, a perceptron (a simple neural network) is used to rectify the prediction values adaptively, which makes the prediction residuals smaller and confines them to a small dynamic range. A color space transform is also used and gives good decorrelation in our algorithm. Comparative experimental results show that our algorithm performs noticeably better than traditional algorithms. Compared with the new standard JPEG-LS, this predictive model has lower computational complexity, and it is faster than JPEG-LS with negligible performance sacrifice.
Keywords: lossless compression, perceptron, prediction model, correlation
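A rough sketch of perceptron-corrected prediction: a fixed base predictor plus a single linear unit whose weights are updated online (LMS-style) from the prediction error, with the aim of shrinking the residuals. The learning rate, context pixels and base predictor are illustrative assumptions, not the paper's design.

```python
import numpy as np

def perceptron_corrected_residuals(img: np.ndarray, lr: float = 1e-4) -> np.ndarray:
    """Base predictor = west neighbor; a tiny linear unit learns an additive
    correction from the causal neighbors via an LMS-style online update."""
    h, w = img.shape
    weights = np.zeros(3)                        # west, north, north-west
    res = np.zeros_like(img, dtype=np.int32)
    for y in range(1, h):
        for x in range(1, w):
            ctx = np.array([img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]], dtype=float)
            base = ctx[0]                        # simple base prediction
            correction = float(weights @ ctx)
            pred = int(round(base + correction))
            err = int(img[y, x]) - pred
            res[y, x] = err
            weights += lr * err * ctx            # online correction of the predictor
    return res

img = np.add.outer(np.arange(64), np.arange(64)).astype(np.uint8)   # toy gradient image
print("max |residual|:", np.abs(perceptron_corrected_residuals(img)).max())
```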
8. Medical image lossless compression based on combining an integer wavelet transform with DPCM
Authors: Lihong ZHAO, Yanan TIAN, Yonggang SHA, Jinghua LI. Frontiers of Electrical and Electronic Engineering in China (CSCD), 2009, Issue 1, pp. 1-4 (4 pages).
To overcome the low efficiency of classical lossless compression, a highly efficient method for lossless image compression is presented. Its theory and algorithm implementation are introduced, and the basic approach to medical image lossless compression is briefly described. After analyzing and implementing differential pulse code modulation (DPCM) in lossless compression, a new method that combines an integer wavelet transform with DPCM to compress medical images is discussed. The analysis and simulation results show that this new method is simple and useful; moreover, it achieves a high compression ratio in medical image lossless compression.
Keywords: medical image, integer wavelet transform, differential pulse code modulation (DPCM), lossless compression
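The DPCM half of the pipeline is easy to sketch: keep the first sample, store successive differences, and invert by cumulative summation (an integer-lifting wavelet sketch appears under the seismic-data entry below). The toy row of samples is illustrative only.

```python
import numpy as np

def dpcm_encode(signal: np.ndarray) -> np.ndarray:
    """Differential PCM: keep the first sample, then store successive differences."""
    diffs = np.diff(signal.astype(np.int32))
    return np.concatenate(([int(signal[0])], diffs))

def dpcm_decode(code: np.ndarray) -> np.ndarray:
    """Exact inverse: cumulative sum restores the original samples."""
    return np.cumsum(code)

row = np.array([100, 101, 103, 103, 102, 100, 99, 99], dtype=np.uint8)
code = dpcm_encode(row)
assert np.array_equal(dpcm_decode(code), row.astype(np.int32))   # lossless round trip
print("samples:       ", row.tolist())
print("DPCM residuals:", code.tolist())
```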
9. Separate Source Channel Coding Is Still What You Need: An LLM-Based Rethinking (Cited by 3)
Authors: REN Tianqi, LI Rongpeng, ZHAO Mingmin, CHEN Xianfu, LIU Guangyi, YANG Yang, ZHAO Zhifeng, ZHANG Honggang. ZTE Communications, 2025, Issue 1, pp. 30-44 (15 pages).
Along with the proliferating research interest in semantic communication (SemCom), joint source-channel coding (JSCC) has dominated attention due to its widely assumed efficiency in delivering information semantics. Nevertheless, this paper challenges the conventional JSCC paradigm and advocates adopting separate source-channel coding (SSCC) to enjoy a greater underlying degree of freedom for optimization. We demonstrate that SSCC, after leveraging the strengths of a large language model (LLM) for source coding complemented by an Error Correction Code Transformer (ECCT) for channel coding, offers superior performance over JSCC. Our proposed framework also effectively highlights the compatibility challenges between SemCom approaches and digital communication systems, particularly concerning the resource costs associated with the transmission of high-precision floating-point numbers. Through comprehensive evaluations, we establish that, assisted by LLM-based compression and ECCT-enhanced error correction, SSCC remains a viable and effective solution for modern communication systems. In other words, separate source-channel coding is still what we need.
Keywords: separate source-channel coding (SSCC), joint source-channel coding (JSCC), end-to-end communication system, large language model (LLM), lossless text compression, Error Correction Code Transformer (ECCT)
10. Seismic data compression based on integer wavelet transform (Cited by 1)
Authors: WANG Xi-zhen (王喜珍), TENG Yun-tian (滕云田), GAO Meng-tan (高孟潭), JIANG Hui (姜慧). Acta Seismologica Sinica (English Edition) (CSCD), 2004, Supplement 1, pp. 123-128 (6 pages).
Due to the particular characteristics of seismic data, in some cases they must be treated with a lossless compression algorithm. In this paper, a lossless compression algorithm based on the integer wavelet transform is studied. Compared with traditional algorithms, it achieves a better compression ratio. The CDF(2, n) biorthogonal wavelet family leads to a better compression ratio than other CDF families, SWE and CRF, owing to its capability of canceling data redundancies and capturing data characteristics. The CDF(2, n) family is therefore suitable as the wavelet function for lossless compression of seismic data.
Keywords: lossless compression, integer wavelet transform, lifting scheme, biorthogonal wavelet
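A sketch of one level of the reversible integer CDF(2,2) (5/3) lifting transform named here, with its exact inverse. The boundary extension used below is a simplification, and the paper's full CDF(2, n) family and multi-level decomposition are not reproduced.

```python
import numpy as np

def lift_53_forward(x: np.ndarray):
    """One level of the reversible integer 5/3 (CDF(2,2)) lifting transform."""
    x = x.astype(np.int64)
    even, odd = x[0::2].copy(), x[1::2].copy()
    # Predict: detail = odd - floor((left_even + right_even) / 2)
    right = np.append(even[1:], even[-1])            # simple boundary extension
    d = odd - ((even + right) >> 1)
    # Update: approx = even + floor((left_detail + detail + 2) / 4)
    left_d = np.insert(d[:-1], 0, d[0])
    a = even + ((left_d + d + 2) >> 2)
    return a, d

def lift_53_inverse(a: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Undo the lifting steps in reverse order; reconstruction is exact."""
    left_d = np.insert(d[:-1], 0, d[0])
    even = a - ((left_d + d + 2) >> 2)
    right = np.append(even[1:], even[-1])
    odd = d + ((even + right) >> 1)
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([10, 12, 15, 15, 14, 11, 9, 9], dtype=np.int64)
a, d = lift_53_forward(x)
assert np.array_equal(lift_53_inverse(a, d), x)      # perfect (lossless) reconstruction
print("approx:", a.tolist(), "detail:", d.tolist())
```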
11. An Approach to Integer Wavelet Transform for Medical Image Compression in PACS
Authors: YANG Yan, ZHANG Dong. Wuhan University Journal of Natural Sciences (CAS), 2000, Issue 2, pp. 204-206 (3 pages).
We study an approach to the integer wavelet transform for lossless compression of medical images in a medical picture archiving and communication system (PACS). Using the lifting scheme, a reversible integer wavelet transform is generated, which has features similar to those of the corresponding biorthogonal wavelet transform. Experimental results of the method based on the integer wavelet transform are given to show its better performance and great application potential in medical image compression.
Keywords: integer wavelet transform, lifting scheme, lossless compression, PACS
12. SAR Image Compression Using Integer to Integer Transformations, Dimensionality Reduction, and High Correlation Modeling
Authors: Sergey Voronin. Journal of Computer and Communications, 2022, Issue 2, pp. 19-32 (14 pages).
In this document, we present new techniques for near-lossless and lossy compression of SAR imagery saved in PNG and binary formats of magnitude and phase data, based on the application of transforms, dimensionality reduction methods, and lossless compression. In particular, we discuss the use of blockwise integer-to-integer transforms, the subsequent application of a dimensionality reduction method, and Burrows-Wheeler-based lossless compression for the PNG data, as well as high-correlation-based modeling of sorted transform coefficients for the raw floating-point magnitude and phase data. The gains exhibited are substantial over the application of different lossless methods directly to the data, and competitive with existing lossy approaches. The presented methods are effective for large-scale processing of similar data formats, as they rely heavily on techniques that scale well on parallel architectures.
Keywords: SAR imagery, integer-to-integer transforms, dimensionality reduction, high correlation modeling, lossy and lossless compression
13. Fast lossless images compression for synchrotron radiation facility using deep learning and hybrid architecture
Authors: Zhang Min-xing, Fu Shi-yuan, Gao Yu, Cheng Yao-dong, Abdulhafiz Ahmed Mustofa, Chen Gang. Radiation Detection Technology and Methods (CSCD), 2024, Issue 4, pp. 1693-1703 (11 pages).
Purpose: The rapid growth in image data generated by high-energy photon sources poses significant challenges for storage and analysis, with conventional compression methods offering compression ratios often below 1.5. Methods: This study introduces a novel, fast lossless compression method that combines deep learning with a hybrid computing architecture to overcome existing compression limitations, employing a spatiotemporal learning network for predictive pixel-value estimation and a residual quantization algorithm for efficient encoding. Results: When benchmarked against the DeepZip algorithm, our approach demonstrates a 40% reduction in compression time while maintaining comparable compression ratios using identical computational resources. The implementation of a GPU+CPU+FPGA hybrid architecture further accelerates compression, reducing time by an additional 38%. Conclusions: This study presents an innovative solution for efficiently storing and managing large-scale image data from synchrotron radiation facilities, harnessing the power of deep learning and advanced computing architectures.
Keywords: lossless compression, deep learning, heterogeneous architecture, synchrotron radiation image
14. Compact and indexed representation for LiDAR point clouds
Authors: Susana Ladra, Miguel R. Luaces, José R. Paramá, Fernando Silva-Coira. Geo-Spatial Information Science (CSCD), 2024, Issue 4, pp. 1035-1070 (36 pages).
LiDAR devices are capable of acquiring clouds of 3D points reflecting any object around them, and of adding additional attributes to each point, such as color, position, time, etc. LiDAR datasets are usually large, and compressed data formats (e.g. LAZ) have been proposed over the years. These formats are capable of transparently decompressing portions of the data, but they are not designed to answer general queries over the data. In contrast to that traditional approach, a recent line of research focuses on designing data structures that combine compression and indexation, allowing the compressed data to be queried directly. Compression is used to keep the data structure in main memory at all times, thus avoiding disk accesses, and indexation is used to query the compressed data as fast as the uncompressed data. In this paper, we present the first data structure capable of losslessly compressing point clouds that have attributes and of jointly indexing all three spatial dimensions together with the attribute values. Our method is able to run range queries and attribute queries up to 100 times faster than previous methods.
Keywords: 3D point clouds, lossless compression, indexing
15. A multi-predictor based lossless ARGB texture compression algorithm and FPGA implementation
Authors: Handong Mo. Advances in Engineering Innovation, 2024, Issue 4, pp. 62-68 (7 pages).
In the fields of GPU and AI chip design, the frequent read and write operations on color buffer data (ARGB), which are intensive in graphics and image access, significantly impact performance, and some applications additionally require random access and read small images only once. To address this situation, this paper proposes an algorithm with lower modeling complexity that nevertheless achieves results close to those of more complex implementations, along with its FPGA implementation method. In tests on multiple images, the average lossless compression rate reached 40.3%. With hardware acceleration, the execution efficiency of the algorithm is further improved, ensuring both compression rate and speed and confirming the effectiveness of the algorithm.
Keywords: lossless compression, image compression, texture compression, FPGA
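A sketch of the per-block predictor selection that multi-predictor texture codecs of this kind rely on: try a few cheap predictors on a block and keep the one with the smallest residual cost. The predictor set, cost measure and boundary handling are illustrative assumptions, not the paper's algorithm or its FPGA mapping.

```python
import numpy as np

PREDICTORS = {
    "zero":  lambda blk: np.zeros_like(blk),
    "west":  lambda blk: np.pad(blk, ((0, 0), (1, 0)), mode="edge")[:, :-1],
    "north": lambda blk: np.pad(blk, ((1, 0), (0, 0)), mode="edge")[:-1, :],
}

def best_predictor(block: np.ndarray):
    """Try each predictor on a block (one ARGB channel) and keep the one whose
    residuals have the smallest sum of absolute values - a cheap proxy for the
    bits an entropy coder would spend. Boundary pixels predict themselves here;
    a real codec handles block borders explicitly."""
    block = block.astype(np.int32)
    best = None
    for name, fn in PREDICTORS.items():
        residual = block - fn(block).astype(np.int32)
        cost = int(np.abs(residual).sum())
        if best is None or cost < best[2]:
            best = (name, residual, cost)
    return best

channel = np.tile(np.arange(8, dtype=np.uint8) * 16, (8, 1))   # toy ramp block
name, residual, cost = best_predictor(channel)
print("chosen predictor:", name, "| residual cost:", cost)
```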
16. A survey and benchmark evaluation for neural-network-based lossless universal compressors toward multi-source data
Authors: Hui SUN, Huidong MA, Feng LING, Haonan XIE, Yongxia SUN, Liping YI, Meng YAN, Cheng ZHONG, Xiaoguang LIU, Gang WANG. Frontiers of Computer Science, 2025, Issue 7, pp. 79-94 (16 pages).
As various types of data grow explosively, large-scale data storage, backup, and transmission become challenging, which motivates many researchers to propose efficient universal compression algorithms for multi-source data. In recent years, thanks to the emergence of hardware acceleration devices such as GPUs, TPUs, DPUs, and FPGAs, the performance bottleneck of neural networks (NN) has been overcome, making NN-based compression algorithms increasingly practical and popular. However, no research survey of NN-based universal lossless compressors has been conducted yet, and there is also a lack of unified evaluation metrics. To address these problems, this paper presents a holistic survey as well as benchmark evaluations. Specifically, i) we thoroughly investigate NN-based lossless universal compression algorithms for multi-source data and classify them into three types: static pre-training, adaptive, and semi-adaptive; ii) we unify 19 evaluation metrics to comprehensively assess the compression effect, resource consumption, and model performance of compressors; iii) we run more than 4600 CPU/GPU hours of experiments to evaluate 17 state-of-the-art compressors on 28 real-world datasets spanning text, images, videos, audio, and other data types; iv) we also summarize the strengths and drawbacks of NN-based lossless data compressors and discuss promising research directions. We summarize the results as the NN-based Lossless Compressors Benchmark (NNLCB, see the fahaihi.github.io/NNLCB website), which will be updated and maintained continuously in the future.
Keywords: lossless compression, benchmark evaluation, universal compressors, neural networks, deep learning
17. Reversible Natural Language Watermarking Using Synonym Substitution and Arithmetic Coding (Cited by 8)
Authors: Lingyun Xiang, Yan Li, Wei Hao, Peng Yang, Xiaobo Shen. Computers, Materials & Continua (SCIE, EI), 2018, Issue 6, pp. 541-559 (19 pages).
To protect the copyright of a text and recover its original content harmlessly, this paper proposes a novel reversible natural language watermarking method that combines arithmetic coding and synonym substitution operations. By analyzing the relative frequencies of synonymous words, the synonyms employed for carrying the payload are quantized into an unbalanced and redundant binary sequence. The quantized binary sequence is compressed losslessly by adaptive binary arithmetic coding to create spare room for accommodating additional data. Then, the compressed data appended with the watermark are embedded into the cover text via synonym substitutions in an invertible manner. On the receiver side, the watermark and compressed data can be extracted by decoding the values of the synonyms in the watermarked text, after which the original text can be perfectly recovered by decompressing the extracted compressed data and substituting the replaced synonyms with their original synonyms. Experimental results demonstrate that the proposed method can extract the watermark successfully and achieve a lossless recovery of the original text; additionally, it achieves a high embedding capacity.
Keywords: arithmetic coding, synonym substitution, lossless compression, reversible watermarking
18. Big data compression processing and verification based on Hive for smart substation (Cited by 3)
Authors: Zhijian QU, Ge CHEN. Journal of Modern Power Systems and Clean Energy (SCIE, EI), 2015, Issue 3, pp. 440-446 (7 pages).
The capacity and scale of smart substations are expanding constantly and, with the digitization and automation of information, the volume of data is growing rapidly. Aiming at the existing shortcomings in big data processing, query and analysis for smart substations, a data compression processing method based on Hive is proposed. Experimental results show that the compression ratio and query time of the RCFile storage format are better than those of TextFile and SequenceFile, and that query efficiency is improved for data compressed with the Deflate, Gzip and Lzo compression formats. The results verify the correctness of adjacent speedup, defined as an index of cluster efficiency, and prove that the method has significant theoretical and practical value for big data processing in smart substations.
Keywords: Hive, smart substation, lossless compression
19. Clustering and presorting for parallel Burrows-Wheeler-based compression
Authors: Sergey Voronin, Eugene Borovikov, Raqibul Hasan. International Journal of Modeling, Simulation, and Scientific Computing (EI), 2021, Issue 6, pp. 75-88 (14 pages).
We describe practical improvements to parallel BWT-based lossless compressors frequently used in modern big data applications. We propose a clustering-based data permutation approach that improves the compression ratio for data with significant alphabet variation, along with a faster string sorting approach based on the application of the O(n)-complexity counting sort with permutation reindexing.
Keywords: lossless data compression, Burrows-Wheeler transform, data permutation, fast string sorting
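For reference, a naive illustration of the Burrows-Wheeler transform these compressors build on, via sorted rotations and the standard table-based inverse. Production BWT compressors use suffix-array construction and, as proposed here, counting-sort-based presorting, none of which this quadratic sketch attempts; the sentinel character and example string are illustrative.

```python
def bwt(text: str, sentinel: str = "\x00") -> str:
    """Burrows-Wheeler transform via sorted rotations (illustrative, not scalable)."""
    s = text + sentinel                      # unique terminator simplifies inversion
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last_col: str, sentinel: str = "\x00") -> str:
    """Standard table reconstruction: repeatedly prepend the last column and sort."""
    table = [""] * len(last_col)
    for _ in range(len(last_col)):
        table = sorted(last_col[i] + table[i] for i in range(len(last_col)))
    row = next(r for r in table if r.endswith(sentinel))
    return row[:-1]

msg = "banana_bandana"
transformed = bwt(msg)
assert inverse_bwt(transformed) == msg       # lossless round trip
print(repr(transformed))   # runs of equal characters make the output easier to compress
```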