First, a simple and practical rectangular transform is given, and then the vector quantization technique, which has been developing rapidly in recent years, is introduced. We combine the rectangular transform with the vector quantization technique for image data compression. The combination cuts down the dimensionality of vector coding, so the size of the codebook can reasonably be reduced. This method reduces the computational complexity and speeds up the vector coding process. Experiments on an image processing system show that this method is very effective for image data compression.
Many classical encoding algorithms for vector quantization (VQ) in image compression that can obtain the globally optimal solution have computational complexity O(N). A pure quantum VQ encoding algorithm with a probability of success near 100% has been proposed, which performs approximately 45√N operations. In this paper, a hybrid quantum VQ encoding algorithm combining the classical method and the quantum algorithm is presented. The number of its operations is less than √N for most images, and it is more efficient than the pure quantum algorithm.
This paper proposes a novel method for the automatic diagnosis of keratitis using feature vector quantization and self-attention mechanisms (ADK_FVQSAM). First, high-level features are extracted using the DenseNet121 backbone network, followed by adaptive average pooling to scale the features to a fixed length. Subsequently, product quantization with residuals (PQR) is applied to convert continuous feature vectors into discrete feature representations, preserving essential information that is insensitive to image quality variations. The quantized and original features are concatenated and fed into a self-attention mechanism to capture keratitis-related features. Finally, these enhanced features are classified through a fully connected layer. Experiments on clinical low-quality (LQ) images show that ADK_FVQSAM achieves accuracies of 87.7%, 81.9%, and 89.3% for keratitis, other corneal abnormalities, and normal corneas, respectively. Compared to DenseNet121, Swin transformer, and InceptionResNet, ADK_FVQSAM improves average accuracy by 3.1%, 11.3%, and 15.3%, respectively. These results demonstrate that ADK_FVQSAM significantly enhances the recognition performance of keratitis on LQ slit-lamp images, offering a practical approach for clinical application.
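Product quantization with residuals, the discretization step described above, can be sketched minimally as follows: a coarse codebook quantizes the feature vector, and a second codebook quantizes the leftover residual, so the code is a pair of indices. The tiny hand-made codebooks and the 2-D toy vector are illustrative assumptions, not the learned codebooks of the paper.

```python
# Minimal PQR sketch: coarse quantization followed by residual
# quantization. Codebooks here are tiny illustrative examples.

def nearest(codebook, v):
    """Index of the codeword closest to v in squared Euclidean distance."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, v))
    return min(range(len(codebook)), key=lambda i: dist2(codebook[i]))

def pqr_encode(v, coarse, fine):
    """Quantize v with the coarse codebook, then quantize the residual."""
    i = nearest(coarse, v)
    residual = [a - b for a, b in zip(v, coarse[i])]
    j = nearest(fine, residual)
    return i, j

def pqr_decode(i, j, coarse, fine):
    """Reconstruct as coarse codeword plus residual codeword."""
    return [a + b for a, b in zip(coarse[i], fine[j])]

coarse = [[0.0, 0.0], [1.0, 1.0]]
fine = [[0.0, 0.0], [0.2, 0.0], [0.0, 0.2], [0.2, 0.2]]
v = [1.15, 1.22]
i, j = pqr_encode(v, coarse, fine)
v_hat = pqr_decode(i, j, coarse, fine)
```

The residual stage is what lets a small pair of codebooks approximate the vector more finely than the coarse codebook alone.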
In this paper, a novel coding method based on fuzzy vector quantization for images corrupted by Gaussian white noise is presented. By suppressing the high-frequency subbands of the wavelet-transformed image, the noise is significantly removed, and the image is coded with fuzzy vector quantization. The experimental results show that the method can not only achieve a high compression ratio but also remove noise dramatically.
A mean-match correlation vector quantizer (MMCVQ) is presented for fast image encoding. In this algorithm, a sorted codebook is generated according to the mean values of all codewords. During the encoding stage, the high correlation of adjacent image blocks is exploited, and a search range in the sorted codebook is obtained according to the mean value of the current input vector. To achieve good performance, proper values of the threshold THd and the search size NS are predefined on the basis of experimental experience and an additional distortion limit. The experimental results show that the MMCVQ algorithm is much faster than full-search VQ, while its encoding quality degradation is only 0.3–0.4 dB compared with full-search VQ.
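The mean-match idea above can be sketched briefly: sort the codebook by codeword mean, locate the input vector's mean in that ordering, and run the distance search only over a small window around it. The window size (standing in for NS) and the toy codebook are illustrative assumptions; the threshold test on THd is omitted.

```python
# Mean-match sketch: restrict full search to a window of the
# mean-sorted codebook around the input vector's mean.
import bisect

def mean(v):
    return sum(v) / len(v)

def mmcvq_encode(vec, codebook, ns=2):
    book = sorted(codebook, key=mean)          # codebook sorted by mean
    means = [mean(c) for c in book]
    center = bisect.bisect_left(means, mean(vec))
    lo, hi = max(0, center - ns), min(len(book), center + ns + 1)
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(book[lo:hi], key=dist2)         # search window only

codebook = [[0, 0], [2, 2], [4, 4], [8, 8], [16, 16]]
best = mmcvq_encode([3.1, 3.4], codebook, ns=1)
```

With ns=1 only three of the five codewords are examined, yet the window still contains the true nearest codeword because its mean is close to the input's mean.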
This paper proposes an efficient lossless image compression scheme for still images based on adaptive arithmetic coding. The algorithm increases the compression rate and preserves the quality of the decoded image by combining an adaptive probability model with predictive coding. An adaptive model for each encoded image block dynamically estimates the probability distribution of that block, and the decoder can accurately recover the encoded image from the codebook information. The results show that the adopted adaptive arithmetic coding greatly improves the compression rate and is an effective compression technology.
The development of CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), EBCT (Electron Beam Computed Tomography), SMRI (Stereotactic Magnetic Resonance Imaging), etc., has enhanced the resolution and scanning rate of imaging equipment. Diagnosis and the extraction of useful information from medical images are achieved by processing them with wavelet techniques, and the wavelet transform has increased the compression rate. Increasing compression performance by minimizing the amount of data in medical images is a critical task. Crucial medical information, such as disease diagnoses and treatments, is obtained by modern radiology techniques through the Medical Imaging (MI) process. Several techniques have been developed for lossy and lossless image compression. A straightforward extension of the 1-D wavelet transform has limitations in capturing image edges: the wavelet transform cannot effectively represent straight-line discontinuities, and geographic lines in natural images cannot be properly reconstructed with a 1-D transform. The Curvelet Transform codes differently oriented image textures well and is therefore suitable for compressing medical images, which contain many curved structures. This paper describes a method for compressing various medical images using the Fast Discrete Curvelet Transform based on the wrapping technique. After transformation, the coefficients are quantized using vector quantization and coded using arithmetic encoding.
The proposed method is tested on various medical images, and the results demonstrate significant improvement in performance parameters such as Peak Signal-to-Noise Ratio (PSNR) and Compression Ratio (CR).
A chaos-based cryptosystem for fractal image coding is proposed. The Renyi chaotic map is employed to determine the order in which the range blocks are processed and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Renyi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and the proposed scheme is sensitive to the key.
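A chaotic keystream generator of this kind can be sketched as follows. The Rényi map is taken here in its common form x_{n+1} = βx_n mod 1 with non-integer β; the particular parameter values, the 8-bits-per-iterate extraction, and the masking details of the actual cryptosystem are assumptions for illustration only.

```python
# Illustrative chaotic keystream from a Renyi-type map
# x_{n+1} = beta * x_n mod 1 (parameters are assumed, not the paper's).

def renyi_keystream(x0, beta, n):
    x, out = x0, []
    for _ in range(n):
        x = (beta * x) % 1.0
        out.append(int(x * 256) & 0xFF)  # take 8 bits per iterate
    return out

ks1 = renyi_keystream(0.3141592653, 3.7, 16)
ks2 = renyi_keystream(0.3141592653, 3.7, 16)   # same key -> same stream
ks3 = renyi_keystream(0.3141592654, 3.7, 16)   # tiny key change
```

The generator is deterministic in the key (x0, β), while a perturbation of x0 in the tenth decimal place is amplified by the map's expansion and soon produces a different byte stream, which is the key-sensitivity property the scheme relies on.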
Digital watermarking has been presented as a method for copyright protection by embedding a secret signal in a digital image or video sequence. Common digital image watermarking techniques are based on the concept of spread-spectrum communications and can be classified into two categories: spatial-domain and transform-domain methods. Most transform-domain watermarking methods are based on the discrete cosine transform (DCT) and are robust to JPEG lossy compression. Recently, digital image watermarking based on another important lossy compression technique, vector quantization (VQ), has been presented; it carries the watermark information in the codeword indices. It is secure and efficient, and is robust to VQ compression with the same codebook. However, the amount of embedded information is small, and the extraction process requires the original image. This paper presents a more efficient VQ-based image watermarking method, which can embed a large gray-level watermark into the original image with less extra distortion and perform the watermark extraction without the original image. In addition, the proposed watermarking algorithm is highly secure because two keys are required for watermark extraction. Experimental results demonstrate the effectiveness of the proposed technique.
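How codeword indices can carry watermark bits at all is easy to show with a toy scheme: each image block is mapped to the nearest codeword whose index parity equals the watermark bit, and the bit is recovered from the index alone. This generic index-parity construction is for illustration only; it is not the two-key algorithm proposed in the paper.

```python
# Toy index-parity watermarking in VQ: the codeword index itself
# carries one watermark bit. Codebook and block are illustrative.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def embed_bit(block, codebook, bit):
    """Nearest codeword among those whose index parity matches the bit."""
    candidates = [i for i in range(len(codebook)) if i % 2 == bit]
    return min(candidates, key=lambda i: dist2(codebook[i], block))

def extract_bit(index):
    """Recover the bit from the index, without the original image."""
    return index % 2

codebook = [[0, 0], [1, 1], [4, 4], [5, 5], [8, 8], [9, 9]]
idx = embed_bit([4.2, 4.1], codebook, 1)
```

Restricting the search to half the codebook costs a little extra distortion (here codeword 3 is chosen instead of the globally nearest codeword 2), which is exactly the embedding-distortion trade-off the abstract refers to.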
In the field of image and data compression, new approaches are constantly being tried and tested to improve the quality of the reconstructed image and to reduce the computational complexity of the algorithm employed. However, no single technique offers both the maximum possible compression and the best reconstruction quality for every type of image. Depending on the level of compression desired and the characteristics of the input image, a suitable choice must be made from the available options. For example, in video compression the integer adaptation of the discrete cosine transform (DCT) with fixed quantization is widely used for its ease of computation and adequate performance. Other transforms, such as the discrete Tchebichef transform (DTT), are also suitable but remain largely unexploited. This work aims to bridge this gap and examine cases where the DTT could be an alternative compression transform to the DCT, judged on various image quality parameters. A multiplier-free fast implementation of the 8 × 8 integer DTT (ITT) is also studied for its low computational complexity. Because data are spread unevenly across an image, some areas may have intricate detail whereas others may be rather plain. This prompts the use of a compression method that adapts to the amount of detail: instead of fixed quantization, this paper employs quantization that varies with the characteristics of each image block, an implementation free from additional computational or transmission overhead.
The image compression performance of the ITT and the integer cosine transform (ICT), using both variable and fixed quantization, is compared over a variety of images, and the cases suitable for ITT-based image compression with variable quantization are identified.
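The block-adaptive quantization motivated above can be sketched in a few lines: plain blocks (low variance) get a coarser quantization step, detailed blocks a finer one, and the decision is recomputable at the decoder, so no side information is needed. The variance threshold and the step-halving/doubling rule are illustrative choices, not the exact rule used in the paper.

```python
# Variance-driven step selection: finer quantization for busy blocks,
# coarser for flat ones. Threshold and scaling are assumed values.

def block_variance(block):
    n = len(block)
    m = sum(block) / n
    return sum((x - m) ** 2 for x in block) / n

def adaptive_step(block, base_step=16.0):
    return base_step / 2 if block_variance(block) > 100.0 else base_step * 2

def quantize(block, step):
    return [round(x / step) for x in block]

flat = [100, 101, 99, 100]       # low-activity block
busy = [10, 200, 30, 180]        # high-activity block
step_flat = adaptive_step(flat)
step_busy = adaptive_step(busy)
q_busy = quantize(busy, step_busy)
```

The flat block is quantized with step 32 and the busy block with step 8, spending bits where the detail is.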
This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts. The bit planes of high weight, which consist of long runs of 0s and 1s, are encoded with run-length encoding (RLE), whereas the other bit planes are encoded by arithmetic coding (AC) with a static or adaptive model. By combining AC (adaptive or static) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared with both the static and the adaptive model. Experimental results, based on a set of 12 gray-level images, demonstrate that the proposed scheme gives mean compression ratios higher than those of conventional arithmetic encoders.
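The bit-plane split described above can be sketched directly: the most significant planes of an image with homogeneous parts consist of long runs of identical bits, which RLE captures cheaply, while the noisier low planes would go to the arithmetic coder. The sample pixels and the choice of plane are illustrative.

```python
# Bit-plane extraction plus run-length encoding of a high-weight plane.

def bit_plane(pixels, k):
    """Extract bit k (0 = LSB) of each 8-bit pixel."""
    return [(p >> k) & 1 for p in pixels]

def rle(bits):
    """Run-length encode a bit sequence as [bit, run_length] pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

pixels = [200, 201, 198, 202, 60, 58, 61, 59]   # two homogeneous parts
msb_plane = bit_plane(pixels, 7)                # most significant plane
runs = rle(msb_plane)
```

Eight pixels collapse to two runs on the MSB plane; a low plane of the same pixels would alternate almost randomly, which is why it is handed to AC instead.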
Based on Jacquin's work, this paper presents an adaptive block-based fractal image coding scheme. Firstly, masking functions are used to classify range blocks and to weight the mean square error (MSE) of images. Secondly, an adaptive block partition scheme is introduced by developing the quadtree partition method. Thirdly, a piecewise uniform quantization strategy is applied to quantize the luminance shift. Finally, experimental results are shown and compared with those reported by Jacquin and Lu to verify the validity of the proposed methods.
A fast encoding algorithm based on the mean square error (MSE) distortion for vector quantization is introduced. The vectors, which are effectively constructed from wavelet transform (WT) coefficients of images, simplify the realization of the non-linear interpolated vector quantization (NLIVQ) technique and make the partial distance search (PDS) algorithm more efficient. Using the relationship between a vector's L2-norm and its Euclidean distance, conditions for eliminating unnecessary codewords are obtained; further, an inequality constructed from the subvector L2-norm eliminates still more codewords. During the codeword search, most unlikely codewords can be rejected by the proposed algorithm combined with the NLIVQ and PDS techniques. The experimental results show that the proposed algorithm substantially reduces encoding time and complexity compared with the full-search method.
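The two pruning ideas can be sketched together. Since ||x − c|| ≥ | ||x|| − ||c|| |, a codeword whose L2-norm is far from the input's norm cannot beat the current best and is skipped without computing the distance; for the survivors, the partial distance search aborts the accumulation as soon as it exceeds the current best. The codebook and input are illustrative, and the subvector-norm refinement from the paper is omitted.

```python
# Norm-based codeword elimination plus partial distance search (PDS).
import math

def fast_vq_search(x, codebook):
    nx = math.sqrt(sum(v * v for v in x))
    best_i, best_d2 = -1, float("inf")
    for i, c in enumerate(codebook):
        nc = math.sqrt(sum(v * v for v in c))
        if (nx - nc) ** 2 >= best_d2:       # norm-based elimination
            continue
        d2 = 0.0
        for a, b in zip(x, c):              # partial distance search
            d2 += (a - b) ** 2
            if d2 >= best_d2:
                break
        if d2 < best_d2:
            best_i, best_d2 = i, d2
    return best_i

codebook = [[0, 0, 0], [10, 10, 10], [3, 4, 5], [3, 5, 4]]
x = [3.2, 4.1, 4.9]
best = fast_vq_search(x, codebook)
```

Both tests are conservative: they only discard codewords that provably cannot win, so the result is identical to a full search, just cheaper.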
In this letter, a new Linde-Buzo-Gray (LBG)-based image compression method using the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) is proposed. A gray-level image is first decomposed into blocks, and each block is then encoded by a 2D DCT coding scheme. This reduces the dimension of the vectors input to a generalized VQ scheme, so the introduction of the DCT step shortens the generalized VQ encoding time. The experimental results demonstrate the efficiency of the proposed method.
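The dimension-reduction step can be illustrated with an orthonormal 1-D DCT-II of an 8-sample block, keeping only the first few (low-frequency) coefficients as the VQ input vector; the paper works on 2-D 8×8 blocks, but one dimension shows the same energy-compaction effect and is used here for brevity.

```python
# Orthonormal 1-D DCT-II of an 8-sample block; the truncated
# coefficient vector is what would feed the VQ stage.
import math

def dct2_1d(x):
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def reduced_vector(block, keep=3):
    """Keep only the first `keep` low-frequency coefficients."""
    return dct2_1d(block)[:keep]

flat_block = [10.0] * 8
coeffs = dct2_1d(flat_block)
vq_input = reduced_vector(flat_block)
```

For a smooth block, almost all the energy lands in the leading coefficients (for a constant block, entirely in the DC term), which is why truncating the DCT vector barely hurts while cutting the VQ dimension.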
To achieve a high compression ratio as well as high-quality reconstructed images, an effective image compression scheme named irregular segmentation region coding based on the spiking cortical model (ISRCS) is presented. This region-based scheme mainly focuses on two issues. First, an appropriate segmentation algorithm is developed to partition an image into irregular regions and tidy contours, where the crucial regions corresponding to objects are retained and many tiny parts are eliminated; the irregular regions and the contours are then coded by different methods. The second issue is the coding of contours, for which an efficient and novel chain code is employed. The scheme seeks a compromise between the quality of the reconstructed images and the compression ratio. Experiments show its higher performance compared with other compression technologies, in terms of reconstructed image quality, compression ratio, and time consumption.
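Chain coding of contours, the second issue above, can be sketched with the standard 8-direction Freeman code: each step between adjacent contour pixels is replaced by one direction symbol, so a contour becomes a start point plus a short symbol stream. The paper's chain code is a more efficient variant; the classic code is shown for illustration.

```python
# Freeman 8-direction chain code for a pixel contour.

# (dx, dy) -> direction symbol, counterclockwise from +x
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(points):
    """Encode a contour (list of adjacent pixel coordinates) as symbols."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]  # tiny closed contour
codes = chain_code(square)
```

Each symbol needs only 3 bits, versus full coordinates per contour pixel, which is the saving region-based schemes exploit.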
In this paper, the authors propose a new approach to image compression based on the principle of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. Our approach, the modified SPIHT (MSPIHT), distributes entropy differently than SPIHT and also optimizes the coding. This approach can significantly improve the Peak Signal-to-Noise Ratio (PSNR) and compression ratio obtained by the SPIHT algorithm without affecting the computing time. These results are also comparable with those obtained using the Embedded Zerotree Wavelet (EZW) and JPEG 2000 algorithms.
Block truncation coding (BTC) is discussed as a simple and fast image compression technique suitable for real-time image transmission, with high resistance to channel errors and good reconstructed image quality; its main drawback is the high bit rate of 2 bits/pixel for a 256-gray-level image. To reduce the bit rate, a simple look-up-table method is introduced for coding the higher mean and the lower mean of a block, and a set of 24 visual patterns is used to encode the 4×4 bit plane of high-detail blocks. A new algorithm is proposed that needs only 19 bits to encode a 4×4 high-detail block and 12 bits to encode a 4×4 low-detail block.
Block truncation coding (BTC) is a simple and fast image compression technique suitable for real-time image transmission, with high channel-error resistance and good reconstructed image quality. The main shortcoming of the original BTC algorithm is its high bit rate (normally 2 bits/pixel). To reduce the bit rate, an efficient BTC image compression algorithm is presented in this paper. A simple look-up-table method is presented for coding the higher mean and the lower mean of a block without any extra distortion, and a prediction technique is introduced to reduce the number of bits used to code the bit plane, at the cost of some extra distortion. The test results prove the effectiveness of the proposed algorithm.
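Basic BTC, the starting point of both BTC abstracts above, can be sketched for one 4×4 block: pixels are split about the block mean into a bit plane plus a "higher mean" and a "lower mean", which is exactly the pair the look-up-table method would then code more compactly. The absolute-moment variant (reconstruction levels are the two group means) is shown; the sample block is illustrative.

```python
# Absolute-moment BTC for one block: bit plane + two means.

def btc_encode(block):
    m = sum(block) / len(block)
    bits = [1 if p >= m else 0 for p in block]        # the bit plane
    hi = [p for p, b in zip(block, bits) if b]
    lo = [p for p, b in zip(block, bits) if not b]
    a = sum(hi) / len(hi)                             # higher mean
    b = sum(lo) / len(lo) if lo else a                # lower mean
    return bits, a, b

def btc_decode(bits, a, b):
    return [a if bit else b for bit in bits]

block = [12, 14, 200, 210, 11, 13, 205, 198,
         12, 15, 201, 207, 10, 16, 199, 208]          # 4x4 block, row-major
bits, hi_mean, lo_mean = btc_encode(block)
recon = btc_decode(bits, hi_mean, lo_mean)
```

This representation preserves the block mean exactly while reducing each pixel to one bit plus the shared pair of means, which is where the 2 bits/pixel figure for 4×4 blocks comes from before the look-up-table and prediction refinements.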
This paper presents a new method for image coding and compression, ADCTVQ (Adaptive Discrete Cosine Transform Vector Quantization). In this method, the DCT conforms to visual properties and has an energy-compaction ability inferior only to the optimal transform, the KLT. Its vector quantization can keep quantization distortion near the minimum while greatly increasing the compression ratio. To improve compression efficiency, an adaptive strategy for selecting reserved region patterns is applied to preserve the high-energy coefficients at the same compression ratio. The experimental results are satisfactory when the compression ratio is greater than 20.
Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually performed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware; however, it cannot reduce the spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit-plane extractor to parse the differences between the original image and its wavelet-transformed coefficients. The output of the bit-plane extractor is encoded by a first-order entropy coder, and a low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.
文摘First of all a simple and practical rectangular transform is given,and then thevector quantization technique which is rapidly developing recently is introduced.We combinethe rectangular transform with vector quantization technique for image data compression.Thecombination cuts down the dimensions of vector coding.The size of the codebook can reasonablybe reduced.This method can reduce the computation complexity and pick up the vector codingprocess.Experiments using image processing system show that this method is very effective inthe field of image data compression.
文摘Many classical encoding algorithms of vector quantization (VQ) of image compression that can obtain global optimal solution have computational complexity O(N). A pure quantum VQ encoding algorithm with probability of success near 100% has been proposed, that performs operations 45√N times approximately. In this paper, a hybrid quantum VQ encoding algorithm between the classical method and the quantum algorithm is presented. The number of its operations is less than √N for most images, and it is more efficient than the pure quantum algorithm.
基金supported by the National Natural Science Foundation of China(Nos.62276210,82201148 and 62376215)the Key Research and Development Project of Shaanxi Province(No.2025CY-YBXM-044)+3 种基金the Natural Science Foundation of Zhejiang Province(No.LQ22H120002)the Medical Health Science and Technology Project of Zhejiang Province(Nos.2022RC069 and 2023KY1140)the Natural Science Foundation of Ningbo(No.2023J390)the Ningbo Top Medical and Health Research Program(No.2023030716).
文摘This paper proposes a novel method for the automatic diagnosis of keratitis using feature vector quantization and self-attention mechanisms(ADK_FVQSAM).First,high-level features are extracted using the DenseNet121 backbone network,followed by adaptive average pooling to scale the features to a fixed length.Subsequently,product quantization with residuals(PQR)is applied to convert continuous feature vectors into discrete features representations,preserving essential information insensitive to image quality variations.The quantized and original features are concatenated and fed into a self-attention mechanism to capture keratitis-related features.Finally,these enhanced features are classified through a fully connected layer.Experiments on clinical low-quality(LQ)images show that ADK_FVQSAM achieves accuracies of 87.7%,81.9%,and 89.3% for keratitis,other corneal abnormalities,and normal corneas,respectively.Compared to DenseNet121,Swin transformer,and InceptionResNet,ADK_FVQSAM improves average accuracy by 3.1%,11.3%,and 15.3%,respectively.These results demonstrate that ADK_FVQSAM significantly enhances the recognition performance of keratitis based on LQ slit-lamp images,offering a practical approach for clinical application.
文摘In this paper a novel coding method based on fuzzy vector quantization for noised image with Gaussian white-noise pollution is presented. By restraining the high frequency subbands of wavelet image the noise is significantly removed and coded with fuzzy vector quantization. The experimental result shows that the method can not only achieve high compression ratio but also remove noise dramatically.
文摘A mean-match correlation vector quantizer (MMCVQ) was presented for fast image encoding. In this algorithm, a sorted codebook is generated regarding the mean values of all codewords. During the encoding stage, high correlation of the adjacent image blocks is utilized, and a searching range is obtained in the sorted codebook according to the mean value of the current processing vector. In order to gain good performance, proper THd and NS are predefined on the basis of experimental experiences and additional distortion limitation. The expermental results show that the MMCVQ algorithm is much faster than the full-search VQ algorithm, and the encoding quality degradation of the proposed algorithm is only 0.3~0.4 dB compared to the full-search VQ.
基金supported by the National Natural Science Foundation of China (Grant Nos. 60573172 and 60973152)the Superior University Doctor Subject Special Scientific Research Foundation of China (Grant No. 20070141014)the Natural Science Foundation of Liaoning Province of China (Grant No. 20082165)
文摘This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image combined with the adaptive probability model and predictive coding. The use of adaptive models for each encoded image block dynamically estimates the probability of the relevant image block. The decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.
文摘Due to the development of CT (Computed Tomography), MRI (Magnetic Resonance Imaging), PET (Positron Emission Tomography), EBCT (Electron Beam Computed Tomography), SMRI (Stereotactic Magnetic Resonance Imaging), etc. has enhanced the distinguishing rate and scanning rate of the imaging equipments. The diagnosis and the process of getting useful information from the image are got by processing the medical images using the wavelet technique. Wavelet transform has increased the compression rate. Increasing the compression performance by minimizing the amount of image data in the medical images is a critical task. Crucial medical information like diagnosing diseases and their treatments is obtained by modern radiology techniques. Medical Imaging (MI) process is used to acquire that information. For lossy and lossless image compression, several techniques were developed. Image edges have limitations in capturing them if we make use of the extension of 1-D wavelet transform. This is because wavelet transform cannot effectively transform straight line discontinuities, as well geographic lines in natural images cannot be reconstructed in a proper manner if 1-D transform is used. Differently oriented image textures are coded well using Curvelet Transform. The Curvelet Transform is suitable for compressing medical images, which has more curvy portions. This paper describes a method for compression of various medical images using Fast Discrete Curvelet Transform based on wrapping technique. After transformation, the coefficients are quantized using vector quantization and coded using arithmetic encoding technique. The proposed method is tested on various medical images and the result demonstrates significant improvement in performance parameters like Peak Signal to Noise Ratio (PSNR) and Compression Ratio (CR).
基金Project supported by the Research Grants Council of the Hong Kong Special Administrative Region,China(Grant No.CityU123009)
文摘A chaos-based cryptosystem for fractal image coding is proposed. The Renyi chaotic map is employed to determine the order of processing the range blocks and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fraetal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Renyi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and so the proposed scheme is sensitive to the key.
文摘Digital watermarking has been presented as a new method for copyright protection by embedding a secret signal in a digital image or video sequence. Common digital image watermarking techniques are based on the concept of spread spectrum communications, which can be classified in two catalogues: spatial domain and transform domain based. Most of transform domain watermarking methods are based on discrete cosine transforms (DCT) and robust to JPEG lossy compression. Recently, digital image watermarking based on another important lossy compression technique, vector quantization (VQ), has been presented, which carries watermark information by codeword indices. It is secret and efficient, and is robust to VQ compression with the same codebook. However, the embedded information is less and the extraction process requires the original image. This paper presents a more efficient VQ based image watermarking method, which can embed a large gray level watermark into the original image with less extra distortion and perform the watermark extraction without the original image. In addition, the proposed watermarking algorithm is very secret because two keys are required for watermark extraction. Experimental results demonstrate the effectiveness of the proposed technique.
文摘In the field of image and data compression, there are always new approaches being tried and tested to improve the quality of the reconstructed image and to reduce the computational complexity of the algorithm employed. However, there is no one perfect technique that can offer both maximum compression possible and best reconstruction quality, for any type of image. Depending on the level of compression desired and characteristics of the input image, a suitable choice must be made from the options available. For example in the field of video compression, the integer adaptation of discrete cosine transform (DCT) with fixed quantization is widely used in view of its ease of computation and adequate performance. There exist transforms like, discrete Tchebichef transform (DTT), which are suitable too, but are potentially unexploited. This work aims to bridge this gap and examine cases where DTT could be an alternative compression transform to DCT based on various image quality parameters. A multiplier-free fast implementation of integer DTT (ITT) of size 8 × 8 is also studied, for its low computational complexity. Due to the uneven spread of data across images, some areas might have intricate detail, whereas others might be rather plain. This prompts the use of a compression method that can be adapted according to the amount of detail. So, instead of fixed quantization this paper employs quantization that varies depending on the characteristics of the image block. This implementation is free from additional computational or transmission overhead. The image compression performance of ITT and ICT, using both variable and fixed quantization, is compared with a variety of images and the cases suitable for ITT-based image compression employing variable quantization are identified.
Abstract: This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts. The high-weight bit planes, which are characterized by long runs of 0s and 1s, are encoded with RLE, whereas the other bit planes are encoded by arithmetic coding (AC) with either a static or an adaptive model. By combining AC (adaptive or static) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared to both the static and the adaptive model. Experimental results, based on a set of 12 gray-level images, demonstrate that the proposed scheme gives mean compression ratios higher than those of conventional arithmetic encoders.
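The bit-plane decomposition and RLE step described above can be sketched as follows (a minimal illustration; the AC stage and the paper's actual plane-selection rule are omitted):

```python
import numpy as np

def bit_plane(img, k):
    # extract bit plane k of an 8-bit image (k = 7 is the most significant)
    return (img >> k) & 1

def rle_encode(bits):
    # run-length encode a flat binary sequence as (value, run-length) pairs
    runs = []
    prev, count = bits[0], 1
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append((prev, count))
            prev, count = b, 1
    runs.append((prev, count))
    return runs

img = np.array([[200, 201], [15, 14]], dtype=np.uint8)
plane7 = list(bit_plane(img, 7).flatten())   # [1, 1, 0, 0]
assert rle_encode(plane7) == [(1, 2), (0, 2)]
```

High-weight planes of a smooth image produce few long runs, which is exactly where RLE beats a general-purpose arithmetic coder; the noisy low-weight planes are better left to AC.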
Abstract: Based on Jacquin's work, this paper presents an adaptive block-based fractal image coding scheme. Firstly, masking functions are used to classify range blocks and weight the mean square error (MSE) of images. Secondly, an adaptive block partition scheme is introduced by developing the quadtree partition method. Thirdly, a piecewise uniform quantization strategy is applied to quantize the luminance shifting. Finally, experimental results are shown and compared with those reported by Jacquin and Lu to verify the validity of the proposed methods.
Funding: the National Natural Science Foundation of China (60602057) and the Natural Science Foundation of Chongqing Science and Technology Commission (2006BB2373).
Abstract: A fast encoding algorithm based on the mean square error (MSE) distortion for vector quantization is introduced. The vectors, which are effectively constructed from wavelet transform (WT) coefficients of images, simplify the realization of the non-linear interpolated vector quantization (NLIVQ) technique and make the partial distance search (PDS) algorithm more efficient. Utilizing the relationship between a vector's L2-norm and its Euclidean distance, conditions for eliminating unnecessary codewords are obtained. Further, using an inequality constructed from the subvector L2-norm, more unnecessary codewords are eliminated. During the codeword search, most unlikely codewords can be rejected by the proposed algorithm combined with the non-linear interpolated vector quantization technique and the partial distance search technique. The experimental results show that the algorithm substantially reduces encoding time and computational complexity compared with the full search method.
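A minimal sketch of two of the rejection rules named above, partial distance search plus L2-norm elimination (illustrative only; the NLIVQ and subvector-norm refinements are omitted). The norm test relies on the triangle inequality ‖x − c‖ ≥ |‖x‖ − ‖c‖|: a codeword whose norm gap already exceeds the current best distance can be skipped without computing a single distance term.

```python
import numpy as np

def pds_encode(x, codebook):
    norms = np.linalg.norm(codebook, axis=1)
    xn = np.linalg.norm(x)
    best_idx, best_d = -1, float("inf")
    for i, c in enumerate(codebook):
        # L2-norm elimination: squared distance >= (||c|| - ||x||)^2
        if (norms[i] - xn) ** 2 >= best_d:
            continue
        d = 0.0
        for xj, cj in zip(x, c):
            d += (xj - cj) ** 2
            if d >= best_d:        # partial distance rejection: abandon early
                break
        else:                      # full distance computed and it is a new best
            best_idx, best_d = i, d
    return best_idx, best_d

x = np.array([1.0, 2.0, 3.0])
cb = np.array([[1.0, 2.0, 3.5], [0.0, 0.0, 0.0], [1.0, 2.0, 2.9]])
idx, d = pds_encode(x, cb)       # codeword 1 is rejected by the norm test
assert idx == 2
```

The result is identical to a full search; only the amount of arithmetic changes, which is why such tests preserve the globally optimal codeword.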
Funding: partially supported by the National Natural Science Foundation of China (No. 60572100), the Foundation of the State Key Laboratory of Networking and Switching Technology (China) and the Science Foundation of Shenzhen City (200408).
Abstract: In this letter, a new Linde-Buzo-Gray (LBG)-based image compression method using the Discrete Cosine Transform (DCT) and Vector Quantization (VQ) is proposed. A gray-level image is first decomposed into blocks, and each block is then encoded by a 2D DCT coding scheme. The dimension of the vectors used as input to a generalized VQ scheme is thereby reduced, so the introduction of the DCT step reduces the encoding time of the generalized VQ. The experimental results demonstrate the efficiency of the proposed method.
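The dimension-reduction idea, applying a 2D DCT to each block and feeding only the low-frequency coefficients to the VQ stage, can be sketched as follows (the block size and the top-left coefficient-selection pattern are illustrative assumptions, not the letter's exact configuration):

```python
import numpy as np

def dct_matrix(n):
    # orthonormal DCT-II basis matrix
    C = np.zeros((n, n))
    for k in range(n):
        for j in range(n):
            C[k, j] = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    C[0, :] *= 1 / np.sqrt(n)
    C[1:, :] *= np.sqrt(2 / n)
    return C

def block_to_vector(block, keep):
    # 2D DCT, then keep only the top-left low-frequency coefficients,
    # shrinking the vector handed to the VQ codebook search
    n = block.shape[0]
    C = dct_matrix(n)
    coeffs = C @ block @ C.T
    return coeffs[:keep, :keep].flatten()

block = np.full((4, 4), 10.0)
v = block_to_vector(block, 2)       # 16 pixels -> a 4-coefficient vector
assert len(v) == 4
```

Since VQ search cost grows with vector dimension, quantizing 4 coefficients instead of 16 pixels shortens every distance computation in the LBG codebook search.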
Funding: supported by the National Natural Science Foundation of China (60872109) and the Program for New Century Excellent Talents in University (NCET-06-0900).
Abstract: To achieve a high compression ratio as well as high-quality reconstructed images, an effective image compression scheme named irregular segmentation region coding based on the spiking cortical model (ISRCS) is presented. This scheme is region-based and mainly focuses on two issues. Firstly, an appropriate segmentation algorithm is developed to partition an image into irregular regions with tidy contours, where the crucial regions corresponding to objects are retained and many tiny parts are eliminated. The irregular regions and the contours are then coded using different methods. The other issue is the coding of the contours, for which an efficient and novel chain code is employed. The scheme seeks a compromise between the quality of the reconstructed images and the compression ratio. Experiments are conducted and the results show its higher performance compared with other compression technologies, in terms of higher quality of reconstructed images, higher compression ratio and lower time consumption.
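The paper's novel chain code is not detailed in the abstract; as a baseline for comparison, a contour can be encoded with the classical 8-direction Freeman chain code, one 3-bit symbol per boundary step:

```python
# 8-neighbour Freeman directions (image coordinates, y grows downward):
# 0=E, 1=NE, 2=N, 3=NW, 4=W, 5=SW, 6=S, 7=SE
DIRS = {(1, 0): 0, (1, -1): 1, (0, -1): 2, (-1, -1): 3,
        (-1, 0): 4, (-1, 1): 5, (0, 1): 6, (1, 1): 7}

def chain_code(contour):
    # contour: list of (x, y) pixel coordinates along the boundary,
    # each step moving to an 8-connected neighbour
    return [DIRS[(x2 - x1, y2 - y1)]
            for (x1, y1), (x2, y2) in zip(contour, contour[1:])]

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
assert chain_code(square) == [0, 6, 4, 2]   # E, S, W, N
```

Storing one start point plus 3 bits per step is already far cheaper than storing every contour coordinate, which is the margin a more efficient chain code improves on.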
Abstract: In this paper, the authors propose a new approach to image compression based on the principle of the Set Partitioning in Hierarchical Trees (SPIHT) algorithm. Our approach, the modified SPIHT (MSPIHT), distributes entropy differently than SPIHT and also optimizes the coding. This approach can produce significant improvements in the Peak Signal-to-Noise Ratio (PSNR) and compression ratio obtained by the SPIHT algorithm, without affecting the computing time. These results are also comparable with those obtained using the Embedded Zerotree Wavelet (EZW) and JPEG 2000 algorithms.
Abstract: Discusses block truncation coding (BTC), a simple and fast image compression technique suitable for real-time image transmission, with high channel-error resistance and good reconstructed image quality, and its main drawback: a high bit rate of 2 bits/pixel for a 256-gray-level image. To reduce the bit rate, a simple look-up table method is introduced for coding the higher mean and the lower mean of a block, and a set of 24 visual patterns is used to encode the 4×4 bit plane of a high-detail block. A new algorithm is proposed that needs only 19 bits to encode a 4×4 high-detail block and 12 bits to encode a 4×4 low-detail block.
Abstract: Block truncation coding (BTC) is a simple and fast image compression technique suitable for real-time image transmission, with high channel-error resistance and good reconstructed image quality. The main shortcoming of the original BTC algorithm is its high bit rate (normally 2 bits/pixel). In order to reduce the bit rate, an efficient BTC image compression algorithm is presented in this paper. In the proposed algorithm, a simple look-up table method is presented for coding the higher mean and the lower mean of a block without any extra distortion, and a prediction technique is introduced to reduce the number of bits used to code the bit plane, at the cost of some extra distortion. The test results prove the effectiveness of the proposed algorithm.
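The "higher mean / lower mean" representation corresponds to absolute moment BTC, which the baseline can be sketched from (the look-up table and bit-plane prediction of the proposed algorithm are not shown):

```python
import numpy as np

def ambtc_encode(block):
    # absolute moment BTC: one bit per pixel plus the two block means
    m = block.mean()
    plane = block >= m                 # 1 where the pixel is at or above the mean
    high = block[plane].mean()
    low = block[~plane].mean() if (~plane).any() else high
    return low, high, plane

def ambtc_decode(low, high, plane):
    # each pixel is reconstructed as one of the two means
    return np.where(plane, high, low)

block = np.array([[2.0, 3.0], [10.0, 11.0]])
low, high, plane = ambtc_encode(block)
rec = ambtc_decode(low, high, plane)   # [[2.5, 2.5], [10.5, 10.5]]
```

For a 4×4 8-bit block this baseline costs 16 bits of bit plane plus 2 × 8 bits for the means, i.e. 32 bits / 16 pixels = 2 bits/pixel, which is exactly the rate the look-up table and prediction steps are designed to push below.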
Abstract: This paper presents a new method for image coding and compression: ADCTVQ (Adaptive Discrete Cosine Transform Vector Quantization). In this method, the DCT conforms to visual properties and has an encoding ability inferior only to that of the optimal transform, the KLT. Its vector quantization can maintain minimum quantization distortion and greatly increase the compression ratio. In order to improve compression efficiency, an adaptive strategy for selecting reserved region patterns is applied to preserve the high-energy coefficients at the same compression ratio. The experimental results show that reconstructed images are satisfactory at compression ratios greater than 20.
Funding: supported by the National High Technology Research and Development Program of China (Grant No. 863-2-5-1-13B).
Abstract: Multispectral time delay and integration charge coupled device (TDICCD) image compression requires a low-complexity encoder because it is usually performed on board, where energy and memory are limited. The Consultative Committee for Space Data Systems (CCSDS) has proposed an image data compression (CCSDS-IDC) algorithm which is so far the most widely implemented in hardware. However, it cannot reduce the spectral redundancy in multispectral images. In this paper, we propose a low-complexity improved CCSDS-IDC (ICCSDS-IDC)-based distributed source coding (DSC) scheme for multispectral TDICCD images consisting of a few bands. Our scheme is based on an ICCSDS-IDC approach that uses a bit plane extractor to parse the differences between the original image and its wavelet-transformed coefficients. The output of the bit plane extractor is encoded by a first-order entropy coder. A low-density parity-check-based Slepian-Wolf (SW) coder is adopted to implement the DSC strategy. Experimental results on space multispectral TDICCD images show that the proposed scheme significantly outperforms the CCSDS-IDC-based coder in each band.