Funding: supported by the National Natural Science Foundation of China (60574082), the Postdoctoral Science Foundation of China (20070421017), the Natural Science Foundation of Jiangsu Province (BK2008403), the Graduate Research and Innovation Project of Jiangsu Province (CX09B-100Z), and the Excellent Doctoral Dissertation Innovation Foundation of Nanjing University of Science and Technology.
Abstract: Detecting the forged parts of a double-compressed image is important and urgent work for blind authentication. A simple and efficient method for accomplishing this task is proposed. Firstly, the probabilistic model of the periodic effects of double quantization is analyzed, and the probability of the quantized DCT coefficients in each block is calculated over the entire image. Secondly, the posterior probability of each block is computed according to Bayes' theorem and the results of the first step. The mean and variance of the posterior probability are then used to judge whether the target block has been tampered with. Finally, mathematical morphology operations are performed to reduce the false-alarm probability. Experimental results show that the method can accurately locate the doctored parts; the experiments also show that the higher the quality of the second compression, the more accurate the detection of the tampered regions.
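The periodic artifact exploited here comes from requantization: a coefficient quantized with step q1 and then with step q2 lands in second-stage bins whose count of contributing first-stage bins varies periodically with the bin index. Below is a minimal sketch of a per-block posterior built on that model; the flat distribution assumed for tampered (singly compressed) coefficients, the symmetric coefficient range, and the 0.5 prior are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def n_bins(u, q1, q2):
    """Number of first-stage bins (step q1) mapping into second-stage
    bin u (step q2); varies periodically with u after double quantization."""
    lo = np.ceil(q2 / q1 * (u - 0.5))
    hi = np.floor(q2 / q1 * (u + 0.5))
    return np.maximum(hi - lo + 1, 0)

def block_posteriors(coeffs, q1, q2, p_tampered=0.5):
    """coeffs: (num_blocks, k) quantized DCT values for one frequency.
    Returns P(tampered | coefficients) per block via a naive-Bayes model."""
    u = np.arange(-1024, 1025)
    expected = n_bins(u, q1, q2).astype(float)
    p_untamp = expected / expected.sum()                 # periodic (double-quantized) model
    p_tamp = np.full_like(p_untamp, 1.0 / len(u))        # flat model for pasted blocks
    idx = np.clip(coeffs + 1024, 0, len(u) - 1)
    log_u = np.log(p_untamp[idx] + 1e-12).sum(axis=1)
    log_t = np.log(p_tamp[idx] + 1e-12).sum(axis=1)
    lu = np.log(1 - p_tampered) + log_u                  # Bayes' theorem, log domain
    lt = np.log(p_tampered) + log_t
    return 1.0 / (1.0 + np.exp(lu - lt))
```

The mean and variance of these posteriors over a candidate region would then drive the tampered/untampered decision, before the morphological clean-up step.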
Funding: supported by the National Natural Science Foundation of China (Nos. 61071152 and 61271316), the National Basic Research Program (973) of China (Nos. 2010CB731403 and 2010CB731406), and the National "Twelfth Five-Year" Plan for Science and Technology Support (No. 2012BAH38B04).
Abstract: Copy-paste forgery is a very common type of forgery in JPEG images. The tampered patch has typically undergone JPEG compression twice, with inconsistent block segmentation; this phenomenon in JPEG image forgeries is called shifted double JPEG (SDJPEG) compression. Detecting SDJPEG-compressed image patches can make a crucial contribution to detecting and locating the tampered region. However, existing SDJPEG compression tampering detection methods cannot achieve satisfactory results, especially when the tampered region is small. In this paper, an effective SDJPEG compression tampering detection method utilizing both intra-block and inter-block correlations is proposed. SDJPEG compression leaves statistical artifacts among the magnitudes of the JPEG-quantized discrete cosine transform (DCT) coefficients. Firstly, difference 2D arrays, which describe the differences between the magnitudes of neighboring JPEG-quantized DCT coefficients within and across blocks, are used to enhance the SDJPEG compression artifacts. Then, a thresholding technique is applied to these difference 2D arrays to reduce computational cost. After that, co-occurrence matrices are used to model the difference 2D arrays so as to take advantage of second-order statistics. All elements of these co-occurrence matrices serve as features for SDJPEG compression tampering detection. Finally, a support vector machine (SVM) classifier is employed to distinguish SDJPEG-compressed image patches from single-JPEG-compressed image patches using the developed feature set. Experimental results demonstrate the efficiency of the proposed method.
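To make the feature construction concrete, the sketch below forms one horizontal intra-block difference array from the magnitudes of quantized DCT coefficients, truncates it with a threshold T, and accumulates a normalized co-occurrence matrix. The paper's full feature set combines several directions plus inter-block (stride-8) differences built the same way; the threshold value and the single direction shown here are illustrative.

```python
import numpy as np

def cooccurrence_features(dct_mag, T=3):
    """dct_mag: 2D array of |quantized DCT coefficients| in the JPEG block
    grid layout. Returns normalized co-occurrence features of the
    horizontal intra-block difference array, truncated to [-T, T]."""
    diff = dct_mag[:, :-1].astype(int) - dct_mag[:, 1:].astype(int)  # difference 2D array
    diff = np.clip(diff, -T, T)                                      # thresholding step
    size = 2 * T + 1
    cooc = np.zeros((size, size))
    a, b = diff[:, :-1] + T, diff[:, 1:] + T                         # horizontal coefficient pairs
    np.add.at(cooc, (a, b), 1)                                       # second-order statistics
    return cooc.ravel() / max(cooc.sum(), 1)
```

Concatenating such vectors over all directions and over the inter-block arrays yields the feature set that is fed to the SVM.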
Abstract: Information hiding in Joint Photographic Experts Group (JPEG) compressed images is investigated in this paper. Quantization is the source of information loss in the JPEG compression process; therefore, information hidden in images is likely to be destroyed by JPEG compression. This paper presents an algorithm to reliably embed information into JPEG bit streams during JPEG encoding; information extraction is performed during JPEG decoding. The basic idea of the algorithm is to modify the quantized direct current (DC) coefficients and nonzero alternating current (AC) coefficients so that each represents one bit of information (0 or 1). Experimental results on gray-scale images using baseline sequential JPEG encoding show that the cover images (images without secret information) and the stego-images (images with secret information) are perceptually indiscernible.
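The abstract does not spell out the exact bit-to-coefficient mapping, so the sketch below uses a simple parity rule as a stand-in: the DC coefficient and each nonzero AC coefficient of a quantized block are nudged so that their parity encodes one message bit, with nonzero ACs kept nonzero so the block's run-length structure is preserved.

```python
def embed_bit(coeff, bit):
    """Force the parity of a quantized coefficient to carry one bit.
    Hypothetical rule: even parity encodes 0, odd parity encodes 1."""
    if coeff % 2 == bit % 2:
        return coeff
    return coeff + 1 if coeff > 0 else coeff - 1  # keeps nonzero ACs nonzero

def extract_bit(coeff):
    return coeff % 2

def embed_message(quantized_block, bits):
    """quantized_block: quantized DCT coefficients, DC first.
    Embeds into the DC and each nonzero AC coefficient, in order."""
    out, i = list(quantized_block), 0
    for k, c in enumerate(out):
        if i >= len(bits):
            break
        if k == 0 or c != 0:          # DC, or nonzero AC
            out[k] = embed_bit(c, bits[i])
            i += 1
    return out, i                     # i = number of bits actually embedded
```

Extraction simply reads the parity of the same coefficient positions during decoding.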
Abstract: Robust data hiding techniques attempt to construct covert communication over a lossy public channel. At present, existing robust JPEG steganographic algorithms cannot cope with the situation in which side information about the lossy channel is missing. This paper therefore proposes a new robust JPEG steganographic algorithm based on a high-tense-region location method that needs no side information about the lossy channel. First, a tense-region locating method based on Harris-Laplacian feature points is proposed. Then, the process for generating robust cover objects is described. Finally, an improved embedding cost function is proposed. A series of experiments is conducted on various JPEG image sets, and the results show that the proposed steganographic algorithm can resist JPEG compression efficiently, with acceptable performance against the steganalysis feature sets GFR (Gabor Filters Rich model) and DCTR (Discrete Cosine Transform Residual).
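As a rough illustration of locating high-tense (high-texture) regions from corner responses, the sketch below thresholds a Harris response map and dilates the surviving points into candidate regions. The paper uses Harris-Laplacian feature points, which add Laplacian-based scale selection that this sketch omits; the quantile and dilation size are assumptions.

```python
import cv2
import numpy as np

def tense_region_mask(gray, response_quantile=0.99, dilate_px=8):
    """Rough sketch: mark high-texture regions around strong Harris
    corner responses and grow them into candidate embedding regions."""
    r = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    mask = (r > np.quantile(r, response_quantile)).astype(np.uint8)
    kernel = np.ones((dilate_px, dilate_px), np.uint8)
    return cv2.dilate(mask, kernel)   # points -> regions
```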
Abstract: This paper presents an investigation of the effect of JPEG compression on the similarity between the target image and the background, where the similarity is further used to determine the degree of clutter in the image. Four new clutter metrics based on image quality assessment are introduced, among which the Haar wavelet-based perceptual similarity index, known as HaarPSI, provides the best target acquisition prediction results. It is shown that the similarity between the target and the background at the boundary between visually lossless and visually lossy compression does not change significantly compared to the case when an uncompressed image is used. In future work, subjective tests should check whether this presence of compression at the threshold of just-noticeable differences affects human target acquisition performance. Similarity values are compared with the results of subjective tests on the well-known Search_2 target database, where the degree of agreement between objective and subjective scores, measured through linear correlation, reached 90%.
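The reported 90% agreement is a linear (Pearson) correlation between metric values and the subjective Search_2 results; for reference, it can be computed as in this sketch.

```python
import numpy as np

def pearson_r(objective, subjective):
    """Linear correlation between clutter-metric scores and subjective
    target-acquisition results (e.g. from the Search_2 database)."""
    o = np.asarray(objective, float) - np.mean(objective)
    s = np.asarray(subjective, float) - np.mean(subjective)
    return (o * s).sum() / np.sqrt((o ** 2).sum() * (s ** 2).sum())
```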
Abstract: JPEG (Joint Photographic Experts Group) is currently the most widely used image format on the Internet, and existing cases show that many tampering operations are performed on JPEG images. The basic process is that the JPEG file is first decompressed and modified in the spatial domain, and the tampered image is then recompressed and saved in JPEG format, so the tampered image may have been compressed several times. Therefore, double compression detection for JPEG images can be an important step in determining whether an image has been tampered with, and the study of anti-detection for double JPEG compression can further advance detection work. In this paper, we review the recent literature on double JPEG compression detection from two aspects: the case where the quantization table remains unchanged and the case where the quantization tables differ between the two compressions. We also introduce some representative double JPEG anti-detection methods of recent years. Finally, we analyze the open problems in the field of double JPEG compression and give an outlook on future research directions.
Funding: supported by the Technology Development Program (S3344882) funded by the Ministry of SMEs and Startups (MSS, Korea).
Abstract: When high compression rates are applied to Joint Photographic Experts Group (JPEG) images through lossy compression techniques, image-blocking artifacts may appear, which necessitates restoring the image to its original quality. The challenge lies in regenerating heavily compressed images into a state in which they become identifiable. This study therefore focuses on the restoration of JPEG images subjected to substantial degradation caused by maximum lossy compression, using Generative Adversarial Networks (GAN). The generator in this network is based on the U-Net architecture and features a new hourglass structure that preserves the characteristics of the deep layers. In addition, the network incorporates two loss functions to generate natural, high-quality images: a Low-Frequency (LF) loss and a High-Frequency (HF) loss. The HF loss uses a pretrained VGG-16 network and is configured using the specific layer that best represents features, which enhances performance in the high-frequency region; the LF loss, in contrast, handles the low-frequency region. Together, the two loss functions let the generator mislead the discriminator while accurately generating both high- and low-frequency regions. Consequently, by removing the blocking effects from maximally compressed images, images in which identities can be recognized are generated. This study represents a significant improvement over previous research in terms of image resolution performance.
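A compact way to express the two-part generator objective is a pixel-space term for the low-frequency region plus a VGG-16 feature-space term for the high-frequency region. The PyTorch sketch below is one such formulation; the chosen VGG-16 layer and the weighting factor alpha are assumptions, not the paper's reported configuration.

```python
import torch.nn as nn
from torchvision.models import vgg16

class RestorationLoss(nn.Module):
    """Sketch of a combined generator loss: LF (pixel) term plus HF
    (perceptual) term on frozen VGG-16 features."""
    def __init__(self, feature_layer=16, alpha=0.01):
        super().__init__()
        vgg = vgg16(weights="DEFAULT").features[:feature_layer].eval()
        for p in vgg.parameters():
            p.requires_grad = False          # feature extractor stays fixed
        self.vgg, self.alpha, self.l1 = vgg, alpha, nn.L1Loss()

    def forward(self, restored, target):
        lf = self.l1(restored, target)                        # low-frequency region
        hf = self.l1(self.vgg(restored), self.vgg(target))    # high-frequency region
        return lf + self.alpha * hf
```

The adversarial (discriminator-fooling) term would be added on top of this reconstruction loss during GAN training.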
Funding: supported by the Fundamental Research Funds for the Central Universities (No. 500421126).
Abstract: Detecting double Joint Photographic Experts Group (JPEG) compression of color images is vital in the field of image forensics. In previous research, there have been various approaches to detecting double JPEG compression with different quantization matrices; however, the detection of double-JPEG color images with the same quantization matrix is still a challenging task. An effective detection approach for extracting features is proposed in this paper by combining traditional analysis with Convolutional Neural Networks (CNN). On the one hand, the number of nonzero pixels and the sum of pixel values of the color-space conversion error provide 12-dimensional features, determined through experiments. On the other hand, the rounding error, the truncation error, and the quantization coefficient matrix are used to generate a total of 128-dimensional features via a specially designed CNN. In this CNN, convolutional layers with a fixed 1×1 kernel and Dropout layers are adopted to prevent overfitting, and an average pooling layer is used to extract local characteristics. A Support Vector Machine (SVM) classifier is then applied to distinguish whether a given color image has been compressed once or twice. The approach is also suitable for cases where customized needs are considered. The experimental results show that the proposed approach is more effective than some existing ones when the compression quality factors are low.
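A minimal PyTorch sketch of a feature extractor in the described spirit (1×1 convolutions, Dropout, average pooling, a 128-dimensional output fed to the SVM) follows; the channel widths, dropout rate, and input layout are assumptions.

```python
import torch.nn as nn

class ErrorFeatureCNN(nn.Module):
    """Sketch: maps stacked error planes (e.g. rounding error, truncation
    error, quantization matrix) to a 128-dim feature vector for an SVM."""
    def __init__(self, in_channels=3, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=1), nn.ReLU(),
            nn.Dropout(0.5),                       # guards against overfitting
            nn.Conv2d(64, feat_dim, kernel_size=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),               # average pooling over space
            nn.Flatten(),                          # -> (batch, feat_dim)
        )

    def forward(self, x):
        return self.net(x)
```

The 128 CNN features would be concatenated with the 12 hand-crafted color-conversion-error features before SVM training.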
Abstract: A block difference compression algorithm based on block PSNR, one of the parameters of image quality, is presented for image sequence processing. The algorithm adopts the classical JPEG method for intra-frame coding and processes 8×8 inter-frame blocks with different methods depending on how the current block PSNR compares with a threshold. The calculation of the threshold, the error accumulation behavior for different thresholds, and the structure of the code stream are also presented. The advantage of this algorithm is the large reduction in computation during inter-frame processing.
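The inter-frame decision can be sketched as follows: each 8×8 block of the current frame is compared with the co-located block of the previous frame, and blocks whose PSNR exceeds the threshold are skipped (copied) rather than re-coded. The threshold value below is illustrative; the paper presents its actual calculation.

```python
import numpy as np

def block_psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def classify_blocks(prev, curr, threshold=35.0, bs=8):
    """Per 8x8 block: True = copy from previous frame, False = re-code."""
    h, w = curr.shape
    skip = np.zeros((h // bs, w // bs), dtype=bool)
    for i in range(0, h - bs + 1, bs):
        for j in range(0, w - bs + 1, bs):
            p = block_psnr(prev[i:i + bs, j:j + bs], curr[i:i + bs, j:j + bs])
            skip[i // bs, j // bs] = p >= threshold
    return skip
```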
Abstract: The two mast cameras (Mastcams) onboard the Mars rover Curiosity are multispectral imagers with nine bands each. Currently, the images are compressed losslessly using JPEG, which achieves only 2:1 to 3:1 compression. We present a comparative study of four approaches to compressing multispectral Mastcam images. The first approach divides the nine bands into three groups of three bands each; since the multispectral bands are strongly correlated, the three groups of images are treated as video frames. We call this the video approach. The second approach compresses each group separately; we call it the split-band (SB) approach. The third applies a two-step scheme in which principal component analysis (PCA) first compresses the nine-band image cube to six bands, and conventional codecs then compress the six PCA bands. The fourth applies PCA only. In addition, we present subjective and objective assessment results for compressing RGB images, because RGB images have been used for stereo and disparity-map generation. Five well-known compression codecs from the literature (JPEG, JPEG-2000 (J2K), X264, X265, and Daala) have been applied and compared within each approach. The performance of the different algorithms was assessed using four well-known performance metrics: two conventional ones and two known to correlate well with human perception. Extensive experiments using actual Mastcam images demonstrate the various approaches. We observed that perceptually lossless compression can be achieved at a 10:1 compression ratio. In particular, the performance gain of the SB approach with Daala is at least 5 dB in terms of peak signal-to-noise ratio (PSNR) over JPEG at a 10:1 compression ratio. Subjective comparisons also corroborated the objective metrics, showing that perceptually lossless compression can be achieved even at 20:1 compression.
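The PCA step in the third and fourth approaches is spectral: the nine bands are decorrelated along the band axis and only the leading components are kept for the codec. A sketch under those assumptions:

```python
import numpy as np

def pca_reduce(cube, n_keep=6):
    """Project a (bands, H, W) cube onto its first n_keep spectral
    principal components; the component images then go to a codec."""
    bands, h, w = cube.shape
    X = cube.reshape(bands, -1).astype(float)
    mean = X.mean(axis=1, keepdims=True)
    U, S, _ = np.linalg.svd(X - mean, full_matrices=False)
    P = U[:, :n_keep]                       # spectral basis
    comps = P.T @ (X - mean)
    return comps.reshape(n_keep, h, w), P, mean

def pca_restore(comps, P, mean):
    """Invert the projection to recover an approximate full-band cube."""
    n_keep, h, w = comps.shape
    X = P @ comps.reshape(n_keep, -1) + mean
    return X.reshape(-1, h, w)
```

The basis P and the mean must be transmitted as side information so the decoder can invert the projection.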
Abstract: A novel mathematical-morphology approach for automatic region-of-interest (ROI) determination and JPEG2000-based coding in microscopy image compression is presented. The algorithm is very fast and requires little computing power, which makes it particularly suitable for irregular, region-based cell microscopy images of poor quality. Firstly, an adaptive threshold-based method is used to create a rough mask of the regions of interest (cells). Then, morphological operations are designed and applied to achieve the segmentation of the cells. In addition, an extra morphological operation, dilation, is applied to create the final mask with some redundancy to avoid the "edge effect" after removing false cells. Finally, the ROI and the region of background (ROB) are obtained and encoded individually at flexibly different compression ratios based on JPEG2000, which allows the quality trade-off between ROI and ROB to be adjusted without coding the ROI shape. The experimental results confirm the effectiveness of the proposed algorithm; compared with plain JPEG2000, it achieves better subjective and objective quality at the same compression ratios.
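A sketch of such a mask pipeline using scipy.ndimage is given below: threshold, morphological clean-up, removal of small (false) components, and a final dilation margin against the edge effect. The threshold rule, structuring-element sizes, and minimum component size are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def roi_mask(gray, margin=5, min_size=64):
    """Sketch of the ROI mask pipeline for cell microscopy images."""
    mask = gray < gray.mean()                             # rough cell/background split
    mask = ndi.binary_opening(mask, np.ones((3, 3)))      # remove speckle
    mask = ndi.binary_closing(mask, np.ones((5, 5)))      # fill small gaps
    lbl, n = ndi.label(mask)
    sizes = ndi.sum(mask, lbl, range(1, n + 1))
    keep = 1 + np.flatnonzero(sizes >= min_size)          # drop false cells
    mask = np.isin(lbl, keep)
    return ndi.binary_dilation(mask, np.ones((margin, margin)))  # edge-effect margin
```

The ROI pixels are then coded at a higher JPEG2000 quality than the ROB pixels.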
Abstract: In this paper, the image quality of two types of compression methods, wavelet-based and seam-carving-based, is investigated, and a metric is introduced to compare image quality under the two schemes. Meyer, Coiflet 2, and JPEG2000 wavelets are used as the wavelet-based methods. A Hausdorff-distance-based metric (HDM) is proposed and used for the comparison instead of model-based or correspondence-based matching techniques, because there is no pairing of points between the two sets being compared. In addition, an entropy-based metric (EM) or a peak signal-to-noise ratio-based metric (PSNRM) cannot be used to compare the two schemes, as seam carving tends to deform objects. Wavelet-compressed images at different compression percentages were analyzed with HDM and EM, and it was observed that HDM follows EM/PSNRM for wavelet-based compression. HDM was then used to compare wavelet-compressed and seam-carved images at different compression percentages. Initial results showed that HDM is the better metric for comparing wavelet-based and seam-carved images.
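The symmetric Hausdorff distance needs no point correspondences, which is exactly why it remains usable when seam carving deforms objects. A direct sketch follows (SciPy's scipy.spatial.distance.directed_hausdorff provides the one-sided variant); applying it to edge-point sets extracted from the original and the compressed image is an assumption about the metric's input.

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets, e.g. edge
    points of the original and the compressed image."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # all pairwise distances
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```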
Abstract: In this paper, we introduce a novel approach for jointly compressing a medical image and multichannel biosignals (e.g., ECG, EEG). The technique is based on the idea of Multimodal Compression (MC), which requires only one codec instead of multiple codecs. Biosignal samples are merged into the spatial domain of the image using a specific mixing function, and the whole mixture is then compressed using JPEG 2000. The spatial mixing function inserts samples into low-frequency regions defined using a set of operations, including down-sampling, interpolation, and quad-tree decomposition. Decoding is achieved by inverting the process with a separation function. Results show that this technique performs better in terms of Compression Ratio (CR) than approaches that encode the modalities separately. The reconstruction quality is evaluated on a set of test data using the PSNR (Peak Signal-to-Noise Ratio) for the image and the PRD (Percent Root-mean-square Difference) for the biosignals.
Abstract: Hyperspectral images (HSI) have hundreds of bands, which impose a heavy burden on data storage and transmission bandwidth. Quite a few compression techniques have been explored for HSI in the past decades; one high-performing technique is the combination of principal component analysis (PCA) and JPEG-2000 (J2K). However, since several new compression codecs have been developed in the 15 years after J2K, it is worthwhile to revisit this research area and investigate whether there are better techniques for HSI compression. In this paper, we present some new results in HSI compression, aiming at perceptually lossless compression: the decompressed HSI data cube reaches a performance metric near 40 dB in terms of peak signal-to-noise ratio (PSNR) or human-visual-system (HVS) based metrics. The key idea is to compare several combinations of PCA and video/image codecs. Three representative HSI data cubes were used in our studies. Four video/image codecs (J2K, X264, X265, and Daala) were investigated, and four performance metrics were used in our comparative studies. Moreover, alternative techniques such as the video, split-band, and PCA-only approaches were also compared. It was observed that the combination of PCA and X264 yielded the best performance in terms of compression performance and computational complexity; in some cases, the PCA + X264 combination achieved more than 3 dB over the PCA + J2K combination.
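The spectral PCA here is the same decorrelation sketched for the Mastcam study above, just with hundreds of input bands. The 40 dB perceptually lossless operating point can be checked with a cube-level PSNR such as the sketch below; using the cube maximum as the peak value is an assumption.

```python
import numpy as np

def cube_psnr(ref, rec, peak=None):
    """PSNR over a whole (bands, H, W) hyperspectral cube; values near
    40 dB mark the perceptually lossless operating point used here."""
    ref, rec = ref.astype(float), rec.astype(float)
    peak = ref.max() if peak is None else peak
    mse = np.mean((ref - rec) ** 2)
    return 10 * np.log10(peak ** 2 / mse)
```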