Funding: Funded by the National Natural Science Foundation (No. 42261134535), the National Key Research and Development Program (No. 2023YFE0125000), the Frontiers Science Center for Deep-time Digital Earth (No. 2652023001), and the 111 Project of the Ministry of Science and Technology (No. BP0719021); supported by the Department of Geology, University of Vienna (No. FA536901).
Abstract: Backscatter electron analysis from scanning electron microscopes (BSE-SEM) produces high-resolution image data of both rock samples and thin sections, showing detailed structural and geochemical (mineralogical) information. This allows an in-depth exploration of the rock microstructures and the coupled chemical characteristics in the BSE-SEM image to be made using image processing techniques. Although image processing is a powerful tool for revealing the more subtle data “hidden” in a picture, it is not a commonly employed method in geoscientific microstructural analysis. Here, we briefly introduce the general principles of image processing, and further discuss its application in studying rock microstructures using BSE-SEM image data.
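As a hedged illustration of the kind of processing described above, mineral phases in a BSE image can be separated by intensity thresholding, since brighter grey levels correspond to higher mean atomic number. The grey levels and threshold below are synthetic, not values from the paper:

```python
import numpy as np

# Synthetic BSE-SEM image: two mineral phases with different mean
# backscatter intensity (brighter = higher mean atomic number).
rng = np.random.default_rng(0)
dark_phase = rng.normal(60, 5, size=(64, 32))     # e.g. a quartz-like phase
bright_phase = rng.normal(180, 5, size=(64, 32))  # e.g. a pyrite-like phase
image = np.hstack([dark_phase, bright_phase])

# Segment by a global intensity threshold between the two modes.
threshold = 120
mask = image > threshold          # True where the bright phase is

# Area fraction of the bright phase: a basic microstructural statistic.
area_fraction = mask.mean()
```

Real workflows add noise filtering and per-phase grey-level calibration, but the thresholding step above is the core of phase segmentation from BSE intensity.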
Abstract: A sixteen-tree method for data compression of bilevel images is described. This method is highly efficient, loses no information during compression, and is easy to implement.
Funding: Project supported by the Research Grants Council of the Hong Kong Special Administrative Region, China (Grant No. CityU123009).
Abstract: A chaos-based cryptosystem for fractal image coding is proposed. The Renyi chaotic map is employed to determine the order of processing the range blocks and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Renyi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and so the proposed scheme is sensitive to the key.
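The Rényi map is commonly defined as the generalized beta-transformation x → βx mod 1 with non-integer β > 1. A minimal keystream sketch under that assumption (the seed and β below are illustrative, not the paper's parameters):

```python
def renyi_keystream(x0, beta, n_bits):
    """Generate n_bits pseudo-random bits from the Renyi map
    x -> (beta * x) mod 1, thresholding each iterate at 0.5."""
    bits = []
    x = x0
    for _ in range(n_bits):
        x = (beta * x) % 1.0
        bits.append(1 if x >= 0.5 else 0)
    return bits

stream = renyi_keystream(x0=0.3141592653, beta=3.9, n_bits=64)

# Masking an encoded sequence with the keystream (XOR), as the scheme does:
plaintext = [0x41, 0x42]  # toy "encoded sequence"
key_bytes = [int("".join(map(str, stream[i:i + 8])), 2) for i in (0, 8)]
ciphertext = [p ^ k for p, k in zip(plaintext, key_bytes)]
recovered = [c ^ k for c, k in zip(ciphertext, key_bytes)]
```

Because XOR is its own inverse, decryption with the same key recovers the plaintext exactly; key sensitivity comes from the chaotic dependence of the stream on x0 and β.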
Abstract: The traditional method of checking print quality uses printing control strips, but its repeatability and stability are not very good. In this paper, image-based methods for checking printing quality are taken as the research object. On the basis of the traditional checking methods of printing quality, and combining the methods and theory of digital image processing with printing theory in the new domain of image quality checking, we construct a printing-quality checking system based on image processing, and expound the theoretical design and the model of this system. This is an application of machine vision. The system uses a high-resolution industrial color CCD (Charge-Coupled Device) camera. It displays real-time photographs on the monitor and feeds the video signal to an image acquisition card, from which the image data are transmitted over the computer's PCI bus to memory. At the same time, the system carries out processing and data analysis. The method is verified by experiments, which mainly concern the data conversion of images and the display of printing ink limits.
Abstract: Based on Jacquin's work, this paper presents an adaptive block-based fractal image coding scheme. Firstly, masking functions are used to classify range blocks and weight the mean square error (MSE) of images. Secondly, an adaptive block partition scheme is introduced by developing the quadtree partition method. Thirdly, a piecewise uniform quantization strategy is applied to quantize the luminance shifting. Finally, experimental results are shown and compared with those reported by Jacquin and Lu to verify the validity of the methods addressed by the authors.
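A quadtree partition of the kind extended above can be sketched as a recursive split driven by block activity. The variance threshold and block sizes here are illustrative assumptions, not the paper's values:

```python
import numpy as np

def quadtree_partition(img, x, y, size, min_size, var_threshold, blocks):
    """Recursively split a square block until its pixel variance falls
    below var_threshold or the minimum block size is reached."""
    block = img[y:y + size, x:x + size]
    if size <= min_size or block.var() <= var_threshold:
        blocks.append((x, y, size))
        return
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree_partition(img, x + dx, y + dy, half,
                           min_size, var_threshold, blocks)

# Test image: flat on the left half, noisy on the right half.
rng = np.random.default_rng(1)
img = np.zeros((16, 16))
img[:, 8:] = rng.normal(0, 10, size=(16, 8))

blocks = []
quadtree_partition(img, 0, 0, 16, min_size=4, var_threshold=1.0, blocks=blocks)
# Flat regions stay as large blocks; noisy regions split down to 4x4.
```

Adaptive partitioning spends small blocks only where detail demands them, which is exactly the rate saving the abstract claims over a fixed block grid.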
Abstract: This paper introduces MapReduce as a distributed data processing model, using the open-source Hadoop framework to manipulate large volumes of data. The huge volume of data in the modern world, particularly multimedia data, creates new requirements for processing and storage. As an open-source distributed computational framework, Hadoop provides the infrastructure needed to process large numbers of images across an arbitrarily large set of computing nodes. This paper introduces the framework, current work based on it, and its advantages and disadvantages.
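The MapReduce programming model itself can be sketched in a few lines of plain Python. This illustrates only the model (map, shuffle, reduce), not the Hadoop runtime, scheduling, or distributed storage:

```python
from collections import defaultdict
from itertools import chain

def map_phase(records, mapper):
    """Apply the mapper to every record, yielding (key, value) pairs."""
    return chain.from_iterable(mapper(r) for r in records)

def shuffle(pairs):
    """Group intermediate values by key, as the framework would."""
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups, reducer):
    """Apply the reducer to each key's list of values."""
    return {k: reducer(k, vs) for k, vs in groups.items()}

# Classic word count expressed in the model:
records = ["big data", "big images"]
mapper = lambda line: [(w, 1) for w in line.split()]
reducer = lambda key, values: sum(values)
counts = reduce_phase(shuffle(map_phase(records, mapper)), reducer)
```

In Hadoop the same mapper/reducer pair would run in parallel across nodes, with the shuffle performed by the framework over the network.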
Funding: The Scientific Research Deanship at the University of Ha’il, Saudi Arabia, funded this project through project number RG-21104.
Abstract: In today’s world, image processing techniques play a crucial role in the prognosis and diagnosis of various diseases due to the development of several precise and accurate methods for medical images. Automated analysis of medical images is essential for doctors, as manual investigation often leads to inter-observer variability. This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework. The proposed hybridized method combines Modified Inertia Weight Particle Swarm Optimization (MIWPSO) and Fuzzy C-Means clustering (FCM) algorithms. Traditional FCM does not incorporate spatial neighborhood features, making it highly sensitive to noise, which significantly affects segmentation output. Our method incorporates a modified FCM that includes spatial functions in the fuzzy membership matrix to eliminate noise. The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation. Furthermore, segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule (DT-TAR) Algorithm. Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42% on the dataset, highlighting its significant impact on improving diagnostic capabilities in medical imaging.
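One widely used way to add a spatial function to the FCM membership matrix is to re-weight each pixel's membership by the summed membership of its neighbourhood, so isolated noisy assignments are suppressed. The weighting below follows that general idea and is an assumption for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def spatial_correction(u, p=1, q=2):
    """Re-weight an FCM membership array u of shape (C, H, W) by a
    spatial function h: the summed membership over each pixel's 3x3
    neighbourhood. Isolated noisy memberships are suppressed because
    their neighbours do not support them."""
    C, H, W = u.shape
    padded = np.pad(u, ((0, 0), (1, 1), (1, 1)), mode="edge")
    h = np.zeros_like(u)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            h += padded[:, dy:dy + H, dx:dx + W]
    weighted = (u ** p) * (h ** q)
    return weighted / weighted.sum(axis=0, keepdims=True)

# Membership for 2 clusters on a 3x3 patch: the centre pixel is a noisy
# outlier assigned to cluster 1 although all its neighbours favour cluster 0.
u0 = np.full((3, 3), 0.9)
u0[1, 1] = 0.1
u = np.stack([u0, 1.0 - u0])
u_new = spatial_correction(u)
# After correction, the centre pixel's membership in cluster 0 rises above 0.5.
```

In a full segmentation loop this correction is applied after each standard FCM membership update, before recomputing the cluster centres.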
Abstract: Precision Livestock Farming studies are based on data measured from animals via technical devices. In automated systems, the animals’ reaction towards the devices and individual animal behaviour during the gathering of sensor data are usually not accounted for. In this study, 14 Holstein-Friesian cows were recorded with a 2D video camera while walking through a scanning passage comprising six Microsoft Kinect 3D cameras. Elementary behavioural traits like how long the cows avoided the passage, the time they needed to walk through, or the number of times they stopped walking were assessed from the video footage and analysed with respect to the target variable “udder depth”, which was calculated from the recorded 3D data using an automated procedure. Ten repeated passages were recorded of each cow. During the repetitions, the cows adjusted individually (p < 0.001) to the recording situations. The averaged total time to complete a passage (p = 0.05) and the averaged number of stops (p = 0.07) depended on the lactation numbers of the cows. The measurement precision of the target variable “udder depth” was affected by the time the cows avoided the recording (p = 0.06) and by the time it took them to walk through the scanning passage (p = 0.03). Effects of animal behaviour during the collection of sensor data can alter the results and should, thus, be considered in the development of sensor-based devices.
Abstract: This paper presents a new method for image coding and compression: ADCTVQ (Adaptive Discrete Cosine Transform Vector Quantization). In this method, the DCT conforms to visual properties and has an encoding ability inferior only to the optimal transform, the KLT. Its vector quantization can maintain minimum quantization distortion and greatly increase the compression ratio. In order to improve compression efficiency, an adaptive strategy of selecting reserved region patterns is applied to preserve the high energy at the same compression ratio. The experimental results show that they are satisfactory when the compression ratio is greater than 20.
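The DCT at the heart of such a scheme is a fixed orthonormal transform that compacts the energy of smooth signals into a few low-frequency coefficients. A minimal 8-point DCT-II sketch (illustrative, not the paper's implementation):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix C, so that y = C @ x and x = C.T @ y."""
    k = np.arange(n).reshape(-1, 1)   # frequency index
    i = np.arange(n).reshape(1, -1)   # sample index
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)        # DC row gets the smaller scale factor
    return c

C = dct_matrix(8)
block = np.linspace(10, 80, 8)        # a smooth 8-sample signal
coeffs = C @ block
# Energy compacts into the low-frequency coefficients, which is what makes
# coarse quantization of the high frequencies cheap.
recovered = C.T @ coeffs              # inverse transform (C is orthonormal)
```

Vector quantization is then applied to groups of these coefficients rather than to raw pixels, cutting the vector dimension as the abstract describes.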
Funding: Supported by the High Technology Research and Development Programme of China.
Abstract: The sonar image processing system is an important intelligent system of the Autonomous Underwater Vehicle. Based on the TMS320C30 high-speed DSP, it realizes sonar image compression and underwater object detection, including obstacle recognition, in real time. In this paper, the software and hardware designs of this system are introduced and the experimental results are given.
Abstract: A recent trend in computer graphics and image processing is to use Iterated Function Systems (IFS) to generate and describe both man-made graphics and natural images. Jacquin was the first to propose a fully automatic gray scale image compression algorithm, which is referred to as a typical static fractal transform based algorithm in this paper. By using this algorithm, an image can be condensely described as a fractal transform operator, which is the combination of a set of fractal mappings. When the fractal transform operator is iteratively applied to any initial image, a unique attractor (the reconstructed image) is reached. In this paper, a dynamic fractal transform is presented which is a modification of the static transform. Instead of being fixed, the dynamic transform operator varies in each decoder iteration, and thus differs from static transform operators. The new transform has advantages in improving coding efficiency and shows better convergence for the decoder.
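The unique-attractor property relied on above is the Banach fixed-point theorem: iterating any contractive transform converges to its unique fixed point regardless of the starting image. A toy scalar "decoder" makes the principle concrete (not an image codec):

```python
def decode(transform, x0, iterations=50):
    """Iterate a contractive transform from an arbitrary starting point;
    the result converges to the transform's unique fixed point (attractor)."""
    x = x0
    for _ in range(iterations):
        x = transform(x)
    return x

# A contractive affine map x -> 0.5*x + 3 has fixed point 3 / (1 - 0.5) = 6.
t = lambda x: 0.5 * x + 3.0

# Any initial "image" converges to the same attractor:
a = decode(t, 0.0)
b = decode(t, 1000.0)
```

In fractal coding the transform acts block-wise on whole images (contractive in a suitable image metric), but the decoder loop is exactly this iteration; a dynamic transform changes `t` between iterations.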
Funding: This project was supported by the National Natural Science Foundation of China (60532060), the Hainan Education Bureau Research Project (Hjkj200602), and the Hainan Natural Science Foundation (80551).
Abstract: A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data, and can implement multi-resolution analysis like the wavelet transform. The algorithm is suitable for analyzing non-stationary data and can effectively remove the correlation within observed data. Then, by discussing the applications of EDD in image compression, the paper presents a 2-dimensional data decomposition framework and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
Abstract: An edge-oriented image sequence coding scheme is presented. On the basis of edge detection, an image can be divided into a sensitized region and a smooth region. In this scheme, the architecture of the sensitized region is approximated with linear segments. A rectangular belt is then constructed for each segment. Finally, the gray value distribution in the region is fitted by normal-form polynomials. The model matching and motion analysis are also based on the architecture of the sensitized region. For the smooth region, run-length scanning and linear approximation are used. By means of normal-form polynomial fitting and motion prediction by matching, the images are compressed. Simulations show that the subjective quality of the reconstructed picture is excellent at 0.0075 bit per pel.
基金Supported by the National Natural Science Foundation of China!( 6 9875 0 0 9)
Abstract: In this paper, the second generation wavelet transform is applied to lossless image coding, owing to its characteristic of being a reversible integer wavelet transform. The second generation wavelet transform can provide a higher compression ratio than Huffman coding while, unlike the first generation wavelet transform, it reconstructs the image without loss. The experimental results show that the second generation wavelet transform can obtain excellent performance in medical image compression coding.
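Reversible integer wavelet transforms of this kind are built with the lifting scheme. A minimal integer Haar step (the S-transform) shows why the round trip is exact: every operation is integer, and the inverse undoes each lifting step in reverse order. This is a generic sketch, not the paper's particular filter:

```python
def haar_lifting_forward(a, b):
    """One integer Haar step via lifting: detail d and approximation s.
    All operations are integer, so the inverse below is exact (lossless)."""
    d = b - a              # predict step: difference
    s = a + (d >> 1)       # update step: floor of the pair's mean
    return s, d

def haar_lifting_inverse(s, d):
    """Undo the lifting steps in reverse order."""
    a = s - (d >> 1)
    b = d + a
    return a, b

# Round-trips exactly, including negative samples (>> is an arithmetic shift):
pairs = [(100, 103), (7, 200), (-5, 4)]
roundtrip = [haar_lifting_inverse(*haar_lifting_forward(a, b)) for a, b in pairs]
```

A full transform applies this step to adjacent sample pairs along rows and columns, then recurses on the approximation band, giving the multi-resolution structure with perfect reconstruction.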
Abstract: First, a simple and practical rectangular transform is given, and then the rapidly developing vector quantization technique is introduced. We combine the rectangular transform with the vector quantization technique for image data compression. The combination cuts down the dimensions of vector coding, so the size of the codebook can reasonably be reduced. This method reduces the computational complexity and speeds up the vector coding process. Experiments using an image processing system show that this method is very effective in the field of image data compression.
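The vector quantization step mentioned above amounts to nearest-codeword search: each input vector is replaced by the index of its closest codebook entry. A minimal sketch with a hypothetical two-dimensional codebook:

```python
import numpy as np

def vq_encode(vectors, codebook):
    """Map each input vector to the index of its nearest codeword
    (squared Euclidean distance)."""
    # Distances of every vector to every codeword: shape (N, K).
    d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def vq_decode(indices, codebook):
    """Reconstruction is just a codebook lookup."""
    return codebook[indices]

codebook = np.array([[0.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
vectors = np.array([[1.0, -1.0], [9.0, 11.0]])
indices = vq_encode(vectors, codebook)
reconstruction = vq_decode(indices, codebook)
```

Only the indices are transmitted, so the rate depends on the codebook size; reducing the vector dimension via the preceding transform is what lets the codebook stay small.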
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 60573172 and 60973152), the Superior University Doctor Subject Special Scientific Research Foundation of China (Grant No. 20070141014), and the Natural Science Foundation of Liaoning Province of China (Grant No. 20082165).
Abstract: This paper proposes an efficient lossless image compression scheme for still images based on an adaptive arithmetic coding compression algorithm. The algorithm increases the image coding compression rate and ensures the quality of the decoded image by combining an adaptive probability model with predictive coding. Using an adaptive model for each encoded image block dynamically estimates the probability of the relevant image block. The decoded image block can accurately recover the encoded image according to the code book information. We adopt an adaptive arithmetic coding algorithm for image compression that greatly improves the image compression rate. The results show that it is an effective compression technology.
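The adaptive probability model at the core of arithmetic coding can be sketched as symbol counts that are updated as data is seen; the ideal code length is then -log2 p bits per symbol under the current model. This sketches only the model (with add-one smoothing as an assumption), not the arithmetic coder's interval arithmetic:

```python
import math
from collections import Counter

def adaptive_code_length(symbols):
    """Ideal arithmetic-coded length in bits under an adaptive order-0
    model with Laplace (add-one) smoothing over the symbol alphabet."""
    alphabet = set(symbols)
    counts = Counter({s: 1 for s in alphabet})  # start from a uniform prior
    total = len(alphabet)
    bits = 0.0
    for s in symbols:
        bits += -math.log2(counts[s] / total)   # code s under current model
        counts[s] += 1                          # then update the model
        total += 1
    return bits

skewed = adaptive_code_length("a" * 95 + "b" * 5)
uniform = adaptive_code_length("ab" * 50)
# The adaptive model spends far fewer bits on the skewed sequence,
# because the decoder can mirror the same count updates.
```

Per-block models, as in the scheme above, let the statistics track local image content instead of one global distribution.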
Funding: Project (60873230) supported by the National Natural Science Foundation of China.
Abstract: To compress screen image sequences in real-time remote and interactive applications, a novel compression method, named CABHG, is proposed. CABHG employs hybrid coding schemes that consist of intra-frame and inter-frame coding modes. The intra-frame coding is a rate-distortion optimized adaptive block-size scheme that can also be used for the compression of a single screen image. The inter-frame coding utilizes a hierarchical group of pictures (GOP) structure to improve system performance during random accesses and fast-backward scans. Experimental results demonstrate that the proposed CABHG method has an approximately 47%-48% higher compression ratio and 46%-53% lower CPU utilization than professional screen image sequence codecs such as the TechSmith Ensharpen codec and the Sorenson 3 codec. Compared with general video codecs such as the H.264 codec, the XviD MPEG-4 codec and Apple's Animation codec, CABHG also shows an 87%-88% higher compression ratio and 64%-81% lower CPU utilization.
Funding: Supported by the National Natural Science Foundation of China (60173051), the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of the Ministry of Education of China, and the Liaoning Province Higher Education Research Foundation (20040206).
Abstract: Visual data mining is an important approach among data mining techniques. Most visual data mining methods are based on computer graphics techniques, but few exploit image processing techniques. This paper proposes an image processing method, named RNAM (resemble neighborhood averaging method), to facilitate visual data mining; it post-processes the data-mining result image and helps users discover significant features and useful patterns effectively. The experiments show that the method is intuitive, easily understood, and effective. It provides a new approach for visual data mining.
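Neighborhood averaging of the kind the method's name suggests can be sketched as a 3x3 mean filter; treating this as RNAM's core operation is an assumption for illustration only:

```python
import numpy as np

def neighborhood_average(img):
    """Replace every pixel by the mean of its 3x3 neighbourhood
    (edges handled by replicating the border pixels)."""
    padded = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

# A single bright outlier in a flat result image is spread over its
# neighbourhood, making the local pattern easier to see than the spike.
img = np.zeros((5, 5))
img[2, 2] = 9.0
smoothed = neighborhood_average(img)
```

In a visual-mining context the smoothing turns scattered single-pixel hits in a result image into visible regional tendencies.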
Abstract: The geological data are constructed in vector format in a geographical information system (GIS), while other data such as remote sensing images, geographical data and geochemical data are saved in raster format. This paper converts the vector data into 8-bit images by programming, weighted according to their importance to mineralization. With this method, the geological meaning can be communicated through the raster images. The paper also fuses the geographical and geochemical data with the programmed strata data. The result shows that image fusion can express different intensities effectively and visualize the structural characters in two dimensions. Furthermore, it can also produce optimized information from multi-source data and express it more directly.
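Once all layers are rasterized, fusion can be sketched as a weighted sum of normalized layers scaled back to 8-bit; the layer names and weights below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def fuse_layers(layers, weights):
    """Weighted fusion of equally shaped raster layers into one 8-bit image.
    Each layer is first normalised to [0, 1], then combined and rescaled."""
    acc = np.zeros_like(layers[0], dtype=float)
    for layer, w in zip(layers, weights):
        lo, hi = layer.min(), layer.max()
        norm = (layer - lo) / (hi - lo) if hi > lo else np.zeros_like(acc)
        acc += w * norm
    acc /= sum(weights)
    return np.round(acc * 255).astype(np.uint8)

# Hypothetical strata and geochemical layers on a 2x2 grid; the strata
# layer is weighted more heavily to reflect its importance to mineralization.
strata = np.array([[0, 100], [0, 100]], dtype=float)
geochem = np.array([[0, 0], [50, 50]], dtype=float)
fused = fuse_layers([strata, geochem], weights=[2.0, 1.0])
```

The fused image expresses both sources in one intensity scale, which is the "different intensities in 2 dimensions" effect described above.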
Abstract: In this paper, three techniques for compressing classified satellite cloud images with no distortion are presented: line run coding, quadtree DF (Depth-First) representation, and H coding. Of these three codings, the first two were invented by others and the third by ourselves. A comparison of their compression rates is given at the end of this paper. Further application of these image compression techniques to satellite data and other meteorological data looks promising.
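Line run coding for classified images can be sketched as run-length encoding of each scan line, which is lossless by construction. This is a generic sketch, not the paper's exact bitstream format:

```python
def run_length_encode(line):
    """Encode a scan line of class labels as (value, run_length) pairs."""
    runs = []
    for v in line:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([v, 1])       # start a new run
    return [tuple(r) for r in runs]

def run_length_decode(runs):
    """Expand (value, run_length) pairs back into the scan line."""
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

line = [0, 0, 0, 2, 2, 1, 1, 1, 1]    # e.g. cloud-class labels on one line
runs = run_length_encode(line)
restored = run_length_decode(runs)
```

Classified images have long constant runs of class labels, which is exactly the redundancy run coding (and, in two dimensions, the quadtree representation) exploits.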