Backscatter electron analysis from scanning electron microscopes (BSE-SEM) produces high-resolution image data of both rock samples and thin-sections, showing detailed structural and geochemical (mineralogical) information. This allows an in-depth exploration of the rock microstructures and the coupled chemical characteristics in the BSE-SEM image to be made using image processing techniques. Although image processing is a powerful tool for revealing the more subtle data "hidden" in a picture, it is not a commonly employed method in geoscientific microstructural analysis. Here, we briefly introduce the general principles of image processing, and further discuss its application in studying rock microstructures using BSE-SEM image data.
In today's world, image processing techniques play a crucial role in the prognosis and diagnosis of various diseases due to the development of several precise and accurate methods for medical images. Automated analysis of medical images is essential for doctors, as manual investigation often leads to inter-observer variability. This research aims to enhance healthcare by enabling the early detection of diabetic retinopathy through an efficient image processing framework. The proposed hybridized method combines Modified Inertia Weight Particle Swarm Optimization (MIWPSO) and Fuzzy C-Means clustering (FCM) algorithms. Traditional FCM does not incorporate spatial neighborhood features, making it highly sensitive to noise, which significantly affects segmentation output. Our method incorporates a modified FCM that includes spatial functions in the fuzzy membership matrix to eliminate noise. The results demonstrate that the proposed FCM-MIWPSO method achieves highly precise and accurate medical image segmentation. Furthermore, segmented images are classified as benign or malignant using the Decision Tree-Based Temporal Association Rule (DT-TAR) algorithm. Comparative analysis with existing state-of-the-art models indicates that the proposed FCM-MIWPSO segmentation technique achieves a remarkable accuracy of 98.42% on the dataset, highlighting its significant impact on improving diagnostic capabilities in medical imaging.
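For context on the clustering step, a minimal sketch of the standard FCM membership update, the step the paper modifies with spatial functions (the toy data, cluster centers, and function names here are illustrative, not the paper's implementation):

```python
import numpy as np

def fcm_memberships(X, centers, m=2.0):
    """One membership-update step of standard fuzzy C-means.

    u[i, j] is the degree to which sample i belongs to cluster j,
    computed from inverse-distance ratios raised to 2/(m-1).
    """
    # Distances from every sample to every cluster center.
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    d = np.fmax(d, 1e-12)  # avoid division by zero at exact centers
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

# Toy data: two well-separated 2-D blobs.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
U = fcm_memberships(X, centers)
```

Each row of U sums to 1; the spatially modified variant in the paper additionally weighs each membership by its pixel neighborhood before normalizing.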
A sixteen-tree method for data compression of bilevel images is described. This method has high efficiency, loses no information during compression, and is easy to implement.
A chaos-based cryptosystem for fractal image coding is proposed. The Renyi chaotic map is employed to determine the order of processing the range blocks and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fractal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Renyi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and so the proposed scheme is sensitive to the key.
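A sketch of keystream masking driven by a chaotic map, using the logistic map as an illustrative stand-in (the paper uses the Renyi map, whose parameters are not given here; the seed and quantization are assumptions):

```python
def logistic_keystream(x0, n, r=3.99):
    """Byte keystream from logistic-map iterates; a stand-in for the
    Renyi map used in the paper."""
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)            # chaotic iteration
        out.append(int(x * 256) & 0xFF)  # quantize each iterate to one byte
    return bytes(out)

def mask(data, key_x0):
    # XOR-mask the encoded sequence with the keystream; applying the
    # same mask twice recovers the original bytes.
    ks = logistic_keystream(key_x0, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

plain = b"fractal image code"
cipher = mask(plain, 0.3141592653589793)
recovered = mask(cipher, 0.3141592653589793)
```

The key here is the map's initial condition; sensitivity to it comes from the exponential divergence of chaotic trajectories.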
The traditional printing-quality checking method uses printing control strips, but its repeatability and stability are poor. In this paper, image-based methods for checking printing quality are taken as the research object. Building on the traditional checking methods of printing quality, and combining the methods and theory of digital image processing with printing theory in the new domain of image quality checking, we construct an image-processing-based system for checking printing quality and expound its theoretical design and model. The system is an application of machine vision. It uses a high-resolution industrial color CCD (Charge-Coupled Device) camera, displays real-time photographs on a monitor, feeds the video signal to an image-acquisition card, and transmits the image data over the computer's PCI bus to memory, where the system carries out processing and data analysis. The method is validated by experiments, mainly concerning the data conversion of images and the ink-limit display of printing.
Based on Jacquin's work, this paper presents an adaptive block-based fractal image coding scheme. Firstly, masking functions are used to classify range blocks and to weight the mean square error (MSE) of images. Secondly, an adaptive block partition scheme is introduced by extending the quadtree partition method. Thirdly, a piecewise uniform quantization strategy is applied to quantize the luminance shift. Finally, experimental results are shown and compared with those reported by Jacquin and Lu to verify the validity of the proposed methods.
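A minimal sketch of the quadtree partition idea behind the adaptive scheme: split a block whenever it is too "busy" (here, when its variance exceeds a threshold; the threshold, minimum block size, and toy image are illustrative assumptions, not the paper's criterion):

```python
import numpy as np

def quadtree(img, y, x, size, thresh, min_size, leaves):
    """Recursively split a square block until its variance falls
    below `thresh` or the block reaches `min_size`."""
    if size <= min_size or img[y:y+size, x:x+size].var() <= thresh:
        leaves.append((y, x, size))  # record this block as a leaf
        return
    h = size // 2
    for dy in (0, h):
        for dx in (0, h):
            quadtree(img, y + dy, x + dx, h, thresh, min_size, leaves)

# Toy 16x16 image: three flat quadrants and one busy quadrant.
img = np.zeros((16, 16))
img[:8, :8] = np.arange(64).reshape(8, 8)
leaves = []
quadtree(img, 0, 0, 16, thresh=1.0, min_size=4, leaves=leaves)
```

Flat regions end up as large blocks and detailed regions as small ones, which is what lets the adaptive partition spend bits where the image needs them.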
This paper introduces MapReduce as a distributed data processing model, using the open source Hadoop framework to manipulate large volumes of data. The huge volume of data in the modern world, particularly multimedia data, creates new requirements for processing and storage. As an open source distributed computational framework, Hadoop provides the infrastructure needed to process large numbers of images across an arbitrarily large set of computing nodes. This paper introduces the framework, related work, and its advantages and disadvantages.
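The MapReduce model itself is easy to sketch without a cluster: a map phase emits key-value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. A single-process word-count sketch (document contents are illustrative):

```python
from collections import defaultdict
from itertools import chain

def map_phase(doc):
    # Mapper: emit a (word, 1) pair for every word in one document.
    return [(w, 1) for w in doc.split()]

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word.
    return {k: sum(vs) for k, vs in groups.items()}

docs = ["big data image data", "image processing"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(d) for d in docs)))
```

Hadoop runs the same contract at scale, distributing mappers and reducers across nodes; for image workloads the mapper input is an image (or image chunk) rather than a line of text.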
Precision Livestock Farming studies are based on data measured from animals via technical devices. In automated settings, the animals' reactions towards the devices, and individual animal behaviour during the gathering of sensor data, are usually not accounted for. In this study, 14 Holstein-Friesian cows were recorded with a 2D video camera while walking through a scanning passage comprising six Microsoft Kinect 3D cameras. Elementary behavioural traits, such as how long the cows avoided the passage, the time they needed to walk through, and the number of times they stopped walking, were assessed from the video footage and analysed with respect to the target variable "udder depth", which was calculated from the recorded 3D data using an automated procedure. Ten repeated passages were recorded for each cow. Over the repetitions, the cows adjusted individually (p < 0.001) to the recording situations. The averaged total time to complete a passage (p = 0.05) and the averaged number of stops (p = 0.07) depended on the lactation numbers of the cows. The measurement precision of the target variable "udder depth" was affected by the time the cows avoided the recording (p = 0.06) and by the time it took them to walk through the scanning passage (p = 0.03). Effects of animal behaviour during the collection of sensor data can alter the results and should thus be considered in the development of sensor-based devices.
This paper presents a new method for image coding and compression, ADCTVQ (Adaptive Discrete Cosine Transform Vector Quantization). In this method, the DCT conforms to visual properties and has an encoding ability inferior only to the optimal transform, the KLT. Its vector quantization can keep quantization distortion to a minimum and greatly increase the compression ratio. To improve compression efficiency further, an adaptive strategy for selecting reserved region patterns is applied to preserve the high-energy coefficients at the same compression ratio. The experimental results are satisfactory when the compression ratio is greater than 20.
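The energy compaction that ADCTVQ exploits can be shown with a plain orthonormal DCT-II on one smooth block: almost all of the energy lands in a low-frequency "reserved region" (the block contents and the 4x4 region choice are illustrative, not the paper's patterns):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C @ x gives the 1-D DCT of x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)  # DC row rescaled for orthonormality
    return C

n = 8
C = dct_matrix(n)
block = np.outer(np.linspace(0, 1, n), np.linspace(0, 1, n))  # smooth 8x8 block
coeffs = C @ block @ C.T                                      # separable 2-D DCT
low = coeffs[:4, :4]   # a "reserved region": the low-frequency quadrant
energy_kept = (low ** 2).sum() / (coeffs ** 2).sum()
```

Because C is orthonormal, discarding the other coefficients loses exactly 1 - energy_kept of the block energy, which is why vector-quantizing only the reserved region keeps distortion low at high compression ratios.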
The sonar image processing system is an important intelligent system of an Autonomous Underwater Vehicle. Based on a TMS320C30 high-speed DSP, it realizes sonar image compression and underwater object detection, including obstacle recognition, in real time. In this paper, the software and hardware designs of this system are introduced and experimental results are given.
Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves for various reasons. Therefore, preprocessing is required to remove these outliers to obtain high-quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object significantly increases when a star passes nearby, referred to as "stellar contamination," and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination." The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive. We therefore propose machine learning methods as a substitute. Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and the Light Gradient Boosting Machine, and conduct comparative analyses of the results.
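As a point of comparison for the learned classifiers, a simple statistical baseline for light-curve outlier removal is sigma clipping on a robust scatter estimate; a sketch (the noise levels, contamination amplitudes, and threshold are illustrative assumptions):

```python
import numpy as np

def sigma_clip(flux, k=3.0):
    """Flag points more than k robust standard deviations from the median.

    Bright outliers correspond to "stellar contamination" and faint
    outliers to "cloudy contamination" in the paper's terminology.
    """
    med = np.median(flux)
    mad = np.median(np.abs(flux - med))
    sigma = 1.4826 * mad  # MAD-to-sigma conversion for Gaussian noise
    return np.abs(flux - med) > k * sigma

rng = np.random.default_rng(0)
flux = rng.normal(1.0, 0.01, 200)  # a flat light curve with 1% noise
flux[50] += 0.5    # stellar contamination: brightness jumps up
flux[120] -= 0.5   # cloudy contamination: brightness drops
mask_out = sigma_clip(flux)
```

Such a per-point threshold cannot tell the two contamination types apart or use image context, which is where the CNN and SVM classifiers earn their keep.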
A detector's nondestructive readout mode allows its pixels to be read multiple times during integration, enabling generation of a series of "up-the-ramp" images that continuously accumulate photons between successive frames. Because noise is correlated across these images, optimal stacking generally requires the images to be weighted unequally to achieve the best possible target signal-to-noise ratio (SNR). Objects in the sky present wildly varied brightness characteristics, and the counts in individual pixels of the same object can also span wide ranges. Therefore, a single set of weights cannot be optimal in all cases. To ensure that the stacked image is easily calibratable, we apply the same weight to all pixels within the same frame. In practice, results for high-SNR cases degraded only slightly when we used weights derived for low-SNR cases, whereas the low-SNR cases remained more sensitive to the weights. Therefore, we propose a quasi-optimal stacking method that maximizes the stacked SNR for the case where the SNR = 1 per pixel in the last frame, and use simulated data to demonstrate that this approach enhances the SNR more strongly than the equal-weight stacking and ramp fitting methods. Furthermore, we estimate the improvements in the limiting magnitudes for the China Space Station Telescope using the proposed method. Compared with the conventional readout mode, which is equivalent to selecting the last frame from the nondestructive readout, stacking 30 up-the-ramp images can improve the limiting magnitude by approximately 0.5 mag for the telescope's near-infrared observations, effectively reducing readout noise by approximately 62%.
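The weighted-stacking optimization has a closed form worth sketching: for frame means mu and frame covariance Sigma, the SNR of a weighted sum w is (w.mu)/sqrt(w.Sigma.w), maximized by w proportional to Sigma^-1 mu. A toy model with correlated up-the-ramp noise (the frame count, signal rate, and covariance values are illustrative, not the paper's detector model):

```python
import numpy as np

# Toy model of N up-the-ramp frames: frame i carries accumulated signal
# (i+1)*s, and noise is correlated across frames because each frame
# re-reads the same accumulating charge.
N, s = 5, 1.0
mu = s * np.arange(1, N + 1)  # accumulated signal per frame
sigma_read = 2.0
# Covariance: shared accumulated shot noise min(i, j)*s plus
# independent read noise on each frame.
idx = np.arange(1, N + 1)
cov = s * np.minimum.outer(idx, idx) + sigma_read**2 * np.eye(N)

def snr(w):
    return (w @ mu) / np.sqrt(w @ cov @ w)

w_equal = np.ones(N)                 # equal-weight stacking
w_last = np.eye(N)[-1]               # conventional readout: last frame only
w_opt = np.linalg.solve(cov, mu)     # SNR-optimal weights, w ∝ cov⁻¹ · mu
```

The optimal weights beat both baselines by construction; the paper's quasi-optimal method fixes one such weight set (derived at per-pixel SNR = 1 in the last frame) so that a single calibratable weighting serves all pixels.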
Carotid artery plaques represent a major contributor to the morbidity and mortality associated with cerebrovascular disease, and their clinical significance is largely determined by the risk linked to plaque vulnerability. Therefore, classifying plaque risk constitutes one of the most critical tasks in the clinical management of this condition. While classification models derived from individual medical centers have been extensively investigated, these single-center models often fail to generalize well to multi-center data due to variations in ultrasound images caused by differences in physician expertise and equipment. To address this limitation, a Dual-Classifier Label Correction Network model (DCLCN) is proposed for the classification of carotid plaque ultrasound images across multiple medical centers. The DCLCN designs a multi-center domain adaptation module that leverages a dual-classifier strategy to extract knowledge from both source and target centers, thereby reducing feature discrepancies through a domain adaptation layer. Additionally, to mitigate the impact of image noise, a label modeling and correction module is introduced to generate pseudo-labels for the target centers and iteratively refine them using an end-to-end correction mechanism. Experiments on the carotid plaque dataset collected from three medical centers demonstrate that the DCLCN achieves commendable performance and robustness.
The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and Magnetic Resonance Imaging scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images burden the limited bandwidth of the communication channel, leading to data transmission delays. To address these security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
A recent trend in computer graphics and image processing is to use Iterated Function Systems (IFS) to generate and describe both man-made graphics and natural images. Jacquin was the first to propose a fully automatic gray-scale image compression algorithm, referred to in this paper as a typical static fractal transform based algorithm. Using this algorithm, an image can be condensely described as a fractal transform operator, which is the combination of a set of fractal mappings. When the fractal transform operator is iteratively applied to any initial image, a unique attractor (the reconstructed image) is reached. In this paper, a dynamic fractal transform is presented as a modification of the static transform. Instead of being fixed, the dynamic transform operator varies in each decoder iteration, and thus differs from static transform operators. The new transform improves coding efficiency and shows better convergence for the decoder.
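The "unique attractor from any initial image" property is just the Banach fixed-point theorem for a contractive operator, and it can be sketched with a toy contraction in place of a real fractal transform (the target pattern, contraction factor, and iteration count are illustrative assumptions):

```python
import numpy as np

def transform(x):
    """A toy contractive 'fractal transform': pull the signal halfway
    towards a fixed pattern. Any contraction of this kind has a unique
    attractor, reached from every starting point."""
    pattern = np.linspace(0, 1, x.size)
    return 0.5 * x + 0.5 * pattern  # contraction factor 0.5

a = np.zeros(16)                               # start from a black "image"
b = np.random.default_rng(1).normal(size=16)   # and from random noise
for _ in range(40):
    a, b = transform(a), transform(b)
```

Both starting points converge to the same attractor (here, the pattern itself) because each iteration halves the distance to the fixed point; real fractal decoders rely on exactly this geometric convergence, which is why the dynamic transform's per-iteration changes matter for convergence speed.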
Hematoxylin and Eosin (H&E) images, popularly used in the field of digital pathology, often pose challenges due to their limited color richness, hindering the differentiation of subtle cell features crucial for accurate classification. Enhancing the visibility of these elusive cell features helps train robust deep-learning models. However, the selection and application of image processing techniques for such enhancement have not been systematically explored in the research community. To address this challenge, we introduce Salient Features Guided Augmentation (SFGA), an approach that strategically integrates machine learning and image processing. SFGA utilizes machine learning algorithms to identify crucial features within cell images, subsequently mapping these features to appropriate image processing techniques to enhance training images. By emphasizing salient features and aligning them with corresponding image processing methods, SFGA is designed to enhance the discriminating power of deep learning models in cell classification tasks. Our research undertakes a series of experiments, each exploring the performance of different datasets and data enhancement techniques in classifying cell types, highlighting the significance of data quality and enhancement in mitigating overfitting and distinguishing cell characteristics. Specifically, SFGA focuses on identifying tumor cells from tissue for extranodal extension detection, with the SFGA-enhanced dataset showing notable advantages in accuracy. We conducted a preliminary study of five experiments, among which the accuracy of the pleomorphism experiment improved significantly from 50.81% to 95.15%. The accuracy of the other four experiments also increased, with improvements ranging from 3 to 43 percentage points. Our preliminary study shows the potential to enhance the diagnostic accuracy of deep learning models and proposes a systematic approach that could enhance cancer diagnosis, contributing a first step towards using SFGA in medical image enhancement.
In the task of classifying massive celestial data, the accurate classification of galaxies, stars, and quasars usually relies on spectral labels. However, spectral data account for only a small fraction of all astronomical observation data, and the target-source classification information in vast photometric data has not been accurately measured. To address this, we propose a novel deep learning-based algorithm, YL8C4Net, for the automatic detection and classification of target sources in photometric images. The algorithm combines the YOLOv8 detection network with the Conv4Net classification network. Additionally, we propose a novel magnitude-based labeling method for target-source annotation. In the performance evaluation, YOLOv8 achieves impressive performance with average precision scores of 0.824 for AP@0.5 and 0.795 for AP@0.5:0.95. Meanwhile, the constructed Conv4Net attains an accuracy of 0.8895. Overall, YL8C4Net offers the advantages of fewer parameters, faster processing speed, and higher classification accuracy, making it particularly suitable for large-scale data processing tasks. Furthermore, we employed the YL8C4Net model to conduct target-source detection and classification on photometric images from 20 sky regions in SDSS-DR17. As a result, a catalog containing about 9.39 million target-source classification results has been preliminarily constructed, providing valuable reference data for astronomical research.
The analysis of Android malware shows that this threat is constantly increasing and poses a real danger to mobile devices, since traditional approaches, such as signature-based detection, are no longer effective against continuously advancing levels of sophistication. To resolve this problem, efficient and flexible malware detection tools are needed. This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. These network traffic features are converted to image formats for deep learning, which is applied in a CNN framework including the pre-trained VGG16 model. Our approach yielded high performance, with an accuracy of 99.1%, a precision of 98.2%, a recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model through changes within the VGG19 framework raised the classification rate to 99.25%. These results make clear that CNNs are a very effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also shows the applicability of deep learning to mobile security, and points towards future work on real-time detection systems and deeper learning techniques to counter the increasing number of emerging threats.
The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way, with a spacecraft traveling in an orbit with a large solar inclination angle and a small ellipticity. However, one of the most significant challenges lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data on SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to decrease the number of pixels in an image under compression. Multiple predictive coding methods are combined to eliminate redundancy by exploiting the spatial, spectral, and polarization correlations in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67. The compression time is also less than the general observation period. The method is highly feasible and can be easily adapted to MHI.
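The core idea of lossless predictive coding can be sketched in a few lines: predict each pixel from its neighbor, store only the small residuals, and recover the original exactly by accumulation (the pixel values here are illustrative; the paper combines several such predictors across space, spectrum, and polarization):

```python
import numpy as np

def delta_encode(row):
    """Spatial predictive coding: keep the first pixel, then store
    neighbor differences, which are small for smooth solar-disk data
    and therefore cheap to entropy-code."""
    return np.concatenate((row[:1], np.diff(row)))

def delta_decode(residuals):
    # Undo the prediction by accumulating the differences.
    return np.cumsum(residuals)

row = np.array([100, 101, 103, 103, 104, 106], dtype=np.int64)
res = delta_encode(row)
restored = delta_decode(res)
```

The residuals after the first entry are at most 2 here while the raw values are around 100; an entropy coder then turns that narrowed distribution into the compression ratio, with no information loss.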
A nonlinear data analysis algorithm, namely empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data and can implement multi-resolution analysis, as the wavelet transform does. The algorithm is suitable for analyzing non-stationary data and can effectively decorrelate the observed data. The paper then discusses the application of EDD to image compression, presents a 2-dimensional data decomposition framework, and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is more suitable for non-stationary image data compression.
Funding (BSE-SEM rock microstructure study): National Natural Science Foundation (No. 42261134535); National Key Research and Development Program (No. 2023YFE0125000); Frontiers Science Center for Deep-time Digital Earth (No. 2652023001); 111 Project of the Ministry of Science and Technology (No. BP0719021); Department of Geology, University of Vienna (No. FA536901).
Funding (diabetic retinopathy detection study): Scientific Research Deanship, University of Ha'il, Saudi Arabia (project number RG-21104).
Funding (chaos-based fractal image coding study): Research Grants Council of the Hong Kong Special Administrative Region, China (Grant No. CityU123009).
文摘A chaos-based cryptosystem for fractal image coding is proposed. The Renyi chaotic map is employed to determine the order of processing the range blocks and to generate the keystream for masking the encoded sequence. Compared with the standard approach of fraetal image coding followed by the Advanced Encryption Standard, our scheme offers a higher sensitivity to both plaintext and ciphertext at a comparable operating efficiency. The keystream generated by the Renyi chaotic map passes the randomness tests set by the United States National Institute of Standards and Technology, and so the proposed scheme is sensitive to the key.
文摘The traditional printing checking method always uses printing control strips,but the results are not very well in repeatability and stability. In this paper,the checking methods for printing quality basing on image are taken as research objects. On the base of the traditional checking methods of printing quality,combining the method and theory of digital image processing with printing theory in the new domain of image quality checking,it constitute the checking system of printing quality by image processing,and expound the theory design and the model of this system. This is an application of machine vision. It uses the high resolution industrial CCD(Charge Coupled Device) colorful camera. It can display the real-time photographs on the monitor,and input the video signal to the image gathering card,and then the image data transmits through the computer PCI bus to the memory. At the same time,the system carries on processing and data analysis. This method is proved by experiments. The experiments are mainly about the data conversion of image and ink limit show of printing.
文摘Based on Jacquin's work. this paper presents an adaptive block-based fractal image coding scheme. Firstly. masking functions are used to classify range blocks and weight the mean Square error (MSE) of images. Secondly, an adaptive block partition scheme is introduced by developing the quadtree partition method. Thirdly. a piecewise uniform quantization strategy is appled to quantize the luminance shifting. Finally. experiment results are shown and compared with what reported by Jacquin and Lu to verify the validity of the methods addressed by the authors.
文摘This paper introduces MapReduce as a distributed data processing model using open source Hadoop framework for manipulating large volume of data. The huge volume of data in the modern world, particularly multimedia data, creates new requirements for processing and storage. As an open source distributed computational framework, Hadoop allows for processing large amounts of images on an infinite set of computing nodes by providing necessary infrastructures. This paper introduces this framework, current works and its advantages and disadvantages.
文摘Precision Livestock Farming studies are based on data that was measured from animals via technical devices. In the means of automation, it is usually not accounted for the animals’ reaction towards the devices or individual animal behaviour during the gathering of sensor data. In this study, 14 Holstein-Friesian cows were recorded with a 2D video camera while walking through a scanning passage comprising six Microsoft Kinect 3D cameras. Elementary behavioural traits like how long the cows avoided the passage, the time they needed to walk through or the number of times they stopped walking were assessed from the video footage and analysed with respect to the target variable “udder depth” that was calculated from the recorded 3D data using an automated procedure. Ten repeated passages were recorded of each cow. During the repetitions, the cows adjusted individually (p < 0.001) to the recording situations. The averaged total time to complete a passage (p = 0.05) and the averaged number of stops (p = 0.07) depended on the lactation numbers of the cows. The measurement precision of target variable “udder depth” was affected by the time the cows avoided the recording (p = 0.06) and by the time it took them to walk through the scanning passage (p = 0.03). Effects of animal behaviour during the collection of sensor data can alter the results and should, thus, be considered in the development of sensor based devices.
Abstract: This paper presents a new method for image coding and compression, ADCTVQ (Adaptive Discrete Cosine Transform Vector Quantization). In this method, the DCT conforms to visual properties and has an encoding ability inferior only to the optimal transform, the KLT. Its vector quantization keeps quantization distortion to a minimum and greatly increases the compression ratio. To improve compression efficiency further, an adaptive strategy for selecting reserved region patterns is applied to preserve the high-energy coefficients at the same compression ratio. Experimental results show that reconstruction quality remains satisfactory at compression ratios greater than 20.
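The energy-compaction property of the DCT that ADCTVQ exploits can be demonstrated with a short sketch; the 8x8 block size and the smooth ramp test block are illustrative assumptions:

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

C = dct_matrix(8)
block = np.outer(np.ones(8), np.arange(8, dtype=float))  # smooth horizontal ramp
coeffs = C @ block @ C.T          # 2D DCT: C applied to rows and columns

# For a smooth block almost all energy compacts into a few coefficients,
# so the remaining coefficients can be quantized coarsely or discarded.
energy = coeffs**2
print(energy[0].sum() / energy.sum())  # essentially 1.0 for this block
```

This concentration of energy is what lets the reserved-region strategy keep only a small high-energy pattern of coefficients at a fixed compression ratio.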
Funding: the High Technology Research and Development Programme of China.
Abstract: The sonar image processing system is an important intelligent system of an Autonomous Underwater Vehicle. Based on the TMS320C30 high-speed DSP, it realizes sonar image compression and underwater object detection, including obstacle recognition, in real time. In this paper, the software and hardware designs of this system are introduced and experimental results are given.
Funding: funded by the National Natural Science Foundation of China (NSFC, Nos. 12373086 and 12303082), the CAS "Light of West China" Program, the Yunnan Revitalization Talent Support Program in Yunnan Province, and the National Key R&D Program of China, Gravitational Wave Detection Project No. 2022YFC2203800.
Abstract: Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves for various reasons, so preprocessing is required to remove them and obtain high-quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object significantly increases due to the passage of a nearby star, referred to as "stellar contamination"; and second, the brightness markedly decreases due to cloud cover, referred to as "cloudy contamination". The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive, so we propose machine learning methods as a substitute. Convolutional Neural Networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and the Light Gradient Boosting Machine, and conduct comparative analyses of the results.
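As a hedged illustration of the SVM side of such a pipeline (requires scikit-learn), the sketch below trains a classifier on synthetic two-feature summaries of light curves; the features, class means, and spreads are invented for demonstration and are not the paper's descriptors:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)

# Hypothetical per-curve features: feature 0 ~ peak brightening (stellar
# contamination would push it up), feature 1 ~ depth of dimming (clouds).
clean   = rng.normal([0.0, 0.0], 0.1, (100, 2))
stellar = rng.normal([2.0, 0.0], 0.3, (100, 2))
X = np.vstack([clean, stellar])
y = np.array([0] * 100 + [1] * 100)   # 0 = clean, 1 = stellar contamination

clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # well-separated classes -> near-perfect accuracy
```

In practice the inputs would be image cutouts or engineered light-curve statistics, with a held-out test set used to report the F1 scores.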
Funding: supported by the National Key R&D Program of China (2022YFF0503400), the National Natural Science Foundation of China grant (U1931208), and the China Manned Space Program through its Space Application System.
Abstract: A detector's nondestructive readout mode allows its pixels to be read multiple times during integration, enabling generation of a series of "up-the-ramp" images that continuously accumulate photons between successive frames. Because noise is correlated across these images, optimal stacking generally requires the images to be weighted unequally to achieve the best possible target signal-to-noise ratio (SNR). Objects in the sky present wildly varied brightness characteristics, and the counts in individual pixels of the same object can also span wide ranges. Therefore, a single set of weights cannot be optimal in all cases. To ensure that the stacked image is easily calibratable, we apply the same weight to all pixels within the same frame. In practice, results for high-SNR cases degraded only slightly when we used weights derived for low-SNR cases, whereas the low-SNR cases remained more sensitive to the weights. Therefore, we propose a quasi-optimal stacking method that maximizes the stacked SNR for the case where the SNR is 1 per pixel in the last frame, and use simulated data to demonstrate that this approach enhances the SNR more strongly than the equal-weight stacking and ramp-fitting methods. Furthermore, we estimate the improvements in the limiting magnitudes for the China Space Station Telescope using the proposed method. Compared with the conventional readout mode, which is equivalent to selecting the last frame from the nondestructive readout, stacking 30 up-the-ramp images can improve the limiting magnitude by approximately 0.5 mag for the telescope's near-infrared observations, effectively reducing readout noise by approximately 62%.
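The optimal-weight idea can be sketched numerically. Assuming a simple up-the-ramp noise model in which frames share the accumulated shot noise and each nondestructive read adds independent read noise, the SNR-maximizing weights are proportional to C⁻¹s for signal vector s and frame covariance C; the flux and read-noise values below are illustrative, not the CSST's:

```python
import numpy as np

n_frames, flux, read_noise = 30, 0.05, 10.0     # illustrative low-SNR numbers
t = np.arange(1, n_frames + 1)
s = flux * t                                    # accumulated signal per frame
# Covariance: shot noise is shared by all later frames (min of frame indices),
# while each read contributes independent read noise on the diagonal.
C = flux * np.minimum.outer(t, t) + read_noise**2 * np.eye(n_frames)

def snr(w):
    """SNR of the weighted stack with per-frame weights w."""
    return (w @ s) / np.sqrt(w @ C @ w)

w_opt = np.linalg.solve(C, s)        # weights maximizing the stacked SNR
w_eq = np.ones(n_frames)             # equal-weight stacking
w_last = np.eye(n_frames)[-1]        # conventional readout: last frame only
print(snr(w_opt), snr(w_eq), snr(w_last))
```

In this read-noise-dominated regime the weighted stack beats both the equal-weight stack and the last frame alone, mirroring the paper's comparison.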
Funding: supported by the Shanghai Technical Service Computing Center of Science and Engineering, Shanghai University.
Abstract: Carotid artery plaques represent a major contributor to the morbidity and mortality associated with cerebrovascular disease, and their clinical significance is largely determined by the risk linked to plaque vulnerability. Therefore, classifying plaque risk constitutes one of the most critical tasks in the clinical management of this condition. While classification models derived from individual medical centers have been extensively investigated, these single-center models often fail to generalize well to multi-center data due to variations in ultrasound images caused by differences in physician expertise and equipment. To address this limitation, a Dual-Classifier Label Correction Network model (DCLCN) is proposed for the classification of carotid plaque ultrasound images across multiple medical centers. The DCLCN designs a multi-center domain adaptation module that leverages a dual-classifier strategy to extract knowledge from both source and target centers, thereby reducing feature discrepancies through a domain adaptation layer. Additionally, to mitigate the impact of image noise, a label modeling and correction module is introduced to generate pseudo-labels for the target centers and iteratively refine them using an end-to-end correction mechanism. Experiments on the carotid plaque dataset collected from three medical centers demonstrate that the DCLCN achieves commendable performance and robustness.
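One ingredient of the dual-classifier idea, keeping target-center pseudo-labels only where two differently biased classifiers agree, can be sketched as follows (requires scikit-learn; the synthetic data, the two classifier choices, and the agreement rule are illustrative simplifications, not the DCLCN architecture):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Source-center data (labeled) and target-center data (distribution-shifted).
Xs = np.vstack([rng.normal(-1.0, 0.5, (80, 2)), rng.normal(1.0, 0.5, (80, 2))])
ys = np.array([0] * 80 + [1] * 80)
Xt = np.vstack([rng.normal(-0.7, 0.5, (40, 2)), rng.normal(1.3, 0.5, (40, 2))])

# Two classifiers with different inductive biases, both trained on the source.
clf_a = LogisticRegression().fit(Xs, ys)
clf_b = SVC().fit(Xs, ys)

pa, pb = clf_a.predict(Xt), clf_b.predict(Xt)
agree = pa == pb                        # keep pseudo-labels only on agreement
pseudo_X, pseudo_y = Xt[agree], pa[agree]
print(agree.mean())                     # fraction of target samples pseudo-labeled
```

The retained (pseudo_X, pseudo_y) pairs would then feed a retraining or label-correction loop; DCLCN additionally refines the pseudo-labels end-to-end.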
Abstract: The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and Magnetic Resonance Imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images burden the limited bandwidth of the communication channel, leading to data transmission delays. To address these security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
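A minimal sketch of chaos-based image encryption in the spirit described, using a logistic-map keystream XORed with the image; XORing the full byte is equivalent to XORing every bit plane, and the key parameters below are placeholders rather than the paper's scheme:

```python
import numpy as np

def logistic_keystream(n, x0=0.3456, r=3.99):
    """Chaotic logistic-map keystream; (x0, r) play the role of the secret key."""
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)        # chaotic iteration, sensitive to x0
        xs[i] = x
    return (xs * 256).astype(np.uint8)

def encrypt(img, x0=0.3456):
    """XOR the image bytes with the keystream; XOR is its own inverse."""
    key = logistic_keystream(img.size, x0).reshape(img.shape)
    return img ^ key

img = np.arange(64, dtype=np.uint8).reshape(8, 8)   # toy stand-in for a medical image
cipher = encrypt(img)
print(np.array_equal(encrypt(cipher), img))         # True: decryption recovers the image
```

Sensitivity to the key comes from the chaotic map: a tiny change in x0 produces a completely different keystream, which is the basis of the large key space the abstract cites.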
Abstract: A recent trend in computer graphics and image processing is to use Iterated Function Systems (IFS) to generate and describe both man-made graphics and natural images. Jacquin was the first to propose a fully automatic gray-scale image compression algorithm, referred to in this paper as a typical static fractal transform-based algorithm. Using this algorithm, an image can be condensely described as a fractal transform operator, a combination of a set of fractal mappings. When the fractal transform operator is iteratively applied to any initial image, a unique attractor (the reconstructed image) is obtained. In this paper, a dynamic fractal transform is presented as a modification of the static transform. Instead of being fixed, the dynamic transform operator varies in each decoder iteration, which distinguishes it from static transform operators. The new transform improves coding efficiency and shows better convergence in the decoder.
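The attractor property, that iterating a set of contractive maps from any starting point converges to the same image, can be demonstrated with the classic chaos-game IFS for the Sierpinski triangle (an illustrative hand-built IFS, not Jacquin's image-derived transform):

```python
import random

# Three contractive maps, each halving the distance to one triangle vertex.
VERTICES = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]

def chaos_game(n=10000, seed=1):
    random.seed(seed)
    x, y = 0.1, 0.1                              # arbitrary starting point
    points = []
    for _ in range(n):
        vx, vy = random.choice(VERTICES)
        x, y = (x + vx) / 2.0, (y + vy) / 2.0    # apply one contractive map
        points.append((x, y))
    return points

pts = chaos_game()
# Iterates settle onto the attractor (the Sierpinski triangle) regardless of
# the start; the central inverted triangle of the attractor stays empty.
center_hits = sum(1 for x, y in pts if 0.45 < x < 0.55 and 0.2 < y < 0.4)
print(center_hits / len(pts))  # essentially zero
```

A fractal decoder works the same way: the transform's contractivity guarantees convergence to its unique attractor from any initial image.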
Funding: supported by grants from the North China University of Technology Research Start-Up Fund (11005136024XN147-14 and 110051360024XN151-97), the Guangzhou Development Zone Science and Technology Project (2023GH02), the National Key R&D Program of China (2021YFE0201100 and 2022YFA1103401 to Juntao Gao), the National Natural Science Foundation of China (981890991 to Juntao Gao), the Beijing Municipal Natural Science Foundation (Z200021 to Juntao Gao), the CAS Interdisciplinary Innovation Team (JCTD-2020-04 to Juntao Gao), and grant 0032/2022/A by Macao FDCT and MYRG2022-00271-FST.
Abstract: Hematoxylin and Eosin (H&E) images, popularly used in the field of digital pathology, often pose challenges due to their limited color richness, hindering the differentiation of subtle cell features crucial for accurate classification. Enhancing the visibility of these elusive cell features helps train robust deep-learning models. However, the selection and application of image processing techniques for such enhancement have not been systematically explored in the research community. To address this challenge, we introduce Salient Features Guided Augmentation (SFGA), an approach that strategically integrates machine learning and image processing. SFGA utilizes machine learning algorithms to identify crucial features within cell images and then maps these features to appropriate image processing techniques to enhance the training images. By emphasizing salient features and aligning them with corresponding image processing methods, SFGA is designed to enhance the discriminating power of deep-learning models in cell classification tasks. Our research undertakes a series of experiments, each exploring the performance of different datasets and data enhancement techniques in classifying cell types, highlighting the significance of data quality and enhancement in mitigating overfitting and distinguishing cell characteristics. Specifically, SFGA focuses on identifying tumor cells in tissue for extranodal extension detection, with the SFGA-enhanced dataset showing notable advantages in accuracy. We conducted a preliminary study of five experiments, in which the accuracy of the pleomorphism experiment improved significantly from 50.81% to 95.15%; the accuracy of the other four experiments also increased, with improvements ranging from 3 to 43 percentage points. Our preliminary study shows the potential to enhance the diagnostic accuracy of deep-learning models and proposes a systematic approach that could improve cancer diagnosis, contributing a first step toward using SFGA in medical image enhancement.
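The paper maps salient features to enhancement operators; as one illustrative operator of the kind such a mapping might select, the sketch below applies global histogram equalization to stretch a low-contrast image (a stand-in for subtle H&E cell features), using only NumPy:

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize CDF to [0, 1]
    lut = (cdf * 255).astype(np.uint8)                  # lookup table per gray level
    return lut[img]

# A low-contrast image squeezed into [100, 140] gets stretched toward [0, 255].
rng = np.random.default_rng(0)
img = rng.integers(100, 141, (64, 64), dtype=np.uint8)
out = hist_equalize(img)
print(int(img.max()) - int(img.min()), int(out.max()) - int(out.min()))
```

In an SFGA-style pipeline, whether this operator (versus, say, color normalization or sharpening) is applied would be decided by the salient features detected in each training image.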
Funding: supported by the National Natural Science Foundation of China (NSFC, Grant No. U1731128).
Abstract: In the task of classifying massive celestial data, the accurate classification of galaxies, stars, and quasars usually relies on spectral labels. However, spectral data account for only a small fraction of all astronomical observation data, and the target-source classification information in vast photometric data has not been accurately measured. To address this, we propose a novel deep learning-based algorithm, YL8C4Net, for the automatic detection and classification of target sources in photometric images. The algorithm combines the YOLOv8 detection network with the Conv4Net classification network. Additionally, we propose a novel magnitude-based labeling method for target-source annotation. In the performance evaluation, YOLOv8 achieves impressive results, with average precision scores of 0.824 for AP@0.5 and 0.795 for AP@0.5:0.95, while the constructed Conv4Net attains an accuracy of 0.8895. Overall, YL8C4Net offers fewer parameters, faster processing, and higher classification accuracy, making it particularly suitable for large-scale data processing tasks. We further employed the YL8C4Net model to detect and classify target sources in photometric images from 20 sky regions in SDSS-DR17. As a result, a catalog containing about 9.39 million target-source classification results has been preliminarily constructed, providing valuable reference data for astronomical research.
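The magnitude-based labeling idea can be sketched as a simple binning rule; the thresholds and label names below are hypothetical, since the paper's actual bins are not given here:

```python
import numpy as np

# Hypothetical magnitude bins for annotation labels (illustrative only).
BINS = [(-np.inf, 16.0, "bright"), (16.0, 20.0, "medium"), (20.0, np.inf, "faint")]

def magnitude_label(mag):
    """Assign an annotation label from a source's photometric magnitude."""
    for lo, hi, name in BINS:
        if lo <= mag < hi:
            return name

mags = [14.2, 18.7, 22.5]
print([magnitude_label(m) for m in mags])  # ['bright', 'medium', 'faint']
```

Labels derived this way can annotate detections in photometric images without requiring spectral data for every source.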
Funding: funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Research Funding Program, Grant No. (FRP-1443-15).
Abstract: The analysis of Android malware shows that this threat is constantly increasing and poses a real danger to mobile devices, since traditional approaches such as signature-based detection are no longer effective against the continuously advancing level of sophistication. To resolve this problem, efficient and flexible malware detection tools are needed. This work examines the possibility of employing deep CNNs to detect Android malware by transforming network traffic into image data representations. The dataset used in this study is CIC-AndMal2017, which contains 20,000 instances of network traffic across five distinct malware categories: Trojan, Adware, Ransomware, Spyware, and Worm. The network traffic features are converted to image formats for deep learning and applied in a CNN framework that includes the pre-trained VGG16 model. Our approach yielded high performance, with reported accuracies of 0.92 and 99.1%, a precision of 98.2%, a recall of 99.5%, and an F1 score of 98.7%. Subsequent improvements to the classification model through changes within the VGG19 framework raised the classification rate to 99.25%. These results make clear that CNNs are a highly effective way to classify Android malware, providing greater accuracy than conventional techniques. The success of this approach also shows the applicability of deep learning to mobile security, and points toward real-time detection systems and deeper learning techniques to counter the growing number of emerging threats.
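Converting per-flow traffic features into image inputs for a CNN can be sketched as follows; the feature length, image size, and min-max normalization are illustrative assumptions rather than the paper's exact preprocessing:

```python
import numpy as np

def features_to_image(features, size=(8, 8)):
    """Min-max normalize a flow's feature vector and reshape it into an 8-bit image."""
    v = np.asarray(features, dtype=float)
    v = np.resize(v, size[0] * size[1])       # tile/truncate to fill the image
    lo, hi = v.min(), v.max()
    v = (v - lo) / (hi - lo) if hi > lo else np.zeros_like(v)
    return (v * 255).astype(np.uint8).reshape(size)

flow = np.random.default_rng(0).random(50)    # 50 per-flow traffic features
img = features_to_image(flow)
print(img.shape, img.dtype)                   # (8, 8) uint8
```

Each flow thus becomes a small grayscale image that a pre-trained CNN such as VGG16 (after resizing and channel replication) can classify into the malware categories.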
Funding: supported by the National Key R&D Program of China (grant No. 2022YFF0503800), the National Natural Science Foundation of China (NSFC) (grant No. 11427901), the Strategic Priority Research Program of the Chinese Academy of Sciences (CAS-SPP) (grant No. XDA15320102), and the Youth Innovation Promotion Association (CAS No. 2022057).
Abstract: The Solar Polar-orbit Observatory (SPO), proposed by Chinese scientists, is designed to observe the solar polar regions in an unprecedented way with a spacecraft traveling in an orbit with a large solar inclination angle and a small ellipticity. However, one of the most significant challenges lies in ultra-long-distance data transmission, particularly for the Magnetic and Helioseismic Imager (MHI), which is the most important payload and generates the largest volume of data on SPO. In this paper, we propose a tailored lossless data compression method based on the measurement mode and characteristics of MHI data. The background outside the solar disk is removed to decrease the number of pixels in each image under compression. Multiple predictive coding methods are combined to eliminate redundancy by exploiting the correlations (spatial, spectral, and polarization) in the data set, improving the compression ratio. Experimental results demonstrate that our method achieves an average compression ratio of 3.67, and the compression time is less than the general observation period. The method exhibits strong feasibility and can be easily adapted to MHI.
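The predictive-coding ingredient can be illustrated with the simplest case, a previous-pixel predictor: smooth data yields small residuals whose entropy, a proxy for compressed size, is far lower than that of the raw pixels. The synthetic ramp image below stands in for MHI data; the actual method combines several predictors across space, spectrum, and polarization:

```python
import numpy as np

def predict_residuals(img):
    """Previous-pixel (horizontal) predictor: store each pixel as the
    difference from its left neighbor; the first column is kept verbatim."""
    res = img.astype(np.int16).copy()
    res[:, 1:] -= img[:, :-1].astype(np.int16)
    return res

def reconstruct(res):
    """Invert the predictor exactly (lossless) by cumulative summation."""
    return np.cumsum(res, axis=1).astype(np.uint8)

def entropy(a):
    """Shannon entropy in bits per symbol."""
    _, counts = np.unique(a, return_counts=True)
    p = counts / a.size
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Smooth synthetic "solar disk" data: a gradual ramp plus small noise.
img = (np.tile(np.arange(64), (64, 1)) + rng.integers(0, 3, (64, 64))).astype(np.uint8)
res = predict_residuals(img)
print(entropy(img), entropy(res))   # residual entropy is much lower
```

An entropy coder applied to the residuals therefore needs far fewer bits per pixel, which is where the compression ratio comes from.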
Funding: This project was supported by the National Natural Science Foundation of China (60532060), the Hainan Education Bureau Research Project (Hjkj200602), and the Hainan Natural Science Foundation (80551).
Abstract: A nonlinear data-analysis algorithm, empirical data decomposition (EDD), is proposed, which can perform adaptive analysis of observed data. The analysis filter, which is not a linear constant-coefficient filter, is determined automatically by the observed data and can implement multi-resolution analysis in the manner of the wavelet transform. The algorithm is suitable for analysing non-stationary data and can effectively decorrelate the observed data. The paper then discusses applications of EDD in image compression, presents a two-dimensional data-decomposition framework, and makes some modifications to the contexts used by Embedded Block Coding with Optimized Truncation (EBCOT). Simulation results show that EDD is better suited to compressing non-stationary image data.