The determination of stream velocity has become increasingly important in many sectors of industry because of the need for measurement precision: to establish the right production rates, determine the volumetric production of undesired fluids, build automated controls on these measurements that avoid over-flooding or over-production, guarantee accurate predictive maintenance, and so on. A recurring difficulty is determining the velocity of a specific fluid embedded in another, for example the stream velocity of gas bubbles flowing through a liquid phase. Although different methods have been researched and are already implemented in industry, a non-intrusive, automated way of providing these stream velocities has its own value and may have a large impact on project budgets. With this in mind, the script developed here breaks real-time video into frame images and analyzes pixel correlations to find superposition matches between frames, from which the gas-bubble stream velocity is estimated. In essence, the script builds on functions and procedures already available in MATLAB for image processing and treatment, which allowed the methodology to be implemented. Its accuracy in the running test was around 97% (ninety-seven percent); the raw source code with comments had almost 3000 (three thousand) characters; and the hardware used to run the code was a workstation with an Intel Core Duo at 2.13 GHz and 2 GB of RAM. Even with these good results, only the end-point correlations actually contributed to the final solution, so the use of self-learning functions or a neural network could enhance the application to run in real time without being exhausted by iterative loops.
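To make the frame-correlation idea above concrete, the following is a minimal sketch (in Python rather than the paper's MATLAB, and not the authors' script): the peak of the FFT cross-correlation between two consecutive grayscale frames gives the pixel displacement, and an assumed pixel scale and frame rate convert it into a stream velocity. The calibration numbers and the synthetic frames are illustrative assumptions.

```python
import numpy as np

def displacement_by_correlation(frame_a, frame_b):
    """(dy, dx) shift, in pixels, that best superimposes frame_b on frame_a."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    shift = np.array(np.unravel_index(np.argmax(corr), corr.shape), dtype=float)
    for axis, size in enumerate(corr.shape):   # map wrap-around peaks to negative shifts
        if shift[axis] > size / 2:
            shift[axis] -= size
    return shift

# Synthetic demonstration: the "next frame" is the previous one rolled by a
# known offset, standing in for bubbles that moved between two video frames.
rng = np.random.default_rng(0)
frame_a = rng.random((128, 128))
frame_b = np.roll(frame_a, shift=(5, 2), axis=(0, 1))

metres_per_pixel, frames_per_second = 2.0e-4, 30.0   # assumed calibration values
dy, dx = displacement_by_correlation(frame_a, frame_b)
speed = np.hypot(dy, dx) * metres_per_pixel * frames_per_second
print(f"displacement ({dy:.0f}, {dx:.0f}) px, velocity {speed:.4f} m/s")
```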
Using the Radon transform and morphological image processing, an algorithm for ship wake detection in SAR (synthetic aperture radar) images is developed. By working in Radon space on the inverted gray-level and binary images, the linear texture of a ship wake in oceanic clutter can be detected reliably. The algorithm has been applied to the automatic detection of a moving ship from a SEASAT SAR image. The results show that it is robust against a strongly noisy background and is not very sensitive to the threshold parameter or the working window size.
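As a rough illustration of the underlying idea, and not the paper's algorithm (which also relies on image inversion and morphological processing), the sketch below shows how a bright linear wake in a noisy image maps to a localized peak in Radon space, from which the wake orientation can be read off. The synthetic image and line parameters are invented for the demonstration.

```python
import numpy as np
from skimage.transform import radon

# Synthetic SAR-like scene: speckle noise plus one bright, oblique wake line.
rng = np.random.default_rng(0)
image = rng.normal(0.0, 1.0, (256, 256))
rows = np.arange(256)
cols = np.clip((0.7 * rows + 30).astype(int), 0, 255)
image[rows, cols] += 6.0

# In Radon space the line collapses to a strong localized peak.
theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=theta, circle=False)
offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print(f"detected wake orientation: {theta[angle_idx]:.1f} degrees (projection bin {offset_idx})")
```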
Attitude is one of the crucial parameters for space objects and plays a vital role in collision prediction and debris removal. Analyzing light curves to determine attitude is the most commonly used method. In photometric observations, outliers may exist in the obtained light curves for various reasons, so preprocessing is required to remove these outliers and obtain high-quality light curves. Through statistical analysis, the causes of outliers can be categorized into two main types: first, the brightness of the object increases significantly because a star passes nearby, referred to as “stellar contamination,” and second, the brightness decreases markedly because of cloud cover, referred to as “cloudy contamination.” The traditional approach of manually inspecting images for contamination is time-consuming and labor-intensive; we therefore propose machine learning methods as a substitute. Convolutional neural networks and SVMs are employed to identify cases of stellar contamination and cloudy contamination, achieving F1 scores of 1.00 and 0.98 on a test set, respectively. We also explore other machine learning methods, such as ResNet-18 and the Light Gradient Boosting Machine, and conduct comparative analyses of the results.
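A hedged sketch of the SVM branch of such a classifier is given below; it is not the authors' code. It trains scikit-learn's SVC on simple brightness statistics of synthetic cut-outs, where a bright central spike stands in for stellar contamination and an overall dimming stands in for cloudy contamination. The feature set and synthetic data are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def brightness_features(cutout):
    # Mean, spread and peak brightness crudely separate "a star entered the
    # aperture" (bright peak) from "clouds passed over" (overall dimming).
    return [cutout.mean(), cutout.std(), cutout.max(), np.median(cutout)]

rng = np.random.default_rng(1)
X, y = [], []
for label, (level, peak) in enumerate([(100, 0), (100, 400), (40, 0)]):
    for _ in range(200):
        img = rng.normal(level, 10, (16, 16))
        if peak:
            img[8, 8] += peak                      # contaminating star
        X.append(brightness_features(img))
        y.append(label)                            # 0 clean, 1 stellar, 2 cloudy

X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```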
A topic studied in cartography is how to make the extraction of cartographic features, which supports the updating of cartographic maps, easier. For this reason, many automatic routines have been created with the intent of performing feature extraction. Despite all these studies, some features cannot be found by the algorithm, or it may extract some pixels unduly. The current article therefore presents the results of a software development that uses the original and reference images to calculate statistics about the extraction process. These calculated statistics can then be used to evaluate the extraction process.
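The kind of per-pixel statistics such an evaluation tool can report might look like the sketch below (an illustrative assumption, not the article's software): completeness, the fraction of reference feature pixels that were extracted, and correctness, the fraction of extracted pixels that belong to real features.

```python
import numpy as np

def extraction_statistics(extracted, reference):
    extracted = extracted.astype(bool)
    reference = reference.astype(bool)
    tp = np.logical_and(extracted, reference).sum()     # correctly extracted pixels
    fp = np.logical_and(extracted, ~reference).sum()    # unduly extracted pixels
    fn = np.logical_and(~extracted, reference).sum()    # missed feature pixels
    completeness = tp / (tp + fn) if tp + fn else 0.0
    correctness = tp / (tp + fp) if tp + fp else 0.0
    return completeness, correctness

reference = np.zeros((100, 100), dtype=bool)
reference[40:60, :] = True            # reference feature strip
extracted = np.zeros_like(reference)
extracted[42:65, :] = True            # automatic extraction, slightly off
print(extraction_statistics(extracted, reference))
```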
This paper presents a method for determining the percentage of coarse aggregate in concrete specimens by image processing. The test pieces were produced with the aim of obtaining images of their cross sections through a flatbed scanner. In order to increase the contrast between mortar and coarse aggregate, the sliced surfaces were treated with a phenolphthalein solution. The images obtained in the scanner were processed in a program developed with MATLAB (matrix laboratory). The average coarse aggregate in each section and the mean coarse aggregate per test body were calculated. The results revealed that the method returned satisfying values when compared to the original trace (mix proportion) of the concrete.
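A minimal sketch of the measurement principle, written in Python rather than the paper's MATLAB program, is shown below: once the phenolphthalein treatment separates mortar and aggregate in intensity, thresholding the scanned cross section yields the aggregate area fraction. The Otsu threshold, the assumption that aggregate is the brighter phase, and the synthetic slice are all illustrative choices.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(5)
slice_img = rng.normal(90, 8, (300, 300))               # darker treated mortar
yy, xx = np.mgrid[0:300, 0:300]
for cy, cx, r in [(80, 90, 35), (200, 150, 50), (120, 230, 28)]:   # aggregate grains
    slice_img[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = rng.normal(190, 8)

aggregate_mask = slice_img > threshold_otsu(slice_img)  # assume aggregate is brighter
print(f"coarse aggregate is about {100 * aggregate_mask.mean():.1f}% of the section area")
```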
In digital signal processing, image enhancement and image denoising are challenging tasks in which pixel quality must be preserved. Several approaches, from conventional methods to deep learning, are used to resolve such issues, but they still face challenges in terms of computational requirements, overfitting, generalization, etc. To resolve these issues, optimization algorithms provide greater control and transparency in designing digital filters for image enhancement and denoising. This paper therefore presents a novel denoising approach for medical applications using an Optimized Learning-based Multi-level discrete Wavelet Cascaded Convolutional Neural Network (OLMWCNN). In this approach, the optimal filter parameters are identified to preserve image quality after denoising. The performance and efficiency of the OLMWCNN filter are evaluated, demonstrating significant progress in denoising medical images while overcoming the limitations of conventional methods.
Osteosarcomas are malignant neoplasms derived from undifferentiated osteogenic mesenchymal cells. They cause severe and permanent damage to human tissue and have a high mortality rate. The condition can occur in any bone; however, it often affects long bones such as those of the arms and legs. Prompt identification and intervention are essential for extending patient longevity. However, the intricate composition and erratic placement of osteosarcoma make it difficult for clinicians to determine the extent of the afflicted area accurately. There is a pressing need for an algorithm that can automatically detect bone tumors with high accuracy. Therefore, in this study, we propose a novel feature extractor framework associated with a supervised three-class XGBoost algorithm for the detection of osteosarcoma in whole-slide histopathology images. This method allows for quicker and more effective data analysis. The first step involves preprocessing the imbalanced histopathology dataset, followed by augmentation and balancing utilizing two techniques: SMOTE and ADASYN. Next, a unique feature extraction framework is used to extract features, which are then input into the supervised three-class XGBoost algorithm for classification into three categories: non-tumor, viable tumor, and non-viable tumor. The experimental findings indicate that the proposed model exhibits superior efficiency, accuracy, and a more lightweight design in comparison to other current models for osteosarcoma detection.
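The balancing-plus-classification stage described above can be sketched with the public imbalanced-learn and XGBoost packages, as below; the feature-extraction framework itself is not reproduced, and the synthetic feature matrix merely stands in for features extracted from whole-slide tiles.

```python
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

# Stand-in for extracted tile features with three imbalanced classes.
X, y = make_classification(n_samples=3000, n_features=64, n_informative=20,
                           n_classes=3, weights=[0.7, 0.2, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Balance only the training split, then fit the three-class XGBoost model.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_train, y_train)
clf = XGBClassifier(n_estimators=200, max_depth=4)
clf.fit(X_bal, y_bal)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["non-tumor", "viable tumor", "non-viable tumor"]))
```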
Hematoxylin and Eosin (H&E) images, popularly used in the field of digital pathology, often pose challenges due to their limited color richness, hindering the differentiation of subtle cell features crucial for accurate classification. Enhancing the visibility of these elusive cell features helps train robust deep-learning models. However, the selection and application of image processing techniques for such enhancement have not been systematically explored in the research community. To address this challenge, we introduce Salient Features Guided Augmentation (SFGA), an approach that strategically integrates machine learning and image processing. SFGA utilizes machine learning algorithms to identify crucial features within cell images, subsequently mapping these features to appropriate image processing techniques to enhance training images. By emphasizing salient features and aligning them with corresponding image processing methods, SFGA is designed to enhance the discriminating power of deep learning models in cell classification tasks. Our research undertakes a series of experiments, each exploring the performance of different datasets and data enhancement techniques in classifying cell types, highlighting the significance of data quality and enhancement in mitigating overfitting and distinguishing cell characteristics. Specifically, SFGA focuses on identifying tumor cells from tissue for extranodal extension detection, with the SFGA-enhanced dataset showing notable advantages in accuracy. We conducted a preliminary study of five experiments, in which the accuracy of the pleomorphism experiment improved significantly from 50.81% to 95.15%. The accuracy of the other four experiments also increased, with improvements ranging from 3 to 43 percentage points. Our preliminary study shows the potential to enhance the diagnostic accuracy of deep learning models and proposes a systematic approach that could improve cancer diagnosis, contributing a first step toward using SFGA in medical image enhancement.
Breast cancer remains one of the most pressing global health concerns, and early detection plays a crucial role in improving survival rates. Integrating digital mammography with computational techniques and advanced image processing has significantly enhanced the ability to identify abnormalities. However, existing methodologies face persistent challenges, including low image contrast, noise interference, and inaccuracies in segmenting regions of interest. To address these limitations, this study introduces a novel computational framework for analyzing mammographic images, evaluated using the Mammographic Image Analysis Society (MIAS) dataset comprising 322 samples. The proposed methodology follows a structured three-stage approach. Initially, mammographic scans are classified using the Breast Imaging Reporting and Data System (BI-RADS), ensuring systematic and standardized image analysis. Next, the pectoral muscle, which can interfere with accurate segmentation, is removed to refine the region of interest (ROI). The final stage involves an advanced image pre-processing module utilizing Independent Component Analysis (ICA) to enhance contrast, suppress noise, and improve image clarity. Following these enhancements, a robust segmentation technique is employed to delineate abnormal regions. Experimental results validate the efficiency of the proposed framework, demonstrating a significant improvement in the Effective Measure of Enhancement (EME) and a 3 dB increase in Peak Signal-to-Noise Ratio (PSNR), indicating superior image quality. The model also achieves an accuracy of approximately 97%, surpassing contemporary techniques evaluated on the MIAS dataset. Furthermore, its ability to process mammograms across all BI-RADS categories highlights its adaptability and reliability for clinical applications. This study presents an advanced and dependable computational framework for mammographic image analysis, effectively addressing critical challenges in noise reduction, contrast enhancement, and segmentation precision. The proposed approach lays the groundwork for seamless integration into computer-aided diagnostic (CAD) systems, with the potential to significantly enhance early breast cancer detection and contribute to improved patient outcomes.
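For reference, the PSNR figure quoted above is the standard peak signal-to-noise ratio computed from the mean squared error between a reference image and a processed image; a small sketch with placeholder arrays follows.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of the same shape."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Placeholder arrays standing in for a mammogram and a processed version of it.
rng = np.random.default_rng(4)
clean = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(clean + rng.normal(0, 5, clean.shape), 0, 255)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```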
In micro milling, tool wear directly affects workpiece quality and accuracy, making effective tool wear monitoring a key factor in ensuring product integrity. Machine vision-based methods can provide an intuitive and efficient representation of tool wear conditions. However, micro milling tools have non-flat flanks, their thin coatings can peel off, and the spindle orientation is uncertain during downtime. These factors result in low pixel values, uneven illumination, and arbitrary tool position. To address this, we propose an image-based tool wear monitoring method. It combines multiple algorithms to restore pixels lost to uneven illumination during segmentation and to extract wear areas accurately. Experimental results demonstrate that the proposed algorithm is highly robust to such images, effectively addressing the effects of illumination and spindle orientation. Additionally, the algorithm has low complexity and fast execution time, and it significantly reduces the in-situ detection time.
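One common way to cope with uneven illumination before segmenting a wear region is sketched below; it is an illustrative stand-in, not the paper's multi-algorithm method. The slowly varying background is estimated with a large Gaussian filter and subtracted, and the remaining bright area is thresholded and measured. The filter width, threshold rule, and synthetic tool image are assumptions.

```python
import numpy as np
from scipy import ndimage

def wear_area_pixels(gray, background_sigma=25, min_region=50):
    """Illumination-corrected bright-area measurement, in pixels."""
    gray = gray.astype(float)
    background = ndimage.gaussian_filter(gray, background_sigma)   # slow illumination trend
    flattened = gray - background
    mask = flattened > flattened.mean() + 2.0 * flattened.std()    # simple stand-in threshold
    labels, n = ndimage.label(mask)
    sizes = np.asarray(ndimage.sum(mask, labels, range(1, n + 1)))
    return int(sizes[sizes >= min_region].sum())                   # ignore tiny speckle regions

# Synthetic "tool flank": a lighting gradient plus one bright wear-like blob.
yy, xx = np.mgrid[0:200, 0:200]
illumination = 80.0 + 0.3 * xx
wear = 60.0 * np.exp(-((xx - 120) ** 2 + (yy - 90) ** 2) / (2.0 * 10.0 ** 2))
print("estimated wear area:", wear_area_pixels(illumination + wear), "px")
```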
Carotid artery plaques represent a major contributor to the morbidity and mortality associated with cerebrovascular disease, and their clinical significance is largely determined by the risk linked to plaque vulnerability. Therefore, classifying plaque risk constitutes one of the most critical tasks in the clinical management of this condition. While classification models derived from individual medical centers have been extensively investigated, these single-center models often fail to generalize well to multi-center data due to variations in ultrasound images caused by differences in physician expertise and equipment. To address this limitation, a Dual-Classifier Label Correction Network model (DCLCN) is proposed for the classification of carotid plaque ultrasound images across multiple medical centers. The DCLCN designs a multi-center domain adaptation module that leverages a dual-classifier strategy to extract knowledge from both source and target centers, thereby reducing feature discrepancies through a domain adaptation layer. Additionally, to mitigate the impact of image noise, a label modeling and correction module is introduced to generate pseudo-labels for the target centers and iteratively refine them using an end-to-end correction mechanism. Experiments on the carotid plaque dataset collected from three medical centers demonstrate that the DCLCN achieves commendable performance and robustness.
A detector's nondestructive readout mode allows its pixels to be read multiple times during integration, enabling generation of a series of "up-the-ramp" images that continuously accumulate photons between successive frames. Because noise is correlated across these images, optimal stacking generally requires the images to be weighted unequally to achieve the best possible target signal-to-noise ratio (SNR). Objects in the sky present wildly varied brightness characteristics, and the counts in individual pixels of the same object can also span wide ranges. Therefore, a single set of weights cannot be optimal in all cases. To ensure that the stacked image is easily calibratable, we apply the same weight to all pixels within the same frame. In practice, results for high-SNR cases degraded only slightly when we used weights derived for low-SNR cases, whereas the low-SNR cases remained more sensitive to the weights. Therefore, we propose a quasi-optimal stacking method that maximizes the stacked SNR for the case where the SNR is 1 per pixel in the last frame, and we use simulated data to demonstrate that this approach enhances the SNR more strongly than the equal-weight stacking and ramp-fitting methods. Furthermore, we estimate the improvements in the limiting magnitudes for the China Space Station Telescope using the proposed method. Compared with the conventional readout mode, which is equivalent to selecting the last frame from the nondestructive readout, stacking 30 up-the-ramp images can improve the limiting magnitude by approximately 0.5 mag for the telescope's near-infrared observations, effectively reducing readout noise by approximately 62%.
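The reason unequal weights help can be made explicit with a few lines of linear algebra: for a weighted sum of frames, SNR = (w . s) / sqrt(w^T C w), and the weights that maximize it satisfy w proportional to C^{-1} s. The sketch below demonstrates this on a toy up-the-ramp noise model; the flux and read-noise values are invented and are not CSST parameters.

```python
import numpy as np

n_frames, flux_per_frame, read_noise = 30, 1.0, 8.0
s = flux_per_frame * np.arange(1, n_frames + 1)        # signal accumulated up to each frame

# Covariance of the frames: shared shot noise of the photons accumulated up to
# min(i, j), plus independent read noise on the diagonal.
acc_i, acc_j = np.meshgrid(s, s, indexing="ij")
C = np.minimum(acc_i, acc_j) + read_noise ** 2 * np.eye(n_frames)

def snr(w):
    """SNR of the weighted stack sum_i w_i * frame_i."""
    return (w @ s) / np.sqrt(w @ C @ w)

last_frame_only = np.eye(n_frames)[-1]                 # conventional readout
equal = np.ones(n_frames)                              # equal-weight stacking
optimal = np.linalg.solve(C, s)                        # w proportional to C^{-1} s
for name, w in [("last frame", last_frame_only), ("equal weights", equal), ("optimal weights", optimal)]:
    print(f"{name:15s} SNR = {snr(w):.2f}")
```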
In the task of classifying massive celestial data, the accurate classification of galaxies, stars, and quasars usually relies on spectral labels. However, spectral data account for only a small fraction of all astronomical observation data, and the target source classification information in vast photometric data has not been accurately measured. To address this, we propose a novel deep learning-based algorithm, YL8C4Net, for the automatic detection and classification of target sources in photometric images. This algorithm combines the YOLOv8 detection network with the Conv4Net classification network. Additionally, we propose a novel magnitude-based labeling method for target source annotation. In the performance evaluation, YOLOv8 achieves impressive performance with average precision scores of 0.824 for AP@0.5 and 0.795 for AP@0.5:0.95. Meanwhile, the constructed Conv4Net attains an accuracy of 0.8895. Overall, YL8C4Net offers the advantages of fewer parameters, faster processing speed, and higher classification accuracy, making it particularly suitable for large-scale data processing tasks. Furthermore, we employed the YL8C4Net model to conduct target source detection and classification on photometric images from 20 sky regions in SDSS-DR17. As a result, a catalog containing about 9.39 million target source classification results has been preliminarily constructed, thereby providing valuable reference data for astronomical research.
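For orientation, the detection stage of such a pipeline can be driven through the public ultralytics package roughly as sketched below; this is not the authors' YL8C4Net code, and the dataset YAML, weights file, and image path are placeholders.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                              # pretrained detection weights
# Fine-tune on an annotated photometric-image dataset (hypothetical YAML file).
model.train(data="photometric_sources.yaml", epochs=100, imgsz=640)

# Detect candidate sources on a cutout image (placeholder path).
results = model.predict("sdss_field_cutout.png", conf=0.25)
for box in results[0].boxes:
    # Bounding box in pixel coordinates plus the detector's confidence score.
    print(box.xyxy.tolist(), float(box.conf))
```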
This paper provides a comprehensive introduction to the Mini-SiTian Real-time Image Processing pipeline (STRIP) and evaluates its operational performance. The STRIP pipeline is specifically designed for real-time alert triggering and light curve generation for transient sources. Applied to both simulated and real observational data of the Mini-SiTian survey, the pipeline successfully identified various types of variable sources, including stellar flares, supernovae, variable stars, and asteroids, while meeting the requirement of a reduction speed within 5 minutes. For the real observational data set, the pipeline detected one flare event, 127 variable stars, and 14 asteroids from three monitored sky regions. Additionally, two data sets were generated: a real-bogus training data set comprising 218,818 training samples, and a variable star light curve data set with 421 instances. These data sets will be used to train machine learning algorithms, which are planned for future integration into STRIP.
The increasing demand for high-resolution solar observations has driven the development of advanced data processing and enhancement techniques for ground-based solar telescopes. This study focuses on developing a Python-based package (GT-scopy) for data processing and enhancement for giant solar telescopes, with application to the 1.6 m Goode Solar Telescope (GST) at Big Bear Solar Observatory. The objective is to develop modern data processing software that refines existing data acquisition, processing, and enhancement methodologies to achieve atmospheric effect removal and accurate alignment at the sub-pixel level, particularly within processing levels 1.0-1.5. In this research, we implemented an integrated and comprehensive data processing procedure that includes image de-rotation, zone-of-interest selection, coarse alignment, correction for atmospheric distortions, and fine alignment at the sub-pixel level with an advanced algorithm. The results demonstrate a significant improvement in image quality, with enhanced visibility of fine solar structures both in sunspots and quiet-Sun regions. The data processing package developed in this study significantly improves the utility of data obtained from the GST, paving the way for more precise solar research and contributing to a better understanding of solar dynamics. This package can be adapted for other ground-based solar telescopes, such as the Daniel K. Inouye Solar Telescope (DKIST), the European Solar Telescope (EST), and the 8 m Chinese Giant Solar Telescope, potentially benefiting the broader solar physics community.
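One standard way to achieve sub-pixel fine alignment of the kind described above is phase cross-correlation with Fourier upsampling, available in scikit-image; the sketch below illustrates the principle on synthetic data and is not the GT-scopy implementation.

```python
import numpy as np
from scipy import ndimage
from skimage.registration import phase_cross_correlation

# Synthetic pair: a smoothed random field and a copy shifted by a known
# sub-pixel offset, standing in for two frames of the same solar region.
rng = np.random.default_rng(2)
reference = ndimage.gaussian_filter(rng.normal(size=(256, 256)), 3)
moving = ndimage.shift(reference, (0.4, -1.3))

# upsample_factor=100 resolves the shift to 1/100 of a pixel.
shift, error, _ = phase_cross_correlation(reference, moving, upsample_factor=100)
aligned = ndimage.shift(moving, shift)                 # re-register onto the reference
print("shift applied to re-register (y, x):", shift)   # magnitudes near (0.4, 1.3)
```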
In recent years, camouflage technology has evolved from single-spectral-band applications to multifunctional and multispectral implementations. Hyperspectral imaging has emerged as a powerful technique for target identification due to its capacity to capture both spectral and spatial information. The advancement of imaging spectroscopy technology has significantly enhanced reconnaissance capabilities, offering substantial advantages in camouflaged target classification and detection. However, the increasing spectral similarity between camouflaged targets and their backgrounds has significantly compromised detection performance in specific scenarios. Conventional feature extraction methods are often limited to single, shallow spectral or spatial features, failing to extract deep features and consequently yielding suboptimal classification accuracy. To address these limitations, this study proposes an innovative 3D-2D convolutional neural network architecture incorporating depthwise separable convolution (DSC) and attention mechanisms (AM). The framework first applies dimensionality reduction to hyperspectral images and extracts preliminary spectral-spatial features. It then employs an alternating combination of 3D and 2D convolutions for deep feature extraction. For target classification, the LogSoftmax function is implemented. The integration of depthwise separable convolution not only enhances classification accuracy but also substantially reduces model parameters. Furthermore, the attention mechanisms significantly improve the network's ability to represent multidimensional features. Extensive experiments were conducted on a custom land-based hyperspectral image dataset. The results demonstrate remarkable classification accuracy: 98.74% for grassland camouflage, 99.13% for dead leaf camouflage, and 98.94% for wild grass camouflage. Comparative analysis shows that the proposed framework is outstanding in terms of classification accuracy and robustness for camouflaged target classification.
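The named ingredients (a 3D-then-2D convolutional stem, depthwise separable convolution, channel attention, and a LogSoftmax head) can be assembled into a small PyTorch model as sketched below. The layer sizes, patch size, and number of classes are invented for illustration and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class SqueezeExcite(nn.Module):
    def __init__(self, ch, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(ch, ch // reduction), nn.ReLU(),
                                nn.Linear(ch // reduction, ch), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))            # channel attention weights
        return x * w[:, :, None, None]

class Hyperspectral3D2DNet(nn.Module):
    def __init__(self, bands=30, n_classes=4):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1)), nn.ReLU())
        self.conv2d = nn.Sequential(
            DepthwiseSeparableConv(8 * (bands - 6), 64), nn.ReLU(),
            SqueezeExcite(64))
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(64, n_classes), nn.LogSoftmax(dim=1))

    def forward(self, x):                          # x: (batch, 1, bands, H, W)
        x = self.conv3d(x)
        b, c, d, h, w = x.shape
        x = x.reshape(b, c * d, h, w)              # fold spectral depth into channels
        return self.head(self.conv2d(x))

logits = Hyperspectral3D2DNet()(torch.randn(2, 1, 30, 15, 15))
print(logits.shape)                                # torch.Size([2, 4]) of log-probabilities
```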
As a pathfinder for the SiTian project, the Mini-SiTian (MST) Array, which employs three commercial CMOS cameras, represents a next-generation, cost-effective optical time-domain survey project. This paper focuses primarily on the precise data processing pipeline designed for wide-field, CMOS-based devices, including the removal of instrumental effects, astrometry, photometry, and flux calibration. When this pipeline is applied to approximately 3000 observations taken in the Field 02 (f02) region by MST, the results demonstrate a remarkable astrometric precision of approximately 70–80 mas (about 0.1 pixel), an impressive calibration accuracy of approximately 1 mmag in the MST zero points, and a photometric accuracy of about 4 mmag for bright stars. Our studies demonstrate that the MST CMOS cameras can achieve photometric accuracy comparable to that of CCDs, highlighting the feasibility of large-scale CMOS-based optical time-domain surveys and their potential for cost optimization in future large-scale time-domain surveys, such as the SiTian project.
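As background for the quoted zero-point accuracy, the basic flux-calibration step amounts to measuring the offset between instrumental magnitudes, -2.5 log10(counts), and catalogue magnitudes of matched reference stars; a toy sketch with synthetic numbers follows (the real pipeline adds many refinements).

```python
import numpy as np

rng = np.random.default_rng(3)
true_zp = 25.31                                                 # invented zero point
catalog_mag = rng.uniform(13.0, 17.0, 500)                      # matched reference stars
counts = 10 ** (-0.4 * (catalog_mag - true_zp))
counts *= 1 + rng.normal(0, 0.004, counts.size)                 # ~4 mmag photometric scatter

instrumental_mag = -2.5 * np.log10(counts)
zero_point = np.median(catalog_mag - instrumental_mag)          # robust to outliers
print(f"recovered zero point: {zero_point:.4f} (true {true_zp})")
```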
Against the backdrop of massive sky survey data, the automated detection, classification, and parameter computation of targets have emerged as critical areas demanding urgent breakthroughs. However, in detection and classification tasks, model accuracy is often constrained by issues such as small target sizes and insufficient feature information. To address this challenge, we construct an innovative, fully automated astronomical image analysis pipeline that combines point source detection and classification, galaxy morphological classification, and parameter computation, forming an end-to-end solution. This pipeline achieves automated detection and morphological classification of both point sources and extended sources, and it is also able to compute the basic parameters of galaxy targets. The pipeline first accomplishes the detection and localization of target sources using the YOLOv9 model, and then leverages the optimized ResNet-AE model to initially categorize the detected targets into three major classes: stars, quasars, and galaxies. To tackle the problem of small sizes in some galaxy targets, we filtered out samples with larger sizes and distinct contours. Drawing on morphological characteristics, these samples were further classified into six categories via the DenseNet-SE4 model: barred spiral galaxies, cigar galaxies, elliptical galaxies, intermediate galaxies, spiral galaxies, and irregular galaxies. Following this classification, parameter computation was conducted on the targets. Experimental results show that the detection model achieves better performance than previous studies, with a mean average precision of 85.20% at Intersection over Union values ranging from 0.5 to 0.95. Both classification models also reach an accuracy of over 85% on the test set. Compared with classical CNN networks, these two classification models offer higher precision, and the computation of target parameters has also yielded reliable outcomes. Experiments verify that this pipeline can act as a supplementary tool for astronomical image processing and be applied to data mining and analysis work in sky surveys.
This article proposes a novel method to fuse infrared and visible light images based on region segmentation. Region segmentation is used to determine important regions and background information in the input image. The non-subsampled contourlet transform (NSCT) provides a flexible multiresolution, local, and directional image expansion, as well as a sparse representation for two-dimensional (2-D) piecewise smooth signals resembling images; different fusion rules are then applied to fuse the NSCT coefficients...
In order to obtain good welding quality, it is necessary to apply quality control, because there are many influencing factors in the laser welding process. The key to realizing welding quality control is obtaining quality information, and abundant weld quality information is contained in the weld pool and keyhole. For Nd:YAG laser welding of stainless steel, a coaxial visual sensing system was constructed and images of the weld pool and keyhole were obtained. Based on the gray-level character of the weld pool and keyhole in these images, an image processing algorithm was designed, and the search start point and search criteria for the weld pool and keyhole edges were determined respectively.
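The edge-search idea can be illustrated with a short sketch (not the paper's algorithm or parameters): starting from a seed inside the bright keyhole region, walk outward along a scan line until the gray level drops below a criterion, and record those positions as edge points. The synthetic frame and the threshold rule below are assumptions.

```python
import numpy as np

def edge_along_row(gray, seed_row, seed_col, criterion):
    row = gray[seed_row]
    right = seed_col
    while right + 1 < row.size and row[right + 1] >= criterion:
        right += 1                        # walk right while still "inside"
    left = seed_col
    while left - 1 >= 0 and row[left - 1] >= criterion:
        left -= 1                         # walk left while still "inside"
    return left, right                    # edge columns on this scan line

# Synthetic frame: dark pool with a bright, roughly circular keyhole.
y, x = np.mgrid[0:120, 0:160]
gray = 40 + 200 * np.exp(-((x - 80) ** 2 + (y - 60) ** 2) / (2 * 12 ** 2))

seed = np.unravel_index(np.argmax(gray), gray.shape)     # search start point
criterion = 0.5 * (gray.max() + gray.min())              # simple gray-level rule
print("keyhole edge columns on the seed row:", edge_along_row(gray, *seed, criterion))
```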