Early detection of Forest and Land Fires (FLF) is essential to prevent the rapid spread of fire as well as to minimize environmental damage. However, accurate detection under real-world conditions, such as low light, haze, and complex backgrounds, remains a challenge for computer vision systems. This study evaluates the impact of three image enhancement techniques—Histogram Equalization (HE), Contrast Limited Adaptive Histogram Equalization (CLAHE), and a hybrid method called DBST-LCM CLAHE—on the performance of the YOLOv11 object detection model in identifying fires and smoke. The D-Fire dataset, consisting of 21,527 annotated images captured under diverse environmental scenarios and illumination levels, was used to train and evaluate the model. Each enhancement method was applied to the dataset before training. Model performance was assessed using multiple metrics, including Precision, Recall, mean Average Precision at 50% IoU (mAP50), F1-score, and visual inspection of bounding box results. Experimental results show that all three enhancement techniques improved detection performance. HE yielded the highest mAP50 score of 0.771, along with a balanced precision of 0.784 and recall of 0.703, demonstrating strong generalization across different conditions. DBST-LCM CLAHE achieved the highest Precision score of 79%, effectively reducing false positives, particularly in scenes with dispersed smoke or complex textures. CLAHE, with slightly lower overall metrics, contributed to improved local feature detection. Each technique showed distinct advantages: HE enhanced global contrast; CLAHE improved local structure visibility; and DBST-LCM CLAHE provided an optimal balance through dynamic block sizing and local contrast preservation. These results underline the importance of selecting preprocessing methods according to detection priorities, such as minimizing false alarms or maximizing completeness. This research does not propose a new model architecture but rather benchmarks a recent lightweight detector, YOLOv11, combined with image enhancement strategies for practical deployment in FLF monitoring. The findings support the integration of preprocessing techniques to improve detection accuracy, offering a foundation for real-time FLF detection systems on edge devices or drones, particularly in regions like Indonesia.
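The global histogram equalization (HE) step benchmarked above can be sketched in a few lines. This is a minimal pure-Python illustration of the classic CDF remapping on a flat list of 8-bit gray values; the function name and the toy pixel strip are illustrative assumptions, not the authors' pipeline:

```python
def equalize_histogram(pixels, levels=256):
    """Global histogram equalization (HE) for a flat list of 8-bit gray values.

    Remaps intensities through the normalized cumulative distribution
    function so the output histogram is approximately uniform.
    """
    n = len(pixels)
    # Build the intensity histogram.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Classic HE mapping: scale the CDF to the full output range.
    lut = [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
           if n > cdf_min else v
           for v in range(levels)]
    return [lut[p] for p in pixels]

# A low-contrast strip clustered in [100, 103] spreads to the full range.
print(equalize_histogram([100, 100, 101, 101, 102, 102, 103, 103]))
```

CLAHE differs from this global mapping by equalizing small tiles with a clipped histogram and interpolating between them, which is what gives it the local-contrast advantage noted in the abstract.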
AIM: To find an effective contrast enhancement method on retinal images for effective segmentation of retinal features. METHODS: A novel image preprocessing method that used neighbourhood-based improved contrast limited adaptive histogram equalization (NICLAHE) to improve retinal image contrast was suggested, to aid in the accurate identification of retinal disorders and improve the visibility of fine retinal structures. Additionally, a minimal-order filter was applied to effectively denoise the images without compromising important retinal structures. The novel NICLAHE algorithm was inspired by the classical CLAHE algorithm, but enhanced it by selecting the clip limits and tile sizes dynamically relative to the pixel values in an image, as opposed to using fixed values. It was evaluated on the DRIVE and high-resolution fundus (HRF) datasets with conventional quality measures. RESULTS: The proposed preprocessing technique was applied to two retinal image databases, DRIVE and HRF, with four quality metrics: root mean square error (RMSE), peak signal-to-noise ratio (PSNR), root mean square contrast (RMSC), and overall contrast. The technique performed superiorly on both datasets compared to traditional enhancement methods. To assess the compatibility of the method with automated diagnosis, a deep learning framework named ResNet was applied to the segmentation of retinal blood vessels. Sensitivity, specificity, precision, and accuracy were used to analyse the performance. NICLAHE-enhanced images outperformed the traditional techniques on both datasets with improved accuracy. CONCLUSION: NICLAHE provides better results than traditional methods, with less error and improved contrast-related values. The enhanced images, measured by sensitivity, specificity, precision, and accuracy, yield better results in both datasets.
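NICLAHE's central idea, choosing the clip limit and tile size dynamically from the pixel statistics rather than fixing them, might be sketched as below. The scaling rules, clamping range, and function name are assumptions for illustration only; the paper's exact formulas are not reproduced:

```python
import statistics

def dynamic_clahe_params(tile, levels=256, base_clip=2.0):
    """Choose a CLAHE-style clip limit and tile size from pixel statistics.

    Hypothetical sketch of the NICLAHE idea: the clip limit grows with the
    tile's contrast (standard deviation), and the tile size shrinks for
    busier tiles. Both rules below are illustrative assumptions.
    """
    std = statistics.pstdev(tile)
    # Low-contrast tiles get a gentler clip limit, high-contrast a larger one.
    clip_limit = base_clip * (1.0 + std / levels)
    # Busier tiles (higher spread) use finer tiling; clamp to a sane range.
    tile_size = max(4, min(16, int(16 - std // 8)))
    return round(clip_limit, 3), tile_size

flat_tile = [120] * 64              # no contrast
busy_tile = list(range(0, 256, 4))  # full-range ramp, 64 values
print(dynamic_clahe_params(flat_tile))
print(dynamic_clahe_params(busy_tile))
```

The point of the sketch is the direction of adaptation: a flat retinal background keeps conservative parameters, while a vessel-rich region receives a larger clip limit and finer tiling.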
Eigenstructure-based coherence attributes are efficient and mature techniques for large-scale fracture detection. However, in horizontally bedded and continuous strata, buried fractures in high grayscale value zones are difficult to detect. Furthermore, middle- and small-scale fractures in fractured zones, where migration image energies are usually not concentrated perfectly, are also hard to detect because of the fuzzy, clouded shadows owing to low grayscale values. A new fracture enhancement method combined with histogram equalization is proposed to solve these problems. With this method, the contrast between discontinuities and background in coherence images is increased, linear structures are highlighted by stepwise adjustment of the threshold of the coherence image, and fractures are detected at different scales. Application of the method shows that it can also improve fracture cognition and accuracy.
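The stepwise threshold adjustment described above can be illustrated with a toy sketch: after equalization, sweeping the threshold extracts structures at successively finer scales. The values, step sequence, and function name are hypothetical; real coherence volumes are 3-D, and the polarity of "fracture-like" values depends on how the attribute is defined:

```python
def stepwise_threshold(values, steps):
    """Binarize an equalized fracture-likelihood image at a series of thresholds.

    Illustrative sketch of stepwise threshold adjustment: each pass keeps
    only responses stronger than the current threshold, so the strongest
    (large-scale) lineaments appear first and finer ones at lower steps.
    """
    results = {}
    for t in steps:
        results[t] = [1 if v >= t else 0 for v in values]
    return results

likelihood = [0.95, 0.40, 0.80, 0.10, 0.60]
masks = stepwise_threshold(likelihood, steps=[0.9, 0.7, 0.5])
for t, mask in sorted(masks.items(), reverse=True):
    print(t, mask)
```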
A new improved algorithm for histogram equalization is discussed and implemented through analysis of the traditional algorithm. The improved algorithm has a better effect than the traditional one, especially when used to process poor-quality images.
Histogram equalization is a traditional algorithm for improving image contrast, but it comes at the cost of mean brightness shift and loss of details. To solve these problems, a novel approach that processes foreground pixels and background pixels independently is proposed and investigated. Since details are mainly contained in the foreground, a weighted coupling of histogram equalization and the Laplace transform was adopted to balance contrast enhancement and detail preservation. The weighting factors of the image foreground and background were determined by the amount of their respective information. The proposed method was applied to images acquired from the CVG-UGR and USC-SIPI image databases and then compared with other methods, such as clipping histogram spikes, histogram addition, and non-linear transformation, to verify its validity. Results show that the proposed algorithm can effectively enhance contrast without introducing distortions, while preserving mean brightness and details well.
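The information-based weighting of foreground and background might look like the following sketch, which uses Shannon entropy as the "amount of information"; the normalization rule and the toy regions are assumptions, not the paper's exact formulation:

```python
import math

def entropy_weight(region_a, region_b, levels=256):
    """Weighting factors from the information content of two pixel regions.

    Sketch of the idea that foreground/background weights follow the amount
    of information (Shannon entropy) each region carries; normalizing the
    two entropies to sum to one is an assumed, illustrative choice.
    """
    def entropy(pixels):
        n = len(pixels)
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        # Shannon entropy in bits over the non-empty histogram bins.
        return -sum((c / n) * math.log2(c / n) for c in hist if c)

    ha, hb = entropy(region_a), entropy(region_b)
    total = ha + hb
    if total == 0:
        return 0.5, 0.5
    return ha / total, hb / total

# A detail-rich foreground gets a larger weight than a flat background.
foreground = [10, 80, 150, 220, 40, 120, 200, 60]
background = [128] * 8
print(entropy_weight(foreground, background))
```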
The traditional grayscale histogram of an input image is constructed by simply counting its pixels. Hence, the classical histogram equalization (HE) technique has fundamental defects such as overenhancement, underenhancement, and brightness drifting. This paper proposes an advanced HE based on a hybrid saliency map and a novel visual prior to address the defects mentioned above. First, the texture saliency map and attention weight map are constructed based on texture saliency and the visual attention mechanism. Later, the hybrid saliency map, obtained by fusing the texture and attention weight maps, is used to derive the saliency histogram. Then, a novel visual prior, the narrow dynamic range prior (NDP), is proposed, and the saliency histogram is modified by calculating the optimal parameter in combination with a binary optimization model. Next, the cumulative distribution function (CDF) is rectified to control the brightness. Finally, the hybrid saliency map is applied again for local enhancement. Compared with several state-of-the-art algorithms qualitatively and quantitatively, the proposed algorithm effectively improves the contrast of the image, generates better subjective visual perception, and presents broadly better performance.
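The saliency-histogram step can be sketched as a weighted histogram feeding the usual CDF mapping. The NDP modification, brightness rectification, and local enhancement stages are omitted, and the tiny 8-level example below is an illustrative assumption:

```python
def saliency_weighted_equalize(pixels, saliency, levels=8):
    """Histogram equalization driven by a per-pixel saliency map.

    Sketch of the saliency-histogram idea: each pixel contributes its
    saliency weight (not a plain count of 1) to its intensity bin, so
    salient regions dominate the mapping. The CDF scaling is the classic
    HE rule; the paper's later rectification stages are not modeled.
    """
    hist = [0.0] * levels
    for p, w in zip(pixels, saliency):
        hist[p] += w
    total = sum(hist)
    # Weighted CDF, then scale to the output range.
    cdf, acc = [], 0.0
    for h in hist:
        acc += h
        cdf.append(acc / total)
    lut = [round(c * (levels - 1)) for c in cdf]
    return [lut[p] for p in pixels]

pixels = [1, 1, 2, 6]
saliency = [1.0, 1.0, 1.0, 1.0]  # uniform weights reduce to plain HE
print(saliency_weighted_equalize(pixels, saliency))
```

With non-uniform weights, the bins of salient pixels absorb more of the output range, which is exactly the lever the hybrid saliency map provides.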
We propose a method for histogram equalization using supplement sets to improve the performance of speaker recognition when the training and test utterances are very short. The supplement sets are derived using outputs of selection or clustering algorithms from the background speakers' utterances. The proposed approach is used as a feature normalization method for building histograms when there are insufficient input utterance samples. In addition, the proposed method is used as an i-vector normalization method in an i-vector-based probabilistic linear discriminant analysis (PLDA) system, which is the current state of the art for speaker verification. The ranks of sample values for histogram equalization are estimated in ascending order from both the input utterances and the supplement set. New ranks are obtained by computing the sum of the different kinds of ranks. Subsequently, the proposed method determines the cumulative distribution function of the test utterance using the newly defined ranks. The proposed method is compared with conventional feature normalization methods, such as cepstral mean normalization (CMN), cepstral mean and variance normalization (MVN), histogram equalization (HEQ), and the European Telecommunications Standards Institute (ETSI) advanced front-end methods. In addition, performance is compared for a case in which the greedy selection algorithm is used with fuzzy C-means and K-means algorithms. The YOHO and Electronics and Telecommunications Research Institute (ETRI) databases are used in an evaluation in the feature space. The test sets are simulated by the Opus VoIP codec. We also use the 2008 National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) corpus for the i-vector system. The results of the experimental evaluation demonstrate that the average system performance is improved when the proposed method is used, compared to the conventional feature normalization methods.
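The rank-combination step can be sketched as follows. Tie handling, the selection or clustering of the supplement set, and the exact rank-summation rule are simplified assumptions here:

```python
def rank_cdf_with_supplement(input_values, supplement):
    """Per-sample CDF estimates from combined ranks, supplement-set style.

    Sketch: ranks are computed in ascending order over the input utterance
    and over the supplement set separately, then summed; the combined rank
    against the pooled size gives the CDF estimate used for equalization.
    """
    def rank_among(x, pool):
        # 1-based ascending rank of x within pool.
        return 1 + sum(1 for v in pool if v < x)

    pooled_size = len(input_values) + len(supplement)
    cdfs = []
    for x in input_values:
        combined_rank = rank_among(x, input_values) + rank_among(x, supplement) - 1
        cdfs.append(combined_rank / pooled_size)
    return cdfs

utterance = [0.2, 0.5, 0.9]  # too few samples for a stable histogram alone
supplement = [0.1, 0.3, 0.4, 0.6, 0.8, 1.0]
print(rank_cdf_with_supplement(utterance, supplement))
```

The supplement set effectively densifies the empirical CDF, which is the mechanism the method relies on when the utterance alone is too short.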
Background: Pneumonia remains a critical global health challenge, manifesting as a severe respiratory infection caused by viruses, bacteria, and fungi. Early detection is paramount for effective treatment, potentially reducing mortality rates and optimizing healthcare resource allocation. Despite the importance of chest X-ray diagnosis, image analysis presents significant challenges, particularly in regions with limited medical expertise. This study addresses these challenges by proposing a computer-aided diagnosis system leveraging targeted image preprocessing and optimized deep learning techniques. Methods: We systematically evaluated contrast limited adaptive histogram equalization with varying clip limits for preprocessing chest X-ray images, demonstrating its effectiveness in enhancing feature visibility for diagnostic accuracy. Employing a comprehensive dataset of 5,863 X-ray images (1,583 pneumonia-negative, 4,280 pneumonia-positive) collected from multiple healthcare facilities, we conducted a comparative analysis of transfer learning with pre-trained models including ResNet50v2, VGG-19, and MobileNetV2. Statistical validation was performed through 5-fold cross-validation. Results: Our results show that the contrast limited adaptive histogram equalization-enhanced approach with ResNet50v2 achieves 93.40% accuracy, outperforming VGG-19 (84.90%) and MobileNetV2 (89.70%). Statistical validation confirms the significance of these improvements (P<0.01). The development and optimization resulted in a lightweight mobile application (74 KB) providing rapid diagnostic support (1-2 s response time). Conclusion: The proposed approach demonstrates practical applicability in resource-constrained settings, balancing diagnostic accuracy with deployment efficiency, and offers a viable solution for computer-aided pneumonia diagnosis in areas with limited medical expertise.
To improve image quality under low-illumination conditions, a novel low-light image enhancement method based on multi-illumination estimation and multi-scale fusion (MIMS) is proposed in this paper. Firstly, the illumination is processed by contrast-limited adaptive histogram equalization (CLAHE), an adaptive complementary gamma function (ACG), and an adaptive detail-preserving S-curve (ADPS), respectively, to obtain three components. Then, the fusion-relevant features, exposure, and color contrast are selected as the weight maps. Subsequently, these components and weight maps are fused across multiple scales to generate the enhanced illumination. Finally, the enhanced images are obtained by multiplying the enhanced illumination and the reflectance. Compared with existing approaches, the proposed method achieves an average increase of 0.81% and 2.89% in the structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR), and a decrease of 6.17% and 32.61% in the natural image quality evaluator (NIQE) and gradient magnitude similarity deviation (GMSD), respectively.
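The fusion step can be sketched at a single scale as a normalized, per-pixel weighted blend of the three enhanced components; the weight values and the collapse of the multi-scale pyramid to one scale are simplifying assumptions:

```python
def fuse_components(components, weight_maps):
    """Pixel-wise weighted fusion of enhanced illumination components.

    Single-scale sketch of the MIMS fusion step: three enhanced versions
    (e.g. from CLAHE, gamma, and S-curve processing) are blended with
    normalized per-pixel weight maps. The paper's multi-scale (pyramid)
    blending is collapsed to one scale here for brevity.
    """
    n_pixels = len(components[0])
    fused = []
    for i in range(n_pixels):
        weights = [wm[i] for wm in weight_maps]
        total = sum(weights) or 1.0  # guard against all-zero weights
        fused.append(sum(c[i] * w for c, w in zip(components, weights)) / total)
    return fused

clahe_out = [0.2, 0.6, 0.9]
gamma_out = [0.4, 0.5, 0.8]
scurve_out = [0.3, 0.7, 1.0]
# Hypothetical weight maps (exposure / contrast scores), one per component.
weights = ([1.0, 0.0, 1.0], [1.0, 1.0, 1.0], [0.0, 1.0, 2.0])
print(fuse_components([clahe_out, gamma_out, scurve_out], weights))
```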
Recent contrast enhancement (CE) methods, with a few exceptions, predominantly focus on enhancing gray-scale images. This paper proposes a bi-histogram shifting contrast enhancement for color images based on the RGB (red, green, and blue) color model. The proposed method selects the two highest bins and two lowest bins from the image histogram and performs an equalized number of bidirectional histogram shifting repetitions on each RGB channel while embedding secret data into the marked images. The proposed method simultaneously performs both right histogram shifting (RHS) and left histogram shifting (LHS) in each repetition to embed data and split the highest bins, while combining the lowest bins with their neighbors to achieve histogram equalization (HE). The least maximum number of histogram shifting repetitions among the three RGB channels is used as the default number of repetitions performed to enhance the original images. Compared to an existing contrast enhancement method for color images and evaluated with the PSNR, SSIM, RCE, and RMBE quality assessment metrics, the experimental results show that the proposed method's enhanced images are visually and qualitatively superior, with a more evenly distributed histogram. The proposed method achieves higher embedding capacities and embedding rates in all images, with an average increase in embedding capacity of 52.1%.
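A single right-histogram-shifting (RHS) repetition on one channel might be sketched as below; the peak selection, bit-embedding rule, and toy data are simplified assumptions, and the mirrored LHS pass plus the per-channel repetition counting are omitted:

```python
def rhs_step(pixels, levels=16, bits=None):
    """One right histogram shifting (RHS) repetition on one channel.

    Sketch of the peak-splitting step: intensities above the highest bin
    shift right by one to free a bin, then each peak pixel either stays
    or moves into the freed bin according to an embedded bit, splitting
    the peak and flattening the histogram.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    peak = hist.index(max(hist[:-1]))  # highest bin (keep right headroom)
    bits = iter(bits or [])
    out = []
    for p in pixels:
        if p > peak:
            out.append(p + 1)              # shift right to free bin peak+1
        elif p == peak:
            out.append(p + next(bits, 0))  # split the peak by embedded bits
        else:
            out.append(p)
    return out, peak

shifted, peak = rhs_step([5, 5, 5, 5, 7, 9], bits=[0, 1, 0, 1])
print(peak, shifted)
```

Repeating the step (and its LHS mirror on the lowest bins) progressively evens out the histogram while carrying the embedded payload, which is the dual purpose the abstract describes.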
In this paper the application of image enhancement techniques to potential field data is briefly described and two improved enhancement methods are introduced. One method is derived from the histogram equalization technique and automatically determines the color spectra of geophysical maps. Colors can be properly distributed, and visual effects and resolution can be enhanced by the method. The other method is based on the modified Radon transform and gradient calculation and is used to detect and enhance linear features in gravity and magnetic images. The method facilitates the detection of line segments in the transform domain. Tests with synthetic images and real data show the methods to be effective in feature enhancement.
Since real-time image processing requires a vast amount of computation and high-speed hardware, it is difficult to implement with a general microcomputer system. In order to solve this problem, a powerful digital signal processing (DSP) hardware system that is able to meet the needs of real-time image processing is proposed. There are many approaches to enhancing infrared images, but only histogram equalization is discussed here because it is the most common and effective one. On the basis of the histogram equalization principle, the specific procedures implemented on the DSP are shown. Finally, the experimental results are given.
Image enhancement technology plays a very important role in improving image quality in image processing. By selectively enhancing some information and restraining other information, it can improve the visual effect of an image. The objective of this work is to implement image enhancement on gray-scale images using different techniques. After the fundamental methods of image enhancement processing are demonstrated, image enhancement algorithms based on the spatial and frequency domains are systematically investigated and compared, and the advantages and defects of the above-mentioned algorithms are analyzed. The algorithms of wavelet-based image enhancement are also deduced and generalized. Wavelet transform modulus maxima (WTMM) is a method for detecting the fractal dimension of a signal, and it is well suited to image enhancement. The techniques are compared using the mean (μ), standard deviation (σ), mean square error (MSE), and peak signal-to-noise ratio (PSNR). A group of experimental results demonstrates that the image enhancement algorithm based on the wavelet transform is effective for image de-noising and enhancement, and the wavelet transform modulus maxima method is one of the best methods for image enhancement.
Alzheimer's Disease (AD) is a progressive neurological disease, and early diagnosis of this illness using conventional methods is very challenging. Deep Learning (DL) is one of the finest solutions for improving diagnostic procedures' performance and forecast accuracy. The disease's widespread distribution and elevated mortality rate demonstrate its significance in both the older-onset and younger-onset age groups. In light of research investigations, it is vital to consider age as one of the key criteria when choosing the subjects; younger subjects are more susceptible than older-onset ones, so the proposed investigation concentrated on younger onset. The research used deep learning models and neuroimages to automatically diagnose and categorize the disease at its early stages. The proposed work is executed in three steps. The 3D input images first undergo image pre-processing using Wiener filtering and Contrast Limited Adaptive Histogram Equalization (CLAHE) methods. Transfer Learning (TL) models then extract features, which are subsequently compressed using cascaded Auto Encoders (AE). The final phase entails using a Deep Neural Network (DNN) to classify the phases of AD. The model was trained and tested to classify the five stages of AD. The ensemble of ResNet-18 and a sparse autoencoder with a DNN achieved an accuracy of 98.54%. The method is compared to state-of-the-art approaches to validate its efficacy and performance.
Lung cancer is one of the hazardous diseases that have to be detected in earlier stages to provide better treatment and clinical support to patients. For lung cancer diagnosis, computed tomography (CT) scan images are to be processed with image processing techniques, and an effective classification process is required for appropriate cancer diagnosis. In the present scenario of medical data processing, the cancer detection process is very time-consuming and demands exactitude. Hence, this paper develops an improved model for lung cancer segmentation and classification using a genetic algorithm. In the model, the input CT images are pre-processed with filters, namely the adaptive median filter and the average filter. The filtered images are enhanced with histogram equalization, and the ROI (Region of Interest) cancer tissues are segmented using the Guaranteed Convergence Particle Swarm Optimization technique. For classification of the images, Probabilistic Neural Network (PNN)-based classification is used. The experimentation is carried out by simulating the model in MATLAB, with input CT lung images from the LIDC-IDRI (Lung Image Database Consortium-Image Database Resource Initiative) benchmark dataset. The results ensure that the proposed model outperforms existing methods, with accurate classification results and minimal processing time.
Images captured outdoors usually degrade because of bad weather conditions, among which fog, one of the most widespread phenomena, affects video quality greatly. The physical features of fog make the video blurred and shorten the visible distance, seriously impairing the reliability of the video system. In order to satisfy the requirement of real-time image processing, normal distribution curve fitting is used to fit the histogram of the sky part, and the region growing method is used to segment the sky region. As for the non-sky part, a method of self-adaptive interpolation to equalize the histogram is adopted to enhance the contrast of the images. Experimental results show that the method works well and does not cause block effects.
The image from the automatic temperature testing system used for meteorological measurement is defective. To solve problems such as noise and insufficient contrast, a research program for image pretreatment was put forward: the median filter, histogram equalization, and image binarization methods were used to remove noise and enhance the images. Results showed that the feature points were clear and accurate after the experiment. This simulation experiment prepared for the subsequent recognition process.
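The pretreatment pipeline (median filtering followed by binarization) can be sketched on a 1-D scanline; histogram equalization would sit between the two steps and is omitted here for brevity. The window size, threshold, and toy signal are illustrative assumptions:

```python
def median_filter_1d(signal, k=3):
    """Median filter with a sliding window of width k (edges kept as-is)."""
    half = k // 2
    out = list(signal)
    for i in range(half, len(signal) - half):
        out[i] = sorted(signal[i - half:i + half + 1])[k // 2]
    return out

def binarize(signal, threshold):
    """Simple global binarization after enhancement."""
    return [1 if v >= threshold else 0 for v in signal]

# A scanline with salt-and-pepper noise: the median filter removes the
# isolated spikes, and thresholding then yields a clean binary mask.
noisy = [10, 10, 200, 10, 12, 240, 245, 11, 242]
denoised = median_filter_1d(noisy)
print(denoised)
print(binarize(denoised, threshold=128))
```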
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to boost Alzheimer's diagnosis from MRI scans. Image enhancement is done using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model gives a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment, with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while decreasing misdiagnosis for every AD stage. The confusion matrix analysis shows that the model clearly separates the AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer's diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
Quantized neural networks (QNNs), which use low-bitwidth numbers for representing parameters and performing computations, have been proposed to reduce the computation complexity, storage size, and memory usage. In QNNs, parameters and activations are uniformly quantized, such that the multiplications and additions can be accelerated by bitwise operations. However, distributions of parameters in neural networks are often imbalanced, such that the uniform quantization determined from extremal values may underutilize the available bitwidth. In this paper, we propose a novel quantization method that can ensure balanced distributions of quantized values. Our method first recursively partitions the parameters by percentiles into balanced bins and then applies uniform quantization. We also introduce computationally cheaper approximations of percentiles to reduce the computation overhead. Overall, our method improves the prediction accuracies of QNNs without introducing extra computation during inference, has negligible impact on training speed, and is applicable to both convolutional neural networks and recurrent neural networks. Experiments on standard datasets including ImageNet and Penn Treebank confirm the effectiveness of our method. On ImageNet, the top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7%, which is superior to the state of the art for QNNs.
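The percentile-based balanced partition can be sketched with a single sort-based split; the paper's recursive partitioning and cheap percentile approximations are replaced by this simplified assumption:

```python
def balanced_quantize(values, bits=2):
    """Balanced quantization: percentile partition into equal-population bins.

    Sketch of the idea: parameters are partitioned by percentiles into
    2**bits equally populated bins (rather than uniform ranges derived
    from extremal values), so every quantized code is used equally often
    even for heavily imbalanced distributions.
    """
    n_bins = 2 ** bits
    order = sorted(range(len(values)), key=lambda i: values[i])
    codes = [0] * len(values)
    # Assign each value the index of its percentile bin.
    for rank, idx in enumerate(order):
        codes[idx] = min(n_bins - 1, rank * n_bins // len(values))
    return codes

# A heavily skewed distribution (one outlier) still fills all four
# 2-bit codes evenly; uniform range-based quantization would collapse
# the seven small values into a single code.
weights = [0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 10.0]
print(balanced_quantize(weights, bits=2))
```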
Purpose - The purpose of this study is to develop a hybrid algorithm for segmenting tumors from ultrasound images of the liver. Design/methodology/approach - After collecting the ultrasound images, the contrast-limited adaptive histogram equalization approach (CLAHE) is applied as preprocessing in order to enhance the visual quality of the images, which helps in better segmentation. Then, adaptively regularized kernel-based fuzzy C-means (ARKFCM) is used to segment the tumor from the enhanced image, along with the local ternary pattern combined with selective level set approaches. Findings - The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images at a lower computation cost. The proposed segmentation algorithm is compared with existing algorithms and ground truth values in terms of the Jaccard coefficient, dice coefficient, precision, Matthews correlation coefficient, f-score, and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and a 92.17% f-score, which is better than the existing algorithms. Practical implications - From the experimental analysis, the proposed ARKFCM with the enhanced level set algorithm obtained better performance in ultrasound liver tumor segmentation than the graph-based algorithm, with a 3.11% improvement in dice coefficient. Originality/value - The image preprocessing is carried out using the CLAHE algorithm. The preprocessed image is segmented by employing the selective level set model and the Local Ternary Pattern in the ARKFCM algorithm. In this research, the proposed algorithm has advantages such as independence from clustering parameters, robustness in preserving image details, and optimality in finding the threshold value, which effectively reduces the computational cost.
Funding: Funded by the Directorate of Research, Technology, and Community Service, Ministry of Higher Education, Science, and Technology of the Republic of Indonesia, under the Regular Fundamental Research scheme, grant numbers 001/LL6/PL/AL.04/2025 and 011/SPK-PFR/RIK/05/2025.
Abstract: AIM: To find an effective contrast enhancement method for retinal images that supports effective segmentation of retinal features. METHODS: A novel image preprocessing method that used neighbourhood-based improved contrast limited adaptive histogram equalization (NICLAHE) to improve retinal image contrast was proposed to aid in the accurate identification of retinal disorders and improve the visibility of fine retinal structures. Additionally, a minimal-order filter was applied to effectively denoise the images without compromising important retinal structures. The novel NICLAHE algorithm was inspired by the classical CLAHE algorithm, but enhanced it by selecting the clip limits and tile sizes dynamically according to the pixel values in an image, as opposed to using fixed values. It was evaluated on the Drive and high-resolution fundus (HRF) datasets using conventional quality measures. RESULTS: The proposed preprocessing technique was applied to two retinal image databases, Drive and HRF, with four quality metrics: root mean square error (RMSE), peak signal-to-noise ratio (PSNR), root mean square contrast (RMSC), and overall contrast. The technique performed better on both datasets than the traditional enhancement methods. To assess the compatibility of the method with automated diagnosis, a deep learning framework named ResNet was applied to the segmentation of retinal blood vessels. Sensitivity, specificity, precision, and accuracy were used to analyse the performance. NICLAHE-enhanced images outperformed the traditional techniques on both datasets with improved accuracy. CONCLUSION: NICLAHE provides better results than traditional methods, with less error and improved contrast-related values. The enhanced images, evaluated by sensitivity, specificity, precision, and accuracy, yield better results on both datasets.
Funding: Sponsored by the National Science & Technology Major Special Project (Grant No. 2011ZX05025-001-04).
Abstract: Eigenstructure-based coherence attributes are efficient and mature techniques for large-scale fracture detection. However, in horizontally bedded and continuous strata, buried fractures in high grayscale value zones are difficult to detect. Furthermore, middle- and small-scale fractures in fractured zones, where migration image energies are usually not concentrated perfectly, are also hard to detect because of the fuzzy, clouded shadows caused by low grayscale values. A new fracture enhancement method combined with histogram equalization is proposed to solve these problems. With this method, the contrast between discontinuities and background in coherence images is increased, linear structures are highlighted by stepwise adjustment of the threshold of the coherence image, and fractures are detected at different scales. Application of the method shows that it can also improve fracture recognition and accuracy.
Abstract: A new improved histogram equalization algorithm is discussed and implemented based on an analysis of the traditional algorithm. The improved algorithm performs better than the traditional one, especially when used to process poor-quality images.
Funding: Sponsored by the National Key R&D Program of China (Grant No. 2018YFB1308700) and the Research and Development Project of Key Core Technology and Common Technology in Shanxi Province (Grant Nos. 2020XXX001, 2020XXX009).
Abstract: Histogram equalization is a traditional algorithm for improving image contrast, but it comes at the cost of mean brightness shift and loss of detail. To solve these problems, a novel approach that processes foreground pixels and background pixels independently is proposed and investigated. Since details are mainly contained in the foreground, a weighted coupling of histogram equalization and the Laplace transform was adopted to balance contrast enhancement and detail preservation. The weighting factors of the image foreground and background were determined by the amount of their respective information. The proposed method was applied to images from the CVG-UGR and US-SIPI image databases and then compared with other methods, such as clipping histogram spikes, histogram addition, and non-linear transformation, to verify its validity. Results show that the proposed algorithm can effectively enhance contrast without introducing distortions, while preserving the mean brightness and details well.
Abstract: The traditional grayscale histogram of an input image is constructed by simply counting its pixels. Hence, the classical histogram equalization (HE) technique has fundamental defects such as overenhancement, underenhancement, and brightness drifting. This paper proposes an advanced HE based on a hybrid saliency map and a novel visual prior to address the defects mentioned above. First, the texture saliency map and attention weight map are constructed based on texture saliency and the visual attention mechanism. Then, the hybrid saliency map obtained by fusing the texture and attention weight maps is used to derive the saliency histogram. Next, a novel visual prior, the narrow dynamic range prior (NDP), is proposed, and the saliency histogram is modified by calculating the optimal parameter in combination with a binary optimization model. The cumulative distribution function (CDF) is then rectified to control the brightness. Finally, the hybrid saliency map is applied again for local enhancement. Compared qualitatively and quantitatively with several state-of-the-art algorithms, the proposed algorithm effectively improves image contrast, generates better subjective visual perception, and achieves better overall performance.
Funding: Project supported by the IT R&D Program of MOTIE/KEIT (No. 10041610).
Abstract: We propose a method for histogram equalization using supplement sets to improve the performance of speaker recognition when the training and test utterances are very short. The supplement sets are derived from the background speakers' utterances using the outputs of selection or clustering algorithms. The proposed approach is used as a feature normalization method for building histograms when there are insufficient input utterance samples. In addition, the proposed method is used as an i-vector normalization method in an i-vector-based probabilistic linear discriminant analysis (PLDA) system, which is the current state of the art for speaker verification. The ranks of sample values for histogram equalization are estimated in ascending order from both the input utterances and the supplement set. New ranks are obtained by computing the sum of the different kinds of ranks. Subsequently, the proposed method determines the cumulative distribution function of the test utterance using the newly defined ranks. The proposed method is compared with conventional feature normalization methods, such as cepstral mean normalization (CMN), cepstral mean and variance normalization (MVN), histogram equalization (HEQ), and the European Telecommunications Standards Institute (ETSI) advanced front-end methods. In addition, performance is compared for the case in which the greedy selection algorithm is used with fuzzy C-means and K-means algorithms. The YOHO and Electronics and Telecommunications Research Institute (ETRI) databases are used in an evaluation in the feature space. The test sets are simulated with the Opus VoIP codec. We also use the 2008 National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) corpus for the i-vector system. The results of the experimental evaluation demonstrate that the average system performance is improved when the proposed method is used, compared to the conventional feature normalization methods.
Abstract: Background: Pneumonia remains a critical global health challenge, manifesting as a severe respiratory infection caused by viruses, bacteria, and fungi. Early detection is paramount for effective treatment, potentially reducing mortality rates and optimizing healthcare resource allocation. Despite the importance of chest X-ray diagnosis, image analysis presents significant challenges, particularly in regions with limited medical expertise. This study addresses these challenges by proposing a computer-aided diagnosis system leveraging targeted image preprocessing and optimized deep learning techniques. Methods: We systematically evaluated contrast limited adaptive histogram equalization with varying clip limits for preprocessing chest X-ray images, demonstrating its effectiveness in enhancing feature visibility for diagnostic accuracy. Employing a comprehensive dataset of 5,863 X-ray images (1,583 pneumonia-negative, 4,280 pneumonia-positive) collected from multiple healthcare facilities, we conducted a comparative analysis of transfer learning with pre-trained models including ResNet50v2, VGG-19, and MobileNetV2. Statistical validation was performed through 5-fold cross-validation. Results: Our results show that the contrast limited adaptive histogram equalization-enhanced approach with ResNet50v2 achieves 93.40% accuracy, outperforming VGG-19 (84.90%) and MobileNetV2 (89.70%). Statistical validation confirms the significance of these improvements (P<0.01). Development and optimization resulted in a lightweight mobile application (74 KB) providing rapid diagnostic support (1-2 s response time). Conclusion: The proposed approach demonstrates practical applicability in resource-constrained settings, balancing diagnostic accuracy with deployment efficiency, and offers a viable solution for computer-aided pneumonia diagnosis in areas with limited medical expertise.
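The clip limit tuned in the study above is the key difference between CLAHE and plain HE: histogram bins above the limit are clipped and the excess is redistributed, which caps the slope of the intensity mapping and so limits noise amplification. A minimal global sketch of just the clipping step follows (full CLAHE additionally applies this per tile with bilinear interpolation between tile mappings; the names here are illustrative):

```python
import numpy as np

def clipped_equalize(img: np.ndarray, clip_limit: float = 2.0) -> np.ndarray:
    """Histogram equalization with a CLAHE-style clip limit.

    Bins above clip_limit * mean_bin_count are clipped and the excess is
    redistributed uniformly over all bins, capping contrast amplification.
    """
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    limit = clip_limit * hist.mean()
    excess = np.maximum(hist - limit, 0.0).sum()
    hist = np.minimum(hist, limit) + excess / 256.0  # clip, then redistribute
    cdf = hist.cumsum()
    lut = np.round(cdf / cdf[-1] * 255.0).astype(np.uint8)
    return lut[img]

# Low-contrast synthetic image: values squeezed into [90, 130].
rng = np.random.default_rng(1)
img = rng.integers(90, 131, size=(64, 64), dtype=np.uint8)
out = clipped_equalize(img, clip_limit=3.0)
```

Raising the clip limit makes the result approach plain HE; lowering it toward 1.0 makes the mapping approach the identity, which is why the clip limit is the natural hyperparameter to sweep, as the pneumonia study does.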
Funding: Supported by the National Key R&D Program of China (No. 2022YFB3205101) and NSAF (No. U2230116).
Abstract: To improve image quality under low-illumination conditions, a novel low-light image enhancement method based on multi-illumination estimation and multi-scale fusion (MIMS) is proposed in this paper. First, the illumination is processed by contrast-limited adaptive histogram equalization (CLAHE), an adaptive complementary gamma function (ACG), and an adaptive detail-preserving S-curve (ADPS), respectively, to obtain three components. Then, the fusion-relevant features, exposure, and color contrast are selected as the weight maps. Subsequently, these components and weight maps are fused at multiple scales to generate the enhanced illumination. Finally, the enhanced images are obtained by multiplying the enhanced illumination and the reflectance. Compared with existing approaches, the proposed method achieves average increases of 0.81% and 2.89% in the structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR), and decreases of 6.17% and 32.61% in the natural image quality evaluator (NIQE) and gradient magnitude similarity deviation (GMSD), respectively.
Funding: Supported in part by the National Natural Science Foundation of China under Grant No. 61662039, in part by the Jiangxi Key Natural Science Foundation under Grant No. 20192ACBL20031, in part by the Startup Foundation for Introducing Talent of Nanjing University of Information Science and Technology (NUIST) under Grant No. 2019r070, and in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions (PAPD) Fund.
Abstract: Recent contrast enhancement (CE) methods, with a few exceptions, predominantly focus on enhancing gray-scale images. This paper proposes a bi-histogram shifting contrast enhancement for color images based on the RGB (red, green, and blue) color model. The proposed method selects the two highest bins and two lowest bins from the image histogram and performs an equal number of bidirectional histogram shifting repetitions on each RGB channel while embedding secret data into the marked images. In each repetition, the method simultaneously performs right histogram shifting (RHS) and left histogram shifting (LHS) to embed data and split the highest bins while combining the lowest bins with their neighbors, thereby achieving histogram equalization (HE). The smallest maximum number of shifting repetitions among the three RGB channels is used as the default number of repetitions performed to enhance the original images. Compared to an existing contrast enhancement method for color images and evaluated with the PSNR, SSIM, RCE, and RMBE quality assessment metrics, the experimental results show that the proposed method's enhanced images are visually and qualitatively superior, with a more evenly distributed histogram. The proposed method achieves higher embedding capacities and embedding rates in all images, with an average increase in embedding capacity of 52.1%.
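The histogram-shifting primitive this abstract builds on comes from reversible data hiding: shifting the bins between a peak bin and an empty bin frees a slot next to the peak, into which one bit per peak pixel can be embedded. Below is a minimal single-channel sketch of the classic right histogram shifting (RHS) step only; the paper's bi-directional, per-channel color scheme is more elaborate, and all names here are illustrative:

```python
import numpy as np

def rhs_embed(img: np.ndarray, bits):
    """Right histogram shifting (RHS) on one 8-bit channel.

    Pixels strictly between the peak bin and the nearest empty bin to
    its right are shifted right by 1, freeing bin peak+1; each pixel
    equal to the peak then encodes one bit (peak -> 0, peak+1 -> 1).
    """
    hist = np.bincount(img.ravel(), minlength=256)
    peak = int(hist.argmax())
    empties = np.where(hist[peak + 1:] == 0)[0]
    if empties.size == 0:
        raise ValueError("no empty bin to the right of the peak")
    zero = peak + 1 + int(empties[0])
    out = img.astype(np.int32).copy()
    out[(out > peak) & (out < zero)] += 1          # open up bin peak+1
    flat = out.ravel()                             # view into out
    sites = np.where(flat == peak)[0][: len(bits)]  # first peak pixels
    flat[sites] += np.asarray(bits[: len(sites)], dtype=np.int32)
    return out.astype(np.uint8), peak, zero

rng = np.random.default_rng(2)
img = rng.integers(60, 100, size=(32, 32), dtype=np.uint8)  # bins >= 100 empty
marked, peak, zero = rhs_embed(img, [1, 0, 1, 1])
```

Because every pixel moves by at most one gray level and the peak/zero pair is known, the original image and the payload are both exactly recoverable, which is the property that lets the paper combine enhancement with data embedding.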
Funding: This work is supported by a research project (Grant No. G20000467) of the Institute of Geology and Geophysics, CAS, and by the China Postdoctoral Science Foundation (No. 2004036083).
Abstract: In this paper the application of image enhancement techniques to potential field data is briefly described, and two improved enhancement methods are introduced. One method is derived from the histogram equalization technique and automatically determines the color spectra of geophysical maps. Colors can be properly distributed, and visual effects and resolution can be enhanced by the method. The other method is based on the modified Radon transform and gradient calculation and is used to detect and enhance linear features in gravity and magnetic images. The method facilitates the detection of line segments in the transform domain. Tests with synthetic images and real data show the methods to be effective in feature enhancement.
Abstract: Since real-time image processing requires a vast amount of computation and high-speed hardware, it is difficult to implement on a general microcomputer system. To solve this problem, a powerful digital signal processing (DSP) hardware system that meets the needs of real-time image processing is proposed. There are many approaches to enhancing infrared images, but only histogram equalization is discussed here because it is the most common and effective one. On the basis of the histogram equalization principle, the specific procedures implemented on the DSP are described. Finally, the experimental results are given.
Funding: Projects (61376076, 61274026, 61377024) supported by the National Natural Science Foundation of China; Projects (12C0108, 13C321) supported by the Scientific Research Fund of Hunan Provincial Education Department, China; Projects (2013FJ2011, 2014FJ2017, 2013FJ4232) supported by the Science and Technology Plan Foundation of Hunan Province, China.
Abstract: Image enhancement technology plays a very important role in improving image quality in image processing. By selectively enhancing some information and restraining other information, it can improve the visual effect of an image. The objective of this work is to apply image enhancement to gray-scale images using different techniques. After the fundamental methods of image enhancement are demonstrated, image enhancement algorithms based on the spatial and frequency domains are systematically investigated and compared, and the advantages and defects of these algorithms are analyzed. Algorithms for wavelet-based image enhancement are also derived and generalized. Wavelet transform modulus maxima (WTMM) is a method for detecting the fractal dimension of a signal and is well suited to image enhancement. The techniques are compared using the mean (μ), standard deviation (σ), mean square error (MSE), and peak signal-to-noise ratio (PSNR). A group of experimental results demonstrates that the wavelet-transform-based image enhancement algorithm is effective for image de-noising and enhancement, and that the wavelet transform modulus maxima method is one of the best methods for image enhancement.
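The MSE and PSNR figures quoted throughout these abstracts follow a standard definition: for 8-bit images, PSNR = 10·log10(255²/MSE). A minimal sketch with illustrative names:

```python
import numpy as np

def mse_psnr(ref: np.ndarray, test: np.ndarray, peak: float = 255.0):
    """Mean squared error and peak signal-to-noise ratio (in dB)
    between a reference image and a test image of the same shape."""
    err = ref.astype(np.float64) - test.astype(np.float64)
    mse = float(np.mean(err ** 2))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr

a = np.full((8, 8), 100, dtype=np.uint8)
b = a.copy()
b[0, 0] = 110                 # one pixel off by 10 gray levels
mse, psnr = mse_psnr(a, b)    # mse = 100/64; psnr just above 46 dB
```

Note that casting to float before subtracting is essential: differencing uint8 arrays directly would wrap around for negative errors and silently corrupt the metric.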
Abstract: Alzheimer’s Disease (AD) is a progressive neurological disease, and early diagnosis of this illness using conventional methods is very challenging. Deep Learning (DL) is one of the finest solutions for improving diagnostic performance and forecast accuracy. The disease’s widespread distribution and elevated mortality rate demonstrate its significance in both the older-onset and younger-onset age groups. In light of prior research, it is vital to consider age as one of the key criteria when choosing subjects, as younger-onset subjects are more vulnerable than older-onset ones. The proposed investigation concentrated on the younger onset. The research used deep learning models and neuroimages to diagnose and categorize the disease at its early stages automatically. The proposed work is executed in three steps. First, the 3D input images undergo preprocessing using Wiener filtering and Contrast Limited Adaptive Histogram Equalization (CLAHE). Then, Transfer Learning (TL) models extract features, which are subsequently compressed using cascaded Auto Encoders (AE). The final phase entails using a Deep Neural Network (DNN) to classify the stages of AD. The model was trained and tested to classify the five stages of AD. The ensemble of ResNet-18 and a sparse autoencoder with a DNN achieved an accuracy of 98.54%. The method is compared to state-of-the-art approaches to validate its efficacy and performance.
Abstract: Lung cancer is one of the hazardous diseases that must be detected in its earlier stages to provide better treatment and clinical support to patients. For lung cancer diagnosis, computed tomography (CT) scan images must be processed with image processing techniques, and an effective classification process is required for appropriate diagnosis. In the present scenario of medical data processing, cancer detection is very time-consuming and demands high precision. Therefore, this paper develops an improved model for lung cancer segmentation and classification using a genetic algorithm. In the model, the input CT images are pre-processed with adaptive median and average filters. The filtered images are enhanced with histogram equalization, and the ROI (Region of Interest) cancer tissues are segmented using the Guaranteed Convergence Particle Swarm Optimization technique. For classification of the images, Probabilistic Neural Network (PNN)-based classification is used. The experimentation is carried out by simulating the model in MATLAB with input CT lung images from the LIDC-IDRI (Lung Image Database Consortium-Image Database Resource Initiative) benchmark dataset. The results show that the proposed model outperforms existing methods, producing accurate classification results with minimal processing time.
Abstract: Images captured outdoors usually degrade because of bad weather conditions, among which fog, one of the most widespread phenomena, affects video quality greatly. The physical features of fog blur the video and shorten the visible distance, seriously impairing the reliability of the video system. In order to satisfy the requirement of real-time image processing, normal distribution curve fitting is used to fit the histogram of the sky region, and a region growing method is used to segment the sky. For the non-sky region, a self-adaptive interpolation method is adopted to equalize the histogram and enhance the contrast of the images. Experimental results show that the method works well and does not cause block effects.
Abstract: The automatic temperature testing system for meteorological measurement produces defective images with problems such as noise and insufficient contrast. To solve these problems, a research program for image preprocessing is put forward: median filtering, histogram equalization, and image binarization are used to remove noise and enhance the images. Results showed that feature points were clear and accurate after the experiment. This simulation experiment prepares the images for the subsequent recognition process.
Funding: Funded by the Deanship of Graduate Studies and Scientific Research at Jouf University under Grant No. DGSSR-2025-02-01295.
Abstract: Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to improve Alzheimer’s diagnosis from MRI scans. Image enhancement is done using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model gives a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment, with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while decreasing misdiagnosis for every AD stage. The confusion matrix analysis shows that the model clearly separates the AD progression stages. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
Abstract: Quantized neural networks (QNNs), which use low-bitwidth numbers for representing parameters and performing computations, have been proposed to reduce computation complexity, storage size, and memory usage. In QNNs, parameters and activations are uniformly quantized, so that multiplications and additions can be accelerated by bitwise operations. However, the distributions of parameters in neural networks are often imbalanced, so a uniform quantization determined from extremal values may underutilize the available bitwidth. In this paper, we propose a novel quantization method that ensures balanced distributions of quantized values. Our method first recursively partitions the parameters by percentiles into balanced bins and then applies uniform quantization. We also introduce computationally cheaper approximations of percentiles to reduce the computation overhead. Overall, our method improves the prediction accuracy of QNNs without introducing extra computation during inference, has negligible impact on training speed, and is applicable to both convolutional and recurrent neural networks. Experiments on standard datasets, including ImageNet and Penn Treebank, confirm the effectiveness of our method. On ImageNet, the top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7%, which is superior to the state of the art for QNNs.
Abstract: Purpose - The purpose of this study is to develop a hybrid algorithm for segmenting tumors from ultrasound images of the liver. Design/methodology/approach - After collecting the ultrasound images, the contrast-limited adaptive histogram equalization approach (CLAHE) is applied as preprocessing in order to enhance the visual quality of the images, which helps in better segmentation. Then, adaptively regularized kernel-based fuzzy C-means (ARKFCM) is used to segment the tumor from the enhanced image, along with a local ternary pattern combined with selective level set approaches. Findings - The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images with lower computation cost. The proposed segmentation algorithm is compared with existing algorithms and ground truth values in terms of Jaccard coefficient, Dice coefficient, precision, Matthews correlation coefficient, F-score, and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and a 92.17% F-score, which is better than the existing algorithms. Practical implications - In the experimental analysis, the proposed ARKFCM with the enhanced level set algorithm obtained better performance in ultrasound liver tumor segmentation than the graph-based algorithm, showing a 3.11% improvement in Dice coefficient. Originality/value - The image preprocessing is carried out using the CLAHE algorithm. The preprocessed image is segmented by employing the selective level set model and the local ternary pattern in the ARKFCM algorithm. The proposed algorithm has advantages such as independence from clustering parameters, robustness in preserving image details, and optimality in finding the threshold value, which effectively reduces the computational cost.