Journal Articles
20 articles found.
Integration of YOLOv11 and Histogram Equalization for Fire and Smoke-Based Detection of Forest and Land Fires
1
Authors: Christine Dewi, Melati Viaeritas Vitrieco Santoso, Hanna Prillysca Chernovita, Evangs Mailoa, Stephen Abednego Philemon Abbott, Po Shun Chen. Computers, Materials & Continua, 2025, Issue 9, pp. 5361-5379 (19 pages)
Early detection of Forest and Land Fires (FLF) is essential to prevent the rapid spread of fire as well as minimize environmental damage. However, accurate detection under real-world conditions, such as low light, haze, and complex backgrounds, remains a challenge for computer vision systems. This study evaluates the impact of three image enhancement techniques, Histogram Equalization (HE), Contrast Limited Adaptive Histogram Equalization (CLAHE), and a hybrid method called DBST-LCM CLAHE, on the performance of the YOLOv11 object detection model in identifying fires and smoke. The D-Fire dataset, consisting of 21,527 annotated images captured under diverse environmental scenarios and illumination levels, was used to train and evaluate the model. Each enhancement method was applied to the dataset before training. Model performance was assessed using multiple metrics, including Precision, Recall, mean Average Precision at 50% IoU (mAP50), F1-score, and visual inspection through bounding box results. Experimental results show that all three enhancement techniques improved detection performance. HE yielded the highest mAP50 score of 0.771, along with a balanced precision of 0.784 and recall of 0.703, demonstrating strong generalization across different conditions. DBST-LCM CLAHE achieved the highest Precision score of 79%, effectively reducing false positives, particularly in scenes with dispersed smoke or complex textures. CLAHE, with slightly lower overall metrics, contributed to improved local feature detection. Each technique showed distinct advantages: HE enhanced global contrast; CLAHE improved local structure visibility; and DBST-LCM CLAHE provided an optimal balance through dynamic block sizing and local contrast preservation. These results underline the importance of selecting preprocessing methods according to detection priorities, such as minimizing false alarms or maximizing completeness. This research does not propose a new model architecture but rather benchmarks a recent lightweight detector, YOLOv11, combined with image enhancement strategies for practical deployment in FLF monitoring. The findings support the integration of preprocessing techniques to improve detection accuracy, offering a foundation for real-time FLF detection systems on edge devices or drones, particularly in regions like Indonesia.
Keywords: histogram equalization; YOLO; forest and land fire detection; deep learning
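As a rough illustration of the preprocessing step benchmarked in this entry, the sketch below applies global histogram equalization and CLAHE to the luminance channel of an image before detector training; the file names and the clip-limit/tile settings are placeholder assumptions, and the DBST-LCM CLAHE hybrid from the paper is not reproduced.

```python
import cv2

# Minimal sketch of the two standard enhancement steps compared in the paper.
# "fire_scene.jpg" is a placeholder path.
img = cv2.imread("fire_scene.jpg")
ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)            # enhance luminance only
y, cr, cb = cv2.split(ycrcb)

y_he = cv2.equalizeHist(y)                                 # global histogram equalization
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
y_clahe = clahe.apply(y)                                   # locally adaptive equalization

he_img = cv2.cvtColor(cv2.merge([y_he, cr, cb]), cv2.COLOR_YCrCb2BGR)
clahe_img = cv2.cvtColor(cv2.merge([y_clahe, cr, cb]), cv2.COLOR_YCrCb2BGR)

cv2.imwrite("fire_scene_he.jpg", he_img)                   # enhanced copies used for training
cv2.imwrite("fire_scene_clahe.jpg", clahe_img)
```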
An improved neighbourhood-based contrast limited adaptive histogram equalization method for contrast enhancement on retinal images
2
Authors: Arjuna Arulraj, Jeya Sutha Mariadhason, Reena Rose Ronjalis. International Journal of Ophthalmology (English edition), 2025, Issue 12, pp. 2225-2236 (12 pages)
AIM: To find an effective contrast enhancement method for retinal images that supports effective segmentation of retinal features. METHODS: A novel image preprocessing method that uses neighbourhood-based improved contrast limited adaptive histogram equalization (NICLAHE) to improve retinal image contrast was proposed to aid the accurate identification of retinal disorders and improve the visibility of fine retinal structures. Additionally, a minimal-order filter was applied to effectively denoise the images without compromising important retinal structures. The NICLAHE algorithm is inspired by the classical CLAHE algorithm, but enhances it by selecting the clip limit and tile size dynamically from the pixel values of the image rather than using fixed values. It was evaluated on the DRIVE and high-resolution fundus (HRF) datasets using conventional quality measures. RESULTS: The proposed preprocessing technique was applied to two retinal image databases, DRIVE and HRF, with four quality metrics: root mean square error (RMSE), peak signal-to-noise ratio (PSNR), root mean square contrast (RMSC), and overall contrast. The technique performed better on both datasets than traditional enhancement methods. To assess the compatibility of the method with automated diagnosis, a deep learning framework, ResNet, was applied to the segmentation of retinal blood vessels, and sensitivity, specificity, precision, and accuracy were used to analyse the performance. NICLAHE-enhanced images outperformed the traditional techniques on both datasets with improved accuracy. CONCLUSION: NICLAHE provides better results than traditional methods, with lower error and improved contrast-related values. The enhanced images, evaluated by sensitivity, specificity, precision, and accuracy, yield better results on both datasets.
Keywords: contrast limited adaptive histogram equalization; retinal imaging; image preprocessing; contrast enhancement
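The exact NICLAHE rule for choosing the clip limit and tile size is not reproduced here; the following sketch only illustrates the general idea of deriving both parameters from image statistics instead of fixing them, using assumed heuristics.

```python
import cv2
import numpy as np

def adaptive_clahe(gray):
    """Illustrative CLAHE with data-driven parameters (not the authors' NICLAHE rule).

    The clip limit grows with global contrast (std/mean) and the tile count
    shrinks for smaller images; both heuristics are assumptions for illustration.
    """
    mean, std = float(gray.mean()), float(gray.std())
    clip = 1.0 + 3.0 * std / (mean + 1e-6)                # assumed heuristic
    tiles = max(4, min(16, gray.shape[0] // 64))          # assumed heuristic
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(tiles, tiles))
    return clahe.apply(gray)

# Usage on the green channel of a fundus image (vessels show the best contrast there).
fundus = cv2.imread("retina.png")                          # placeholder path
enhanced = adaptive_clahe(fundus[:, :, 1])
```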
A fracture enhancement method based on the histogram equalization of eigenstructure-based coherence (Cited by 7)
3
Authors: 窦喜英, 韩立国, 王恩利, 董雪华, 杨庆, 鄢高韩. Applied Geophysics (SCIE, CSCD), 2014, Issue 2, pp. 179-185, 253 (8 pages)
Eigenstructure-based coherence attributes are efficient and mature techniques for large-scale fracture detection. However, in horizontally bedded and continuous strata, buried fractures in high grayscale value zones are difficult to detect. Furthermore, middle- and small-scale fractures in fractured zones, where migration image energies are usually not concentrated perfectly, are also hard to detect because of the fuzzy, clouded shadows owing to low grayscale values. A new fracture enhancement method combined with histogram equalization is proposed to solve these problems. With this method, the contrast between discontinuities and background in coherence images is increased, linear structures are highlighted by stepwise adjustment of the threshold of the coherence image, and fractures are detected at different scales. Application of the method shows that it can also improve fracture cognition and accuracy.
Keywords: fault; fracture; histogram equalization; coherence; enhancement
Improved Algorithm of Histogram Equalization and Its Actualization (Cited by 2)
4
Authors: 仲伟波, 赵福军, 李鹏, 宁书年. Journal of China University of Mining and Technology, 2003, Issue 1, pp. 52-54 (3 pages)
A new improved algorithm of histogram equalization was discussed and implemented by analyzing the traditional algorithm. The improved algorithm achieves a better effect than the traditional one, especially when it is used to process poor-quality images.
Keywords: improvement; histogram equalization; algorithm; gray level
Contrast Enhancement Using Weighted Coupled Histogram Equalization with Laplace Transform (Cited by 1)
5
Authors: Huimin Hao, Wenbin Xin, He Wang, Yuan Lan, Xiaoyan Xiong, Jiahai Huang. Journal of Harbin Institute of Technology (New Series) (CAS), 2022, Issue 4, pp. 32-40 (9 pages)
Histogram equalization is a traditional algorithm for improving image contrast, but it comes at the cost of mean brightness shift and loss of details. To solve these problems, a novel approach that processes foreground pixels and background pixels independently is proposed and investigated. Since details are mainly contained in the foreground, a weighted coupling of histogram equalization and the Laplace transform is adopted to balance contrast enhancement and detail preservation. The weighting factors of the image foreground and background are determined by the amount of their respective information. The proposed method was applied to images from the CVG-UGR and US-SIPI image databases and compared with other methods such as clipping histogram spikes, histogram addition, and non-linear transformation to verify its validity. Results show that the proposed algorithm can effectively enhance contrast without introducing distortions, while preserving the mean brightness and details.
Keywords: contrast enhancement; weighted processing; histogram equalization; Laplace transform
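A minimal sketch of the weighted coupling idea described above is given below; the paper derives the foreground/background weights from their information content and processes the two regions separately, whereas this sketch uses a single fixed placeholder weight over the whole image.

```python
import cv2
import numpy as np

def he_laplace_blend(gray, w=0.7):
    """Illustrative weighted coupling of histogram equalization and Laplacian sharpening.

    `w` is a fixed placeholder weight; the paper computes region-specific weights
    from the information content of foreground and background.
    """
    equalized = cv2.equalizeHist(gray).astype(np.float32)  # contrast enhancement branch
    lap = cv2.Laplacian(gray, cv2.CV_32F, ksize=3)
    sharpened = gray.astype(np.float32) - lap               # detail-preservation branch
    out = w * equalized + (1.0 - w) * sharpened
    return np.clip(out, 0, 255).astype(np.uint8)
```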
Advanced Histogram Equalization Based on a Hybrid Saliency Map and Novel Visual Prior
6
Authors: Yuanbin Wu, Shengkui Dai, Zhan Ma. Machine Intelligence Research (EI, CSCD), 2024, Issue 6, pp. 1178-1191 (14 pages)
The traditional grayscale histogram of an input image is constructed by simply counting its pixels. Hence, the classical histogram equalization (HE) technique has fundamental defects such as over-enhancement, under-enhancement, and brightness drifting. This paper proposes an advanced HE based on a hybrid saliency map and a novel visual prior to address the defects mentioned above. First, the texture saliency map and attention weight map are constructed based on texture saliency and the visual attention mechanism. Later, the hybrid saliency map obtained by fusing the texture and attention weight maps is used to derive the saliency histogram. Then, a novel visual prior, the narrow dynamic range prior (NDP), is proposed, and the saliency histogram is modified by calculating the optimal parameter in combination with a binary optimization model. Next, the cumulative distribution function (CDF) is rectified to control the brightness. Finally, the hybrid saliency map is applied again for local enhancement. Compared with several state-of-the-art algorithms qualitatively and quantitatively, the proposed algorithm effectively improves image contrast, generates better subjective visual perception, and presents better performance broadly.
Keywords: texture saliency; attention mechanism; narrow dynamic range prior (NDP); brightness control; histogram equalization (HE); local enhancement
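To make the "saliency histogram" idea concrete, the short sketch below builds a grayscale histogram in which each pixel contributes its saliency weight rather than a count of one; constructing the hybrid saliency map itself is not shown, and the function name is illustrative.

```python
import numpy as np

def saliency_weighted_histogram(gray, saliency):
    """Grayscale histogram where each pixel adds its saliency weight to its bin.

    `gray` is an 8-bit image and `saliency` a same-shaped float map (assumed inputs).
    """
    return np.bincount(gray.ravel(),
                       weights=saliency.ravel().astype(np.float64),
                       minlength=256)
```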
Histogram equalization using a reduced feature set of background speakers' utterances for speaker recognition (Cited by 1)
7
Authors: Myung-jae KIM, Il-ho YANG, Min-seok KIM, Ha-jin YU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2017, Issue 5, pp. 738-750 (13 pages)
We propose a method for histogram equalization using supplement sets to improve the performance of speaker recognition when the training and test utterances are very short. The supplement sets are derived from the background speakers' utterances using the outputs of selection or clustering algorithms. The proposed approach is used as a feature normalization method for building histograms when there are insufficient input utterance samples. In addition, the proposed method is used as an i-vector normalization method in an i-vector-based probabilistic linear discriminant analysis (PLDA) system, which is the current state of the art for speaker verification. The ranks of sample values for histogram equalization are estimated in ascending order from both the input utterances and the supplement set. New ranks are obtained by computing the sum of the different kinds of ranks. Subsequently, the proposed method determines the cumulative distribution function of the test utterance using the newly defined ranks. The proposed method is compared with conventional feature normalization methods, such as cepstral mean normalization (CMN), cepstral mean and variance normalization (MVN), histogram equalization (HEQ), and the European Telecommunications Standards Institute (ETSI) advanced front-end methods. In addition, performance is compared for the case in which the greedy selection algorithm is used with fuzzy C-means and K-means algorithms. The YOHO and Electronics and Telecommunications Research Institute (ETRI) databases are used in an evaluation in the feature space. The test sets are simulated with the Opus VoIP codec. We also use the 2008 National Institute of Standards and Technology (NIST) speaker recognition evaluation (SRE) corpus for the i-vector system. The results of the experimental evaluation demonstrate that the average system performance is improved when the proposed method is used, compared to the conventional feature normalization methods.
Keywords: speaker recognition; histogram equalization; i-vector
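The following is an illustrative sketch of rank-based histogram equalization of a single feature dimension with a supplement set; pooling the ranks and mapping them to a Gaussian reference are assumptions made for the example and do not reproduce the paper's exact rank-combination rule.

```python
import numpy as np
from scipy.stats import norm, rankdata

def heq_with_supplement(feat, supplement):
    """Rank-based histogram equalization of one feature dimension (illustrative).

    `feat` holds the short test-utterance features and `supplement` comes from
    background speakers; combining their ranks approximates the CDF that a short
    utterance alone cannot estimate reliably. The Gaussian target distribution
    is an assumption.
    """
    pooled = np.concatenate([feat, supplement])
    ranks = rankdata(pooled)[: len(feat)]      # ranks of the test samples within the pool
    cdf = ranks / (len(pooled) + 1.0)          # empirical CDF estimate
    return norm.ppf(cdf)                       # map to a standard normal reference
```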
Enhanced pneumonia detection: leveraging CLAHE in a mobile application
8
Authors: Wilny Wilson P, J D Dorathi Jayaseeli. Biomedical Engineering Communications, 2025, Issue 4, pp. 18-35 (18 pages)
Background: Pneumonia remains a critical global health challenge, manifesting as a severe respiratory infection caused by viruses, bacteria, and fungi. Early detection is paramount for effective treatment, potentially reducing mortality rates and optimizing healthcare resource allocation. Despite the importance of chest X-ray diagnosis, image analysis presents significant challenges, particularly in regions with limited medical expertise. This study addresses these challenges by proposing a computer-aided diagnosis system leveraging targeted image preprocessing and optimized deep learning techniques. Methods: We systematically evaluated contrast limited adaptive histogram equalization with varying clip limits for preprocessing chest X-ray images, demonstrating its effectiveness in enhancing feature visibility for diagnostic accuracy. Employing a comprehensive dataset of 5,863 X-ray images (1,583 pneumonia-negative, 4,280 pneumonia-positive) collected from multiple healthcare facilities, we conducted a comparative analysis of transfer learning with pre-trained models including ResNet50v2, VGG-19, and MobileNetV2. Statistical validation was performed through 5-fold cross-validation. Results: Our results show that the contrast limited adaptive histogram equalization-enhanced approach with ResNet50v2 achieves 93.40% accuracy, outperforming VGG-19 (84.90%) and MobileNetV2 (89.70%). Statistical validation confirms the significance of these improvements (P < 0.01). The development and optimization resulted in a lightweight mobile application (74 KB) providing rapid diagnostic support (1-2 s response time). Conclusion: The proposed approach demonstrates practical applicability in resource-constrained settings, balancing diagnostic accuracy with deployment efficiency, and offers a viable solution for computer-aided pneumonia diagnosis in areas with limited medical expertise.
Keywords: pneumonia; contrast limited adaptive histogram equalization; deep learning; mobile application; chest X-ray; transfer learning
Low-light image enhancement based on multi-illumination estimation and multi-scale fusion
9
Authors: ZHANG Xin'ai, GAO Jing, NIE Kaiming, LUO Tao. Optoelectronics Letters, 2025, Issue 6, pp. 362-369 (8 pages)
To improve image quality under low illumination conditions, a novel low-light image enhancement method based on multi-illumination estimation and multi-scale fusion (MIMS) is proposed in this paper. Firstly, the illumination is processed by contrast-limited adaptive histogram equalization (CLAHE), an adaptive complementary gamma function (ACG), and an adaptive detail-preserving S-curve (ADPS), respectively, to obtain three components. Then, the fusion-relevant features, exposure and color contrast, are selected as the weight maps. Subsequently, these components and weight maps are fused across multiple scales to generate the enhanced illumination. Finally, the enhanced images are obtained by multiplying the enhanced illumination and the reflectance. Compared with existing approaches, the proposed method achieves an average increase of 0.81% and 2.89% in the structural similarity index measurement (SSIM) and peak signal-to-noise ratio (PSNR), and a decrease of 6.17% and 32.61% in the natural image quality evaluator (NIQE) and gradient magnitude similarity deviation (GMSD), respectively.
Keywords: adaptive detail preserving S-curve (ADPS); contrast limited adaptive histogram equalization (CLAHE); adaptive complementary gamma function (ACG); low-light image enhancement; multi-scale fusion; weight maps; multi-illumination estimation
A Bi-Histogram Shifting Contrast Enhancement for Color Images
10
Authors: Lord Amoah, Ampofo Twumasi Kwabena. Journal of Quantum Computing, 2021, Issue 2, pp. 65-77 (13 pages)
Recent contrast enhancement (CE) methods, with a few exceptions, predominantly focus on enhancing gray-scale images. This paper proposes a bi-histogram shifting contrast enhancement for color images based on the RGB (red, green, and blue) color model. The proposed method selects the two highest bins and two lowest bins from the image histogram and performs an equal number of bidirectional histogram shifting repetitions on each RGB channel while embedding secret data into the marked images. In each histogram shifting repetition, the method simultaneously performs right histogram shifting (RHS) and left histogram shifting (LHS) to embed data and split the highest bins while combining the lowest bins with their neighbors to achieve histogram equalization (HE). The smallest maximum number of histogram shifting repetitions among the three RGB channels is used as the default number of repetitions performed to enhance the original images. Compared with an existing contrast enhancement method for color images and evaluated with the PSNR, SSIM, RCE, and RMBE quality assessment metrics, the experimental results show that the proposed method's enhanced images are visually and qualitatively superior, with a more evenly distributed histogram. The proposed method achieves higher embedding capacities and embedding rates in all images, with an average increase in embedding capacity of 52.1%.
Keywords: contrast enhancement; bi-histogram shifting; histogram equalization
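As a simplified illustration of histogram shifting on one channel (the paper performs paired left and right shifts on the two highest and two lowest bins of each RGB channel), the sketch below shifts the histogram to the right of the single highest bin and embeds a bit stream into that bin; overflow handling and the equalization step are omitted, so this is not the authors' full procedure.

```python
import numpy as np

def right_histogram_shift_embed(channel, bits):
    """One rightward histogram-shifting step with data embedding (illustrative only)."""
    hist = np.bincount(channel.ravel(), minlength=256)
    peak = int(np.argmax(hist[:255]))          # peak bin, leaving room to shift right
    out = channel.astype(np.int32)
    out[channel > peak] += 1                   # open a gap at peak+1 (255-overflow ignored here)
    flat = out.ravel()                         # view into `out`
    idx = np.flatnonzero(channel.ravel() == peak)
    n = min(len(idx), len(bits))
    flat[idx[:n]] += np.asarray(bits[:n], dtype=np.int32)   # bit 1 -> peak+1, bit 0 -> peak
    return np.clip(out, 0, 255).astype(np.uint8)
```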
Application of Image Enhancement Techniques to Potential Field Data (Cited by 6)
11
Authors: 张丽莉, 郝天珧, 吴健生, 王家林. Applied Geophysics (SCIE, CSCD), 2005, Issue 3, pp. 145-152, i0001 (9 pages)
In this paper the application of image enhancement techniques to potential field data is briefly described and two improved enhancement methods are introduced. One method is derived from the histogram equalization technique and automatically determines the color spectra of geophysical maps. Colors can be properly distributed, and visual effects and resolution can be enhanced by the method. The other method is based on the modified Radon transform and gradient calculation and is used to detect and enhance linear features in gravity and magnetic images. The method facilitates the detection of line segments in the transform domain. Tests with synthetic images and real data show the methods to be effective in feature enhancement.
Keywords: image enhancement; histogram equalization; Radon transform; potential field data
Infrared Image Real-time Enhancement Based on DSP (Cited by 2)
12
Authors: DAI Shao-sheng, YUAN Xiang-hui, XUE Lian. Semiconductor Photonics and Technology (CAS), 2004, Issue 1, pp. 58-61 (4 pages)
Since real-time image processing requires a vast amount of computation and high-speed hardware, it is difficult to implement with a general microcomputer system. In order to solve this problem, a powerful digital signal processing (DSP) hardware system that is able to meet the needs of real-time image processing is proposed. There are many approaches to enhancing infrared images, but only histogram equalization is discussed here because it is the most common and effective one. On the basis of the histogram equalization principle, the specific procedures implemented on the DSP are shown, and finally the experimental results are given.
Keywords: DSP; infrared image; histogram equalization
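The core histogram-equalization lookup table that such a DSP implementation maps to fixed-point arithmetic can be written in a few lines; the sketch below shows only the floating-point principle, not the DSP code itself.

```python
import numpy as np

def equalize(gray):
    """Classic histogram equalization for an 8-bit image via a CDF lookup table."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                    # first occupied bin
    scale = max(gray.size - cdf_min, 1)          # guard against a constant image
    lut = np.clip(np.round((cdf - cdf_min) * 255.0 / scale), 0, 255).astype(np.uint8)
    return lut[gray]                              # remap every pixel through the table
```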
Comparative analysis of different methods for image enhancement (Cited by 4)
13
Authors: 吴笑峰, 胡仕刚, 赵瑾, 李志明, 李劲, 唐志军, 席在芳. Journal of Central South University (SCIE, EI, CAS), 2014, Issue 12, pp. 4563-4570 (8 pages)
Image enhancement technology plays a very important role in improving image quality in image processing. By selectively enhancing some information and restraining other information, it can improve the visual effect of an image. The objective of this work is to apply image enhancement to gray-scale images using different techniques. After the fundamental methods of image enhancement are demonstrated, image enhancement algorithms based on the space and frequency domains are systematically investigated and compared, and the advantages and defects of these algorithms are analyzed. Algorithms for wavelet-based image enhancement are also deduced and generalized. Wavelet transform modulus maxima (WTMM) is a method for detecting the fractal dimension of a signal and is well suited to image enhancement. The image techniques are compared using the mean (μ), standard deviation (σ), mean square error (MSE), and peak signal-to-noise ratio (PSNR). A group of experimental results demonstrates that the image enhancement algorithm based on the wavelet transform is effective for image de-noising and enhancement, and that the wavelet transform modulus maxima method is one of the best methods for image enhancement.
Keywords: image enhancement; wavelet transform; histogram equalization; unsharp masking (UM); modulus maximum; threshold
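For reference, the two most common of the comparison metrics mentioned above, MSE and PSNR, can be computed as in the short sketch below (8-bit images assumed).

```python
import numpy as np

def mse_psnr(original, enhanced, peak=255.0):
    """Mean square error and peak signal-to-noise ratio between two same-sized images."""
    err = original.astype(np.float64) - enhanced.astype(np.float64)
    mse = float(np.mean(err ** 2))
    psnr = float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    return mse, psnr
```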
Alzheimer’s Disease Stage Classification Using a Deep Transfer Learning and Sparse Auto Encoder Method (Cited by 1)
14
Authors: Deepthi K. Oommen, J. Arunnehru. Computers, Materials & Continua (SCIE, EI), 2023, Issue 7, pp. 793-811 (19 pages)
Alzheimer’s Disease (AD) is a progressive neurological disease, and early diagnosis of this illness using conventional methods is very challenging. Deep Learning (DL) is one of the finest solutions for improving diagnostic procedures’ performance and forecast accuracy. The disease’s widespread distribution and elevated mortality rate demonstrate its significance in both the older-onset and younger-onset age groups. In light of research investigations, it is vital to consider age as one of the key criteria when choosing the subjects, since younger-onset subjects are more vulnerable to rapid decline than older-onset ones. The proposed investigation concentrated on the younger onset. The research used deep learning models and neuroimages to automatically diagnose and categorize the disease at its early stages. The proposed work is executed in three steps. The 3D input images first undergo image pre-processing using Wiener filtering and Contrast Limited Adaptive Histogram Equalization (CLAHE). Transfer Learning (TL) models then extract features, which are subsequently compressed using cascaded Auto Encoders (AE). The final phase entails using a Deep Neural Network (DNN) to classify the stages of AD. The model was trained and tested to classify the five stages of AD. The ensemble of ResNet-18 and a sparse autoencoder with a DNN achieved an accuracy of 98.54%. The method is compared to state-of-the-art approaches to validate its efficacy and performance.
Keywords: Alzheimer’s disease; mild cognitive impairment; Wiener filter; contrast limited adaptive histogram equalization; transfer learning; sparse autoencoder; deep neural network
Improved Model for Genetic Algorithm-Based Accurate Lung Cancer Segmentation and Classification
15
Authors: K. Jagadeesh, A. Rajendran. Computer Systems Science & Engineering (SCIE, EI), 2023, Issue 5, pp. 2017-2032 (16 pages)
Lung cancer is one of the hazardous diseases that have to be detected in earlier stages to provide better treatment and clinical support to patients. For lung cancer diagnosis, computed tomography (CT) scan images have to be processed with image processing techniques, and an effective classification process is required for appropriate cancer diagnosis. In the present scenario of medical data processing, the cancer detection process is very time-consuming and demands precision. To address this, this paper develops an improved model for lung cancer segmentation and classification using a genetic algorithm. In the model, the input CT images are pre-processed with adaptive median and average filters. The filtered images are enhanced with histogram equalization, and the ROI (Regions of Interest) cancer tissues are segmented using the Guaranteed Convergence Particle Swarm Optimization technique. For classification of the images, Probabilistic Neural Network (PNN)-based classification is used. The experimentation is carried out by simulating the model in MATLAB with input CT lung images from the LIDC-IDRI (Lung Image Database Consortium-Image Database Resource Initiative) benchmark dataset. The results confirm that the proposed model outperforms existing methods, providing accurate classification results with minimal processing time.
Keywords: cancer diagnosis; segmentation; enhancement; histogram equalization; probabilistic neural networks (PNN); classification
A Fast Algorithm for Improving the Visual Distance in Fog
16
Authors: YANG Wei, XIAO Zhi-tao, YU Jian, YAN Zhi-jie. Semiconductor Photonics and Technology (CAS), 2009, Issue 4, pp. 241-246 (6 pages)
Images captured outdoors usually degrade because of bad weather conditions, among which fog, one of the most widespread phenomena, affects video quality greatly. The physical features of fog make the video blurred and shorten the visible distance, seriously impairing the reliability of the video system. In order to satisfy the requirement of real-time image processing, normal distribution curve fitting is used to fit the histogram of the sky part, and the region growing method is used to segment the sky region. For the non-sky part, a self-adaptive interpolation method is adopted to equalize the histogram and enhance the contrast of the images. Experimental results show that the method works well and does not cause block effects.
Keywords: fog image; interpolation; region growing; histogram equalization; fast algorithm; normal distribution
Image Preprocessing Methods Used in Meteorological Measurement of the Temperature Testing System
17
Authors: Jiajia Zhang, Yu Liu, He Wang. Journal of Geoscience and Environment Protection, 2016, Issue 11, pp. 1-5 (5 pages)
The images produced by the automatic temperature testing system used in meteorological measurement are often defective, suffering from problems such as noise and insufficient contrast. To address these problems, an image pretreatment scheme was put forward: median filtering, histogram equalization, and image binarization were used to remove noise and enhance the images. Results showed that the feature points were clear and accurate after processing. This simulation experiment prepares the images for the subsequent recognition process.
Keywords: temperature testing system; thermometer image; image pretreatment; median filter; histogram equalization; image binarization
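A minimal sketch of the three pretreatment stages listed above is shown below; the file name is a placeholder, and the use of Otsu's method for the binarization threshold is an assumption, since the abstract does not state how the threshold is chosen.

```python
import cv2

# Sketch of the described pipeline: denoise, equalize, binarize.
img = cv2.imread("thermometer.png", cv2.IMREAD_GRAYSCALE)   # placeholder path
denoised = cv2.medianBlur(img, 3)                            # median filtering removes speckle noise
equalized = cv2.equalizeHist(denoised)                       # histogram equalization boosts contrast
_, binary = cv2.threshold(equalized, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # assumed Otsu thresholding
```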
A Hybrid Deep Learning Multi-Class Classification Model for Alzheimer’s Disease Using Enhanced MRI Images
18
Authors: Ghadah Naif Alwakid. Computers, Materials & Continua, 2026, Issue 1, pp. 797-821 (25 pages)
Alzheimer’s Disease (AD) is a progressive neurodegenerative disorder that significantly affects cognitive function, making early and accurate diagnosis essential. Traditional Deep Learning (DL)-based approaches often struggle with low-contrast MRI images, class imbalance, and suboptimal feature extraction. This paper develops a hybrid DL system that unites MobileNetV2 with adaptive classification methods to boost Alzheimer’s diagnosis from MRI scans. Image enhancement is done using Contrast-Limited Adaptive Histogram Equalization (CLAHE) and Enhanced Super-Resolution Generative Adversarial Networks (ESRGAN). A classification robustness enhancement system integrates class weighting techniques and a Matthews Correlation Coefficient (MCC)-based evaluation method into the design. The trained and validated model gives a 98.88% accuracy rate and a 0.9614 MCC score. We also performed a 10-fold cross-validation experiment with an average accuracy of 96.52% (±1.51), a loss of 0.1671, and an MCC score of 0.9429 across folds. The proposed framework outperforms state-of-the-art models with a 98% weighted F1-score while decreasing misdiagnosis results for every AD stage. The model demonstrates apparent separation abilities between AD progression stages according to the results of the confusion matrix analysis. These results validate the effectiveness of hybrid DL models with adaptive preprocessing for early and reliable Alzheimer’s diagnosis, contributing to improved computer-aided diagnosis (CAD) systems in clinical practice.
Keywords: Alzheimer’s disease; deep learning; MRI images; MobileNetV2; contrast-limited adaptive histogram equalization (CLAHE); enhanced super-resolution generative adversarial networks (ESRGAN); multi-class classification
Balanced Quantization: An Effective and Efficient Approach to Quantized Neural Networks (Cited by 4)
19
Authors: Shu-Chang Zhou, Yu-Zhi Wang, He Wen, Qin-Yao He, Yu-Heng Zou. Journal of Computer Science & Technology (SCIE, EI, CSCD), 2017, Issue 4, pp. 667-682 (16 pages)
Quantized neural networks (QNNs), which use low-bitwidth numbers for representing parameters and performing computations, have been proposed to reduce the computation complexity, storage size, and memory usage. In QNNs, parameters and activations are uniformly quantized, such that the multiplications and additions can be accelerated by bitwise operations. However, the distributions of parameters in neural networks are often imbalanced, such that a uniform quantization determined from extremal values may underutilize the available bitwidth. In this paper, we propose a novel quantization method that ensures balanced distributions of quantized values. Our method first recursively partitions the parameters by percentiles into balanced bins and then applies uniform quantization. We also introduce computationally cheaper approximations of percentiles to reduce the computation overhead introduced. Overall, our method improves the prediction accuracies of QNNs without introducing extra computation during inference, has negligible impact on training speed, and is applicable to both convolutional neural networks and recurrent neural networks. Experiments on standard datasets including ImageNet and Penn Treebank confirm the effectiveness of our method. On ImageNet, the top-5 error rate of our 4-bit quantized GoogLeNet model is 12.7%, which is superior to the state of the art for QNNs.
Keywords: quantized neural network; percentile; histogram equalization; uniform quantization
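A sketch of the percentile-balancing idea is given below: weights are mapped through their empirical CDF (a histogram-equalization step that balances the bins) and then uniformly quantized. This illustrates the idea described above under assumed details; it is not the paper's exact training-time procedure or its percentile approximations.

```python
import numpy as np

def balanced_quantize(weights, bits=4):
    """Percentile-balanced quantization sketch: roughly equal counts per bin."""
    flat = np.sort(weights.ravel())
    levels = 2 ** bits
    # Empirical CDF of each weight: fraction of weights not larger than it.
    cdf = np.searchsorted(flat, weights, side="right") / flat.size
    bins = np.floor(cdf * levels).clip(0, levels - 1)      # balanced bin index per weight
    return bins.astype(np.int32)
```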
Ultrasound liver tumor segmentation using adaptively regularized kernel-based fuzzy C means with enhanced level set algorithm
20
Authors: Deepak S. Uplaonkar, Virupakshappa, Nagabhushan Patil. International Journal of Intelligent Computing and Cybernetics (EI), 2022, Issue 3, pp. 438-453 (16 pages)
Purpose: The purpose of this study is to develop a hybrid algorithm for segmenting tumors from ultrasound images of the liver. Design/methodology/approach: After collecting the ultrasound images, the contrast-limited adaptive histogram equalization (CLAHE) approach is applied as preprocessing in order to enhance the visual quality of the images, which helps in better segmentation. Then, adaptively regularized kernel-based fuzzy C means (ARKFCM) is used to segment the tumor from the enhanced image, along with a local ternary pattern combined with selective level set approaches. Findings: The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images at a lower computation cost. The proposed segmentation algorithm is compared with existing algorithms and ground truth values in terms of Jaccard coefficient, Dice coefficient, precision, Matthews correlation coefficient, F-score, and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and a 92.17% F-score, which is better than the existing algorithms. Practical implications: From the experimental analysis, the proposed ARKFCM with the enhanced level set algorithm obtained better performance in ultrasound liver tumor segmentation than the graph-based algorithm, showing a 3.11% improvement in Dice coefficient. Originality/value: The image preprocessing is carried out using the CLAHE algorithm. The preprocessed image is segmented by employing a selective level set model and a local ternary pattern in the ARKFCM algorithm. The proposed algorithm has advantages such as independence from clustering parameters, robustness in preserving image details, and optimality in finding the threshold value, which effectively reduces the computational cost.
Keywords: adaptively regularized kernel-based fuzzy C means; contrast-limited adaptive histogram equalization; level set algorithm; liver tumor segmentation; local ternary pattern