Funding: This research work was funded by Prince Sattam bin Abdulaziz University through project number (PSAU/2024/03/31540).
Abstract: Melanoma is the deadliest form of skin cancer, with an increasing incidence in recent years. Over the past decade, researchers have recognized the potential of computer vision algorithms to aid in the early diagnosis of melanoma. As a result, a number of works have been dedicated to developing efficient machine learning models for its accurate classification; still, there remains a large window for improvement, necessitating further research effort. Limitations of existing methods include low accuracy and high computational complexity, which may be addressed by identifying and selecting the most discriminative features. In this work, we apply transfer learning to a NASNet-Mobile CNN model to extract deep features and augment it with a novel nature-inspired feature selection algorithm called Mutated Binary Artificial Bee Colony. The selected features are fed to multiple classifiers for final classification. We use the PH2, ISIC-2016, and HAM10000 datasets for experimentation, supported by Monte Carlo simulations to thoroughly evaluate the proposed feature selection mechanism. We carry out a detailed comparison with various benchmark works in terms of convergence rate, accuracy histogram, and reduction-percentage histogram, where our method reports 99.15% (2-class) and 97.5% (3-class) accuracy on the PH2 dataset, and 96.12% and 94.1% accuracy on the other two datasets, respectively, while using a minimal number of features.
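A minimal sketch of the pipeline described above: deep-feature extraction with an ImageNet-pretrained NASNet-Mobile backbone, followed by a binary feature-selection search in the spirit of a mutated Artificial Bee Colony. This is not the authors' implementation; the mutation operator, fitness function, classifier, and all hyperparameters below are illustrative assumptions.

```python
# Sketch: NASNet-Mobile deep features + toy mutated binary bee-colony selection.
import numpy as np
import tensorflow as tf
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Pre-trained NASNet-Mobile used as a fixed feature extractor (transfer learning).
backbone = tf.keras.applications.NASNetMobile(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array (N, 224, 224, 3); returns (N, n_features) deep features."""
    x = tf.keras.applications.nasnet.preprocess_input(images)
    return backbone.predict(x, verbose=0)

def fitness(mask, X, y):
    """Cross-validated SVM accuracy on the selected feature subset (illustrative)."""
    if mask.sum() == 0:
        return 0.0
    return cross_val_score(SVC(kernel="rbf"), X[:, mask.astype(bool)], y, cv=3).mean()

def mutated_binary_abc(X, y, n_bees=10, iters=20, mut_rate=0.05, seed=0):
    """Toy binary ABC-style loop: random bit-flip mutation around the best food source."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    best_mask = rng.integers(0, 2, d)
    best_fit = fitness(best_mask, X, y)
    for _ in range(iters):
        for _ in range(n_bees):
            cand = best_mask.copy()
            flips = rng.random(d) < mut_rate      # mutated neighbourhood search
            cand[flips] ^= 1
            f = fitness(cand, X, y)
            if f > best_fit:
                best_mask, best_fit = cand, f
    return best_mask, best_fit
```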
Funding: This work was partially supported by the NIH/NCI, No. CA206171.
Abstract: Tissue texture reflects the spatial distribution of contrasts in image voxel gray levels, i.e., the tissue heterogeneity, and has been recognized as an important biomarker in various clinical tasks. Spectral computed tomography (CT) is believed to enrich tissue texture by providing voxel contrast images acquired at different X-ray energies. Therefore, this paper addresses two related issues for the clinical use of spectral CT, especially photon-counting CT (PCCT): (1) texture enhancement through spectral CT image reconstruction, and (2) spectral-energy-enriched tissue texture for improved lesion classification. For issue (1), we recently proposed a tissue-specific texture prior, in addition to a low-rank prior, for the individual energy-channel low-count image reconstruction problem in PCCT under Bayesian theory. Reconstruction results showed that the proposed method outperforms existing methods, including total variation (TV), low-rank TV, and tensor dictionary learning, in terms of both preserving texture features and suppressing image noise. For issue (2), this paper investigates three models to incorporate the texture enriched by PCCT, corresponding to three types of input: the spectral images, the co-occurrence matrices (CMs) extracted from the spectral images, and the Haralick features (HFs) extracted from the CMs. Studies were performed on simulated photon-counting data generated by applying an attenuation-energy response curve to conventional CT images from energy-integrating detectors. Classification results showed that the spectral-CT-enriched texture model can improve the area under the receiver operating characteristic curve (AUC) by 7.3%, 0.42%, and 3.0% for the spectral images, CMs, and HFs, respectively, on the five-energy spectral data compared with the original single-energy data. The CM and HF inputs achieved the best AUCs of 0.934 and 0.927, respectively. This texture-themed study shows that incorporating clinically important prior information, e.g., tissue texture, into medical imaging, from the upstream image reconstruction to the downstream diagnosis, can benefit clinical tasks.
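A hedged sketch of the CM/HF inputs mentioned above: gray-level co-occurrence matrices and Haralick-style texture properties computed per energy channel and concatenated into one feature vector. It uses scikit-image (the graycomatrix/graycoprops names of version 0.19+) and is not the authors' pipeline; quantization level, distances, and angles are assumptions.

```python
# Sketch: per-channel co-occurrence matrices (CMs) and Haralick-type features (HFs).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def spectral_texture_features(spectral_channels, levels=64):
    """spectral_channels: iterable of 2-D arrays, one image per energy channel."""
    feats = []
    for img in spectral_channels:
        # Quantize the channel image to `levels` gray levels.
        span = float(img.max() - img.min()) + 1e-8
        q = np.floor((img - img.min()) / span * (levels - 1)).astype(np.uint8)
        # Co-occurrence matrices over 4 directions at distance 1.
        cm = graycomatrix(q, distances=[1],
                          angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                          levels=levels, symmetric=True, normed=True)
        # A few Haralick-type properties per channel.
        for prop in ("contrast", "correlation", "energy", "homogeneity"):
            feats.extend(graycoprops(cm, prop).ravel())
    return np.asarray(feats)   # feature vector fed to a lesion classifier
```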
Abstract: The main cause of skin cancer is the ultraviolet radiation of the sun. Skin cancer spreads quickly to other body parts, so early diagnosis is required to decrease the mortality rate. In this study, an automatic system for Skin Lesion Classification (SLC) using Non-Subsampled Shearlet Transform (NSST) based energy features and a Support Vector Machine (SVM) classifier is proposed. At first, the NSST is used to decompose the input skin lesion images with different numbers of directions, such as 2, 4, 8, and 16. Energy features are extracted from the NSST sub-bands and stored in the feature database for training. The SVM classifier is used for the classification of skin lesion images. The dermoscopic skin images are obtained from the PH2 database, which comprises 200 dermoscopic color images with melanocytic lesions. The performance of the SLC system is evaluated using the confusion matrix and Receiver Operating Characteristic (ROC) curves. The SLC system achieves 96% classification accuracy using NSST energy features obtained from the 3rd decomposition level with 8 directions.
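A minimal sketch of the sub-band energy feature and SVM stage described above. No public NSST implementation is assumed here; as a clearly labeled stand-in, PyWavelets' 2-D wavelet decomposition produces the multi-scale sub-bands, which is not the shearlet transform the paper uses. The energy computation and classifier follow the description in the abstract.

```python
# Sketch: sub-band energy features (wavelet stand-in for NSST) + SVM classifier.
import numpy as np
import pywt
from sklearn.svm import SVC

def subband_energies(img, level=3, wavelet="db2"):
    """Mean squared coefficient (energy) of every sub-band of a 2-D decomposition."""
    coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    return np.array([np.mean(np.square(b)) for b in bands])

def build_feature_matrix(gray_images, level=3):
    """gray_images: iterable of 2-D grayscale lesion images."""
    return np.vstack([subband_energies(img, level=level) for img in gray_images])

# Usage sketch: X = build_feature_matrix(gray_lesion_images)
#               clf = SVC(kernel="rbf").fit(X, labels)
```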
Funding: The authors extend their appreciation to the Deanship of Scientific Research at King Saud University for funding this work through Research Group No. (RG-1438-034) and co-authors K.A. and M.A.
Abstract: Melanoma, due to its high mortality rate, is considered one of the most pernicious types of skin cancer, mostly affecting white populations. It has been reported repeatedly, and is now widely accepted, that early detection of melanoma increases the chances of the subject's survival. Computer-aided diagnostic systems help experts diagnose skin lesions at earlier stages using machine learning techniques. In this work, we propose a framework that accurately segments, and later classifies, the lesion using improved image segmentation and fusion methods. The proposed technique takes an image and passes it through two methods simultaneously: one is a weighted visual saliency-based method, and the second is an improved HDCT-based saliency estimation. The resulting saliency maps are then fused using the proposed image fusion technique to generate a localized lesion region. The resulting binary image is mapped back to the RGB image and fed into an Inception-ResNet-V2 pre-trained model, fine-tuned by applying transfer learning. The simulation results show improved performance compared to several existing methods.
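A hedged sketch of the final classification stage only: the fused binary lesion mask is applied to the RGB image, and an ImageNet-pretrained Inception-ResNet-V2 backbone is fine-tuned via transfer learning. Input size, head layers, and training settings are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: masked RGB input + Inception-ResNet-V2 transfer learning.
import tensorflow as tf

def masked_rgb(image, binary_mask):
    """Zero out non-lesion pixels; image (H, W, 3), binary_mask (H, W) in {0, 1}."""
    image = tf.convert_to_tensor(image, tf.float32)
    mask = tf.cast(binary_mask, tf.float32)[..., tf.newaxis]
    return image * mask

backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg")
backbone.trainable = False                     # freeze pretrained weights first

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),   # e.g. melanoma vs. benign
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(masked_images, labels, epochs=..., validation_split=0.1)
```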
Funding: Sponsored by the National Natural Science Foundation of China, Grant No. 62271302, and the Shanghai Municipal Natural Science Foundation, Grant No. 20ZR1423500.
Abstract: Large amounts of labeled data are usually needed to train deep neural networks in medical image studies, particularly for medical image classification. However, in the field of semi-supervised medical image analysis, labeled data are very scarce due to patient privacy concerns. For researchers, obtaining high-quality labeled images is exceedingly challenging because it involves manual annotation and clinical expertise. In addition, skin datasets are highly suitable for medical image classification studies due to the inter-class relationships and inter-class similarities of skin lesions. In this paper, we propose a model called Coalition Sample Relation Consistency (CSRC), a consistency-based method that leverages Canonical Correlation Analysis (CCA) to capture the intrinsic relationships between samples. Whereas traditional consistency-based models focus only on the consistency of predictions, we additionally explore the similarity between features using CCA. We enforce feature relation consistency on top of traditional models, encouraging the model to learn more meaningful information from unlabeled data. Finally, because the standard cross-entropy loss is less suitable as the supervised loss when training on imbalanced datasets (i.e., ISIC 2017 and ISIC 2018), we improve the supervised loss to achieve better classification accuracy. Our study shows that this model performs better than many semi-supervised methods.
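A hedged sketch of a feature-relation consistency term in the spirit described above: two perturbed forward passes produce feature batches for the same unlabeled samples, CCA projects them into a shared space, and the consistency measure penalizes disagreement between the resulting sample-relation (similarity) matrices. This illustrates the idea only and is not the authors' exact CSRC formulation; batch size and the number of canonical components are assumptions.

```python
# Sketch: CCA-based sample-relation consistency between two feature views.
import numpy as np
from sklearn.cross_decomposition import CCA

def relation_matrix(Z):
    """Cosine-similarity matrix between samples in a feature batch (N, D)."""
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-8)
    return Zn @ Zn.T

def feature_relation_consistency(F1, F2, n_components=8):
    """F1, F2: (N, D) features from two perturbed passes over the same batch
    (requires N and D >= n_components)."""
    cca = CCA(n_components=n_components).fit(F1, F2)
    U, V = cca.transform(F1, F2)          # canonical projections of both views
    R1, R2 = relation_matrix(U), relation_matrix(V)
    return np.mean((R1 - R2) ** 2)        # mean-squared relation discrepancy
```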
Abstract: Leaf disease recognition using image processing and deep learning techniques is currently a vibrant research area. Most studies have focused on recognizing diseases from images of whole leaves. This approach limits the resulting models' ability to estimate leaf disease severity or identify multiple anomalies occurring on the same leaf. Recent studies have demonstrated that classifying leaf diseases based on individual lesions greatly enhances disease recognition accuracy. In those studies, however, the lesions were laboriously cropped by hand. This study proposes a semi-automatic algorithm that facilitates the fast and efficient preparation of datasets of individual lesions and leaf image pixel maps to overcome this problem. These datasets were then used to train and test lesion classifier and semantic segmentation Convolutional Neural Network (CNN) models, respectively. We report that GoogLeNet's disease recognition accuracy improved by more than 15% when diseases were recognized from lesion images rather than from images of whole leaves. A CNN model that performs semantic segmentation of both the leaf and its lesions in one pass is also proposed in this paper. The proposed KijaniNet model achieved state-of-the-art segmentation performance, with mean Intersection over Union (mIoU) scores of 0.8448 and 0.6257 for the leaf and lesion pixel classes, respectively. In terms of mean boundary F1 score, the KijaniNet model attained 0.8241 and 0.7855 for the two pixel classes, respectively. Lastly, a fully automatic algorithm for leaf disease recognition from individual lesions is proposed. The algorithm cascades the semantic segmentation network with a GoogLeNet classifier for lesion-wise disease recognition. The proposed fully automatic algorithm outperforms competing methods in both segmentation and classification performance despite being trained on a small dataset.
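A short sketch of the mean Intersection-over-Union (mIoU) metric quoted above, computed per pixel class from predicted and ground-truth label maps. The class count and index assignment (e.g., background, leaf, lesion) are assumptions for illustration.

```python
# Sketch: per-class IoU averaged into the mIoU segmentation metric.
import numpy as np

def mean_iou(pred, gt, n_classes=3):
    """pred, gt: integer label maps of identical shape."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))   # average IoU over classes present in the image
```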