Abstract: Medical image classification is crucial in disease diagnosis, treatment planning, and clinical decision-making. We introduce a novel medical image classification approach that integrates Bayesian Random Semantic Data Augmentation (BSDA) with a Vision Mamba-based model for medical image classification (MedMamba), enhanced by residual connection blocks; we name the resulting model BSDA-Mamba. BSDA augments medical image data semantically, enhancing the model's generalization ability and classification performance. MedMamba, a deep learning-based state space model, excels at capturing long-range dependencies in medical images. By incorporating residual connections, BSDA-Mamba further improves feature extraction capabilities. Through comprehensive experiments on eight medical image datasets, we demonstrate that BSDA-Mamba outperforms existing models in accuracy, area under the curve, and F1-score. Our results highlight BSDA-Mamba's potential as a reliable tool for medical image analysis, particularly in handling diverse imaging modalities from X-rays to MRI. The open-sourcing of our model's code and datasets will facilitate the reproduction and extension of our work.
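A minimal PyTorch sketch of the residual-connection pattern described above, with a generic feature mixer standing in for the Mamba/state-space block; the MedMamba internals and the BSDA augmentation are not reproduced, and the module and variable names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Wrap any token-mixing block with an identity shortcut (pre-norm style)."""
    def __init__(self, block: nn.Module, dim: int):
        super().__init__()
        self.block = block
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # y = x + F(norm(x)): the shortcut preserves the input features and
        # eases gradient flow through deep stacks of blocks.
        return x + self.block(self.norm(x))

# Placeholder mixer standing in for a Mamba/SSM block.
mixer = nn.Sequential(nn.Linear(96, 96), nn.GELU(), nn.Linear(96, 96))
layer = ResidualBlock(mixer, dim=96)
tokens = torch.randn(2, 196, 96)  # (batch, image patches, channels)
assert layer(tokens).shape == tokens.shape
```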
Funding: Major Program of the National Natural Science Foundation of China (NSFC12292980, NSFC12292984); National Key R&D Program of China (2023YFA1009000, 2023YFA1009004, 2020YFA0712203, 2020YFA0712201); Major Program of the National Natural Science Foundation of China (NSFC12031016); Beijing Natural Science Foundation (BNSFZ210003); Department of Science, Technology and Information of the Ministry of Education (8091B042240).
Abstract: Gliomas have the highest mortality rate of all brain tumors. Correctly classifying the glioma risk period can help doctors make reasonable treatment plans and improve patients' survival rates. This paper proposes a hierarchical multi-scale attention feature fusion medical image classification network (HMAC-Net), which effectively combines global and local features. The network framework consists of three parallel layers: the global feature extraction layer, the local feature extraction layer, and the multi-scale feature fusion layer. A linear sparse attention mechanism is designed in the global feature extraction layer to reduce information redundancy. In the local feature extraction layer, a bilateral local attention mechanism is introduced to improve the extraction of relevant information between adjacent slices. In the multi-scale feature fusion layer, a channel fusion block combining a convolutional attention mechanism and a residual inverse multi-layer perceptron is proposed to prevent gradient disappearance and network degradation and to improve feature representation capability. A double-branch iterative multi-scale classification block is used to improve classification performance. On the brain glioma risk grading dataset, the results of the ablation and comparison experiments show that the proposed HMAC-Net performs best in both the qualitative analysis of heat maps and the quantitative analysis of evaluation indicators. On a skin cancer classification dataset, the generalization experiment results show that the proposed HMAC-Net generalizes well.
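As a rough illustration of fusing a global and a local feature map, the sketch below concatenates the two branches and re-weights channels with a squeeze-and-excitation style gate plus a residual add; the actual HMAC-Net channel fusion block (convolutional attention plus residual inverse MLP) is more elaborate, so this module is an assumption for illustration only.

```python
import torch
import torch.nn as nn

class SimpleChannelFusion(nn.Module):
    """Concatenate global/local features, then gate channels and keep a residual path."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, global_feat, local_feat):
        fused = self.project(torch.cat([global_feat, local_feat], dim=1))
        # The residual add keeps the un-gated signal, which helps against
        # gradient vanishing and network degradation.
        return fused + fused * self.gate(fused)

g = torch.randn(1, 64, 28, 28)  # global-branch features
l = torch.randn(1, 64, 28, 28)  # local-branch features
assert SimpleChannelFusion(64)(g, l).shape == g.shape
```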
Funding: This work was supported in part by the Natural Science Foundation of Shanghai (21ZR1403600); the National Natural Science Foundation of China (62176059); the Shanghai Municipal Science and Technology Major Project (2018SHZDZX01) and Zhang Jiang Laboratory; the Shanghai Sailing Program (21YF1402800); the Shanghai Municipal Science and Technology Project (20JC1419500); and the Shanghai Center for Brain Science and Brain-Inspired Technology.
Abstract: The performance of medical image classification has been enhanced by deep convolutional neural networks (CNNs), which are typically trained with cross-entropy (CE) loss. However, when the labels have an intrinsic ordinal property, e.g., the development from benign to malignant tumor, CE loss cannot take such ordinal information into account to allow for better generalization. To improve model generalization with ordinal information, we propose a novel meta ordinal regression forest (MORF) method for medical image classification with ordinal labels, which learns the ordinal relationship through the combination of a convolutional neural network and a differential forest in a meta-learning framework. The merits of the proposed MORF come from the following two components: a tree-wise weighting net (TWW-Net) and a grouped feature selection (GFS) module. First, the TWW-Net assigns each tree in the forest a specific weight that is mapped from the classification loss of the corresponding tree. Hence, all the trees possess varying weights, which helps alleviate the tree-wise prediction variance. Second, the GFS module enables a dynamic forest rather than the fixed one used previously, allowing for random feature perturbation. During training, we alternately optimize the parameters of the CNN backbone and the TWW-Net in the meta-learning framework by calculating the Hessian matrix. Experimental results on two medical image classification datasets with ordinal labels, i.e., the LIDC-IDRI and Breast Ultrasound datasets, demonstrate the superior performance of our MORF method over existing state-of-the-art methods.
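The tree-wise weighting idea can be sketched as follows: each tree's own classification loss is mapped by a small network to a weight, and the forest prediction is the weighted combination. This is a toy rendering under assumed shapes; the differentiable-forest and meta-learning (Hessian-based) parts of MORF are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TreeWiseWeighting(nn.Module):
    """Map each tree's loss to a weight and combine tree predictions accordingly."""
    def __init__(self, hidden: int = 16):
        super().__init__()
        self.weight_net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, tree_probs, targets):
        # tree_probs: (num_trees, batch, num_classes) class probabilities per tree.
        per_tree_loss = torch.stack(
            [F.nll_loss(p.clamp_min(1e-8).log(), targets) for p in tree_probs]
        )                                              # (num_trees,)
        w = self.weight_net(per_tree_loss.unsqueeze(1)).squeeze(1)
        w = torch.softmax(w, dim=0)                    # normalized tree weights
        ensemble = torch.einsum('t,tbc->bc', w, tree_probs)
        return ensemble, w

num_trees, batch, num_classes = 5, 8, 3
probs = torch.softmax(torch.randn(num_trees, batch, num_classes), dim=-1)
labels = torch.randint(0, num_classes, (batch,))
ensemble, weights = TreeWiseWeighting()(probs, labels)
```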
Abstract: Medical image classification has played an important role in the medical field, and related methods based on deep learning have become an important and powerful technique in medical image classification. In this article, we propose a simplified-inception-module-based Hadamard attention (SI + HA) mechanism for medical image classification. Specifically, we propose a new attention mechanism: the Hadamard attention mechanism. It improves the accuracy of medical image classification without greatly increasing the complexity of the model. Meanwhile, we adopt a simplified inception module to improve the utilization of parameters. We use two medical image datasets to demonstrate the superiority of the proposed method. On the BreakHis dataset, the AUCs of our method reach 98.74%, 98.38%, 98.61% and 97.67% under the magnification factors of 40×, 100×, 200× and 400×, respectively. The accuracies reach 95.67%, 94.17%, 94.53% and 94.12% under the same magnification factors. On the KIMIA Path 960 dataset, the AUC and accuracy of our method reach 99.91% and 99.03%, respectively. The method is superior to currently popular methods and can significantly improve the effectiveness of medical image classification.
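The abstract does not spell out the Hadamard attention formulation; a plausible minimal reading, sketched below as an assumption, is a learned gate applied to the feature map by an element-wise (Hadamard) product, which adds few parameters.

```python
import torch
import torch.nn as nn

class HadamardAttention(nn.Module):
    """Element-wise gating: attention map and features combined by Hadamard product."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, x):
        return x * self.gate(x)  # Hadamard (element-wise) product

x = torch.randn(2, 32, 56, 56)
assert HadamardAttention(32)(x).shape == x.shape
```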
Funding: Supported by the Joint Fund of the Ministry of Education for Equipment Pre-research (No. 8091B0203) and the National Key Research and Development Program of China (No. 2020YFC2008700).
Abstract: Computer-aided diagnosis (CAD) can detect tuberculosis (TB) cases, providing radiologists with more accurate and efficient diagnostic solutions. The varied noise in TB chest X-ray (CXR) images is a major challenge in this classification task. This study proposes a high-performance model for TB CXR image detection, named the multi-scale input mirror network (MIM-Net), which exploits CXR image symmetry and consists of a multi-scale input feature extraction network and a mirror loss. The multi-scale image input enhances feature extraction, while the mirror loss improves network performance through self-supervision. We used a publicly available TB CXR image classification dataset to evaluate the proposed method via 5-fold cross-validation, achieving accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve (AUC) of 99.67%, 100%, 99.60%, 99.80%, 100%, and 0.9999, respectively. Compared to other models, MIM-Net performed best on all metrics. Therefore, the proposed MIM-Net can effectively help the network learn more features and can be used to detect TB in CXR images, thus assisting doctors in diagnosis.
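A minimal sketch of a mirror-style consistency loss that exploits the left-right symmetry of chest X-rays: predictions for an image and its horizontal flip are pushed to agree. The exact MIM-Net formulation may differ; this is self-supervision in the spirit described above.

```python
import torch
import torch.nn.functional as F

def mirror_consistency_loss(model, images):
    """Encourage similar predictions for a CXR batch and its horizontally flipped copy."""
    logits = model(images)
    logits_flipped = model(torch.flip(images, dims=[-1]))  # flip along the width axis
    return F.mse_loss(torch.softmax(logits, dim=1),
                      torch.softmax(logits_flipped, dim=1))

# Typical use: total_loss = cross_entropy + lambda_mirror * mirror_consistency_loss(net, batch)
```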
Funding: Funded by Taif University, Saudi Arabia. The author would like to acknowledge the Deanship of Graduate Studies and Scientific Research, Taif University, for funding this work.
Abstract: The evolving field of Alzheimer's disease (AD) diagnosis has greatly benefited from deep learning models for analyzing brain magnetic resonance (MR) images. This study introduces Dynamic GradNet, a novel deep learning model designed to increase diagnostic accuracy and interpretability for multiclass AD classification. Initially, four state-of-the-art convolutional neural network (CNN) architectures, the self-regulated network (RegNet), residual network (ResNet), densely connected convolutional network (DenseNet), and efficient network (EfficientNet), were comprehensively compared via a unified preprocessing pipeline to ensure a fair evaluation. Among these models, EfficientNet consistently demonstrated superior performance in terms of accuracy, precision, recall, and F1 score. As a result, EfficientNet was selected as the foundation for implementing Dynamic GradNet. Dynamic GradNet incorporates gradient-weighted class activation mapping (Grad-CAM) into the training process, facilitating dynamic adjustments that focus on critical brain regions associated with early dementia detection. These adjustments are particularly effective in identifying the subtle changes associated with very mild dementia, enabling early diagnosis and intervention. The model was evaluated on the OASIS dataset, which contains more than 80,000 brain MR images categorized into four distinct stages of AD progression. The proposed model outperformed the baseline architectures, achieving remarkable generalizability across all stages. This finding was especially evident in early-stage dementia detection, where Dynamic GradNet significantly reduced false positives and enhanced classification metrics. These findings highlight the potential of Dynamic GradNet as a robust and scalable approach for AD diagnosis, providing a promising alternative to traditional attention-based models. The model's ability to dynamically adjust spatial focus offers a powerful tool for artificial intelligence (AI)-assisted precision medicine, particularly in the early detection of neurodegenerative diseases.
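For reference, a plain Grad-CAM computation is sketched below: the target layer's activations are weighted by the spatially pooled gradients of the class score. How Dynamic GradNet feeds these maps back into training is not reproduced; the hook-based helper and the `cnn.features[-1]` usage hint are assumed illustrations.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, target_layer, images, class_idx):
    """Return normalized Grad-CAM heatmaps of shape (batch, H, W) for class_idx."""
    acts, grads = [], []
    h_fwd = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h_bwd = target_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    score = model(images)[torch.arange(images.size(0)), class_idx].sum()
    model.zero_grad()
    score.backward()
    h_fwd.remove(); h_bwd.remove()
    a, g = acts[0], grads[0]                        # both (B, C, H, W)
    weights = g.mean(dim=(2, 3), keepdim=True)      # pooled gradients per channel
    cam = F.relu((weights * a).sum(dim=1))          # weighted activation map
    return cam / (cam.amax(dim=(1, 2), keepdim=True) + 1e-8)

# e.g. heatmaps = grad_cam(cnn, cnn.features[-1], mri_batch, class_idx=1)
```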
Abstract: Alzheimer's disease (AD) is a significant challenge in modern healthcare, with early detection and accurate staging remaining critical priorities for effective intervention. While Deep Learning (DL) approaches have shown promise in AD diagnosis, existing methods often struggle with issues of precision, interpretability, and class imbalance. This study presents a novel framework that integrates DL with several eXplainable Artificial Intelligence (XAI) techniques, in particular attention mechanisms, Gradient-Weighted Class Activation Mapping (Grad-CAM), and Local Interpretable Model-Agnostic Explanations (LIME), to improve both model interpretability and feature selection. The study evaluates four different DL architectures (ResMLP, VGG16, Xception, and a Convolutional Neural Network (CNN) with an attention mechanism) on a balanced dataset of 3714 MRI brain scans from patients aged 70 and older. The proposed CNN with attention achieved superior performance, demonstrating 99.18% accuracy on the primary dataset and 96.64% accuracy on the ADNI dataset, significantly advancing the state of the art in AD classification. The framework's ability to provide comprehensive, interpretable results through multiple visualization techniques while maintaining high classification accuracy represents a significant advance in the computational diagnosis of AD, potentially enabling more accurate and earlier intervention in clinical settings.
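For the LIME component, the usual image-explanation workflow is shown below with a dummy classifier standing in for the trained CNN; the dataset-specific preprocessing and the model itself are assumptions, and only the generic lime_image API is used.

```python
import numpy as np
from lime import lime_image

rng = np.random.default_rng(0)
image = rng.random((128, 128, 3))  # stand-in for one preprocessed MRI slice

def classifier_fn(batch: np.ndarray) -> np.ndarray:
    # Placeholder for the trained CNN: must return per-class probabilities.
    scores = rng.random((batch.shape[0], 4))
    return scores / scores.sum(axis=1, keepdims=True)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, classifier_fn,
                                         top_labels=4, num_samples=200)
# Regions most supportive of the top predicted class.
highlighted, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
```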
Abstract: The major mortality factor relevant to the intestinal tract is the growth of tumorous cells (polyps) in various parts. More specifically, colonic polyps occur at a high rate and are recognized as a precursor of colon cancer growth. Endoscopy is the conventional technique for detecting colon polyps, and considerable research has shown that automated diagnosis of image regions that might contain polyps within the colon can help experts decrease the polyp miss rate. The automated diagnosis of polyps in a computer-aided diagnosis (CAD) method is implemented using statistical analysis. Nowadays, deep learning, particularly through convolutional neural networks (CNNs), is broadly employed to allow the extraction of representative features. This manuscript devises a new Northern Goshawk Optimization with Transfer Learning Model for Colonic Polyp Detection and Classification (NGOTL-CPDC). The NGOTL-CPDC technique aims to investigate endoscopic images for automated colonic polyp detection. To accomplish this, the NGOTL-CPDC technique comprises an adaptive bilateral filtering (ABF) technique as a noise removal and image preprocessing step. Besides, the NGOTL-CPDC model applies the Faster SqueezeNet model for feature extraction, in which the hyperparameter tuning process is performed using the NGO optimizer. Finally, the fuzzy Hopfield neural network (FHNN) method is employed for colonic polyp detection and classification. A widespread simulation analysis is carried out to verify the improved outcomes of the NGOTL-CPDC model. The comparison study demonstrates the enhancements of the NGOTL-CPDC model on the colonic polyp classification process on medical test images.
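The noise-removal step can be approximated with OpenCV's standard (non-adaptive) bilateral filter, shown below as a stand-in for the ABF technique; the filter parameters are assumptions, not the values used by NGOTL-CPDC.

```python
import cv2
import numpy as np

def smooth_endoscopy_frame(frame: np.ndarray) -> np.ndarray:
    """Edge-preserving smoothing: reduces noise while keeping polyp boundaries sharp."""
    # d: neighborhood diameter; sigmaColor/sigmaSpace bound how different in
    # intensity and in position pixels may be and still be averaged together.
    return cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)

frame = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # placeholder frame
clean = smooth_endoscopy_frame(frame)
```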
Funding: Supported in part by the National Natural Science Foundation of China (No. 62101136); the Shanghai Municipal Science and Technology Project (No. 20JC1419500); the Shanghai Sailing Program (No. 21YF1402800); the Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01), ZJLab, and the Shanghai Center for Brain Science and Brain-Inspired Technology; the National Key R&D Program of China (No. 2018YFB1305104); and the Natural Science Foundation of Shanghai (No. 21ZR1403600).
Abstract: Lung nodule classification based on low-dose computed tomography (LDCT) images has attracted major attention thanks to the reduced radiation dose and its potential for early diagnosis of lung cancer in LDCT-based lung cancer screening. However, LDCT images suffer from severe noise, which largely influences the performance of lung nodule classification. Current methods combining denoising and classification tasks typically require the corresponding normal-dose CT (NDCT) images as supervision for the denoising task, which is impractical in the context of clinical diagnosis using LDCT. To jointly train these two tasks in a unified framework without NDCT images, this paper introduces a novel self-supervised method, termed strided Noise2Neighbors (SN2N), for blind medical image denoising and lung nodule classification, where the supervision is generated from the noisy input images. More specifically, the proposed SN2N constructs the supervision information from the neighbors of each pixel for LDCT denoising, so NDCT images are no longer needed. The proposed SN2N method enables joint training of the LDCT denoising and lung nodule classification tasks by using a self-supervised loss for denoising and a cross-entropy loss for classification. Extensive experimental results on the Mayo LDCT dataset demonstrate that our SN2N achieves competitive performance compared with supervised learning methods that have paired NDCT images as supervision. Moreover, our results on the LIDC-IDRI dataset show that the joint training of LDCT denoising and lung nodule classification significantly improves the performance of LDCT-based lung nodule classification.
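A compact sketch of neighbor-based self-supervision in the spirit of SN2N: two sub-images are drawn from adjacent pixels of the noisy LDCT slice, and one supervises the denoised version of the other. The exact strided sampling scheme and any regularization terms used by SN2N may differ; names and shapes here are assumptions.

```python
import torch
import torch.nn.functional as F

def strided_neighbor_pair(noisy: torch.Tensor):
    """Split a noisy image (B, C, H, W) into two sub-images taken from adjacent pixels."""
    sub_a = noisy[..., 0::2, 0::2]   # even rows, even columns
    sub_b = noisy[..., 0::2, 1::2]   # even rows, odd columns (horizontal neighbors)
    return sub_a, sub_b

def neighbor_denoising_loss(denoiser, noisy):
    # Adjacent pixels share nearly the same clean signal but carry independent
    # noise, so one sub-image can supervise the denoised other sub-image.
    sub_a, sub_b = strided_neighbor_pair(noisy)
    return F.mse_loss(denoiser(sub_a), sub_b)

# Joint training idea: loss = neighbor_denoising_loss(denoiser, ldct_batch) + classification_ce
```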