BACKGROUND Computer-aided diagnosis(CAD)may assist endoscopists in identifying and classifying polyps during colonoscopy for detecting colorectal cancer.AIM To build a system using CAD to detect and classify polyps ba...BACKGROUND Computer-aided diagnosis(CAD)may assist endoscopists in identifying and classifying polyps during colonoscopy for detecting colorectal cancer.AIM To build a system using CAD to detect and classify polyps based on the Yamada classification.METHODS A total of 24045 polyp and 72367 nonpolyp images were obtained.We established a computer-aided detection and Yamada classification model based on the YOLOv7 neural network algorithm.Frame-based and image-based evaluation metrics were employed to assess the performance.RESULTS Computer-aided detection and Yamada classification screened polyps with a precision of 96.7%,a recall of 95.8%,and an F1-score of 96.2%,outperforming those of all groups of endoscopists.In regard to the Yamada classification of polyps,the CAD system displayed a precision of 82.3%,a recall of 78.5%,and an F1-score of 80.2%,outper-forming all levels of endoscopists.In addition,according to the image-based method,the CAD had an accuracy of 99.2%,a specificity of 99.5%,a sensitivity of 98.5%,a positive predictive value of 99.0%,a negative predictive value of 99.2%for polyp detection and an accuracy of 97.2%,a specificity of 98.4%,a sensitivity of 79.2%,a positive predictive value of 83.0%,and a negative predictive value of 98.4%for poly Yamada classification.CONCLUSION We developed a novel CAD system based on a deep neural network for polyp detection,and the Yamada classi-fication outperformed that of nonexpert endoscopists.This CAD system could help community-based hospitals enhance their effectiveness in polyp detection and classification.展开更多
Myocardial perfusion imaging(MPI),which uses single-photon emission computed tomography(SPECT),is a well-known estimating tool for medical diagnosis,employing the classification of images to show situations in coronar...Myocardial perfusion imaging(MPI),which uses single-photon emission computed tomography(SPECT),is a well-known estimating tool for medical diagnosis,employing the classification of images to show situations in coronary artery disease(CAD).The automatic classification of SPECT images for different techniques has achieved near-optimal accuracy when using convolutional neural networks(CNNs).This paper uses a SPECT classification framework with three steps:1)Image denoising,2)Attenuation correction,and 3)Image classification.Image denoising is done by a U-Net architecture that ensures effective image denoising.Attenuation correction is implemented by a convolution neural network model that can remove the attenuation that affects the feature extraction process of classification.Finally,a novel multi-scale diluted convolution(MSDC)network is proposed.It merges the features extracted in different scales and makes the model learn the features more efficiently.Three scales of filters with size 3×3 are used to extract features.All three steps are compared with state-of-the-art methods.The proposed denoising architecture ensures a high-quality image with the highest peak signal-to-noise ratio(PSNR)value of 39.7.The proposed classification method is compared with the five different CNN models,and the proposed method ensures better classification with an accuracy of 96%,precision of 87%,sensitivity of 87%,specificity of 89%,and F1-score of 87%.To demonstrate the importance of preprocessing,the classification model was analyzed without denoising and attenuation correction.展开更多
Melanoma is the deadliest form of skin cancer,with an increasing incidence over recent years.Over the past decade,researchers have recognized the potential of computer vision algorithms to aid in the early diagnosis o...Melanoma is the deadliest form of skin cancer,with an increasing incidence over recent years.Over the past decade,researchers have recognized the potential of computer vision algorithms to aid in the early diagnosis of melanoma.As a result,a number of works have been dedicated to developing efficient machine learning models for its accurate classification;still,there remains a large window for improvement necessitating further research efforts.Limitations of the existing methods include lower accuracy and high computational complexity,which may be addressed by identifying and selecting the most discriminative features to improve classification accuracy.In this work,we apply transfer learning to a Nasnet-Mobile CNN model to extract deep features and augment it with a novel nature-inspired feature selection algorithm called Mutated Binary Artificial Bee Colony.The selected features are fed to multiple classifiers for final classification.We use PH2,ISIC-2016,and HAM10000 datasets for experimentation,supported by Monte Carlo simulations for thoroughly evaluating the proposed feature selection mechanism.We carry out a detailed comparison with various benchmark works in terms of convergence rate,accuracy histogram,and reduction percentage histogram,where our method reports 99.15%(2-class)and 97.5%(3-class)accuracy on the PH^(2) dataset,while 96.12%and 94.1%accuracy for the other two datasets,respectively,against minimal features.展开更多
Purpose: Surgical templates produced by digital simulation and CAD/CAM allow for three-dimensional control of implant placement. However, due to clinical limitations, there are complications during the use of the temp...Purpose: Surgical templates produced by digital simulation and CAD/CAM allow for three-dimensional control of implant placement. However, due to clinical limitations, there are complications during the use of the template. The purpose of this study was to summarize the complications associated with the use of surgical templates for static computer-aided implant surgery. Methods: Complications were collected during the observation period, and then their implant sites were reanalyzed with simulation software. Results: There were 104 cases during the observation period, 5 cases had complications. Mechanical complications were observed in four cases, including three cases in which the frame of the template fractured during implant placement surgery and one case in which the sleeve fell off the surgical template. In one case, there was an error in the planned position. All cases were mandibular molar cases, and all cases of frame fracture were at the free end defect site. All cases had a Hounsfield unit of more than 700 at the implant site, and some of them had a significantly small jaw opening. Conclusion: Although the spread of CAD/CAM surgical templates has made it possible to avoid problems caused by the position of the implant, it has been difficult to avoid fractures in cases of mandibular free end defects with high Hounsfield unit.展开更多
AIM:To support probe-based confocal laser endomi-croscopy (pCLE) diagnosis by designing software for the automated classification of colonic polyps. METHODS:Intravenous fluorescein pCLE imaging of colorectal lesions w...AIM:To support probe-based confocal laser endomi-croscopy (pCLE) diagnosis by designing software for the automated classification of colonic polyps. METHODS:Intravenous fluorescein pCLE imaging of colorectal lesions was performed on patients under-going screening and surveillance colonoscopies, followed by polypectomies. All resected specimens were reviewed by a reference gastrointestinal pathologist blinded to pCLE information. Histopathology was used as the criterion standard for the differentiation between neoplastic and non-neoplastic lesions. The pCLE video sequences, recorded for each polyp, were analyzed off-line by 2 expert endoscopists who were blinded to the endoscopic characteristics and histopathology. These pCLE videos, along with their histopathology diagnosis, were used to train the automated classification software which is a content-based image retrieval technique followed by k-nearest neighbor classification. The performance of the off-line diagnosis of pCLE videos established by the 2 expert endoscopists was compared with that of automated pCLE software classification. All evaluations were performed using leave-one-patient- out cross-validation to avoid bias. RESULTS:Colorectal lesions (135) were imaged in 71 patients. Based on histopathology, 93 of these 135 lesions were neoplastic and 42 were non-neoplastic. The study found no statistical significance for the difference between the performance of automated pCLE software classification (accuracy 89.6%, sensitivity 92.5%, specificity 83.3%, using leave-one-patient-out cross-validation) and the performance of the off-line diagnosis of pCLE videos established by the 2 expert endoscopists (accuracy 89.6%, sensitivity 91.4%, specificity 85.7%). There was very low power (< 6%) to detect the observed differences. The 95% confidence intervals for equivalence testing were:-0.073 to 0.073 for accuracy, -0.068 to 0.089 for sensitivity and -0.18 to 0.13 for specificity. The classification software proposed in this study is not a "black box" but an informative tool based on the query by example model that produces, as intermediate results, visually similar annotated videos that are directly interpretable by the endoscopist. CONCLUSION:The proposed software for automated classification of pCLE videos of colonic polyps achieves high performance, comparable to that of off-line diagnosis of pCLE videos established by expert endoscopists.展开更多
The major mortality factor relevant to the intestinal tract is the growth of tumorous cells(polyps)in various parts.More specifically,colonic polyps have a high rate and are recognized as a precursor of colon cancer g...The major mortality factor relevant to the intestinal tract is the growth of tumorous cells(polyps)in various parts.More specifically,colonic polyps have a high rate and are recognized as a precursor of colon cancer growth.Endoscopy is the conventional technique for detecting colon polyps,and considerable research has proved that automated diagnosis of image regions that might have polyps within the colon might be used to help experts for decreasing the polyp miss rate.The automated diagnosis of polyps in a computer-aided diagnosis(CAD)method is implemented using statistical analysis.Nowadays,Deep Learning,particularly throughConvolution Neural networks(CNN),is broadly employed to allowthe extraction of representative features.This manuscript devises a new Northern Goshawk Optimization with Transfer Learning Model for Colonic Polyp Detection and Classification(NGOTL-CPDC)model.The NGOTL-CPDC technique aims to investigate endoscopic images for automated colonic polyp detection.To accomplish this,the NGOTL-CPDC technique comprises of adaptive bilateral filtering(ABF)technique as a noise removal process and image pre-processing step.Besides,the NGOTL-CPDC model applies the Faster SqueezeNet model for feature extraction purposes in which the hyperparameter tuning process is performed using the NGO optimizer.Finally,the fuzzy Hopfield neural network(FHNN)method can be employed for colonic poly detection and classification.A widespread simulation analysis is carried out to ensure the improved outcomes of the NGOTL-CPDC model.The comparison study demonstrates the enhancements of the NGOTL-CPDC model on the colonic polyp classification process on medical test images.展开更多
Osteosarcoma is one of the most widespread causes of bone cancer globally and has a high mortality rate.Early diagnosis may increase the chances of treatment and survival however the process is time-consuming(reliabil...Osteosarcoma is one of the most widespread causes of bone cancer globally and has a high mortality rate.Early diagnosis may increase the chances of treatment and survival however the process is time-consuming(reliability and complexity involved to extract the hand-crafted features)and largely depends on pathologists’experience.Convolutional Neural Network(CNN—an end-to-end model)is known to be an alternative to overcome the aforesaid problems.Therefore,this work proposes a compact CNN architecture that has been rigorously explored on a Small Osteosarcoma histology Image Dataaseet(a high-class imbalanced dataset).Though,during training,class-imbalanced data can negatively affect the performance of CNN.Therefore,an oversampling technique has been proposed to overcome the aforesaid issue and improve generalization performance.In this process,a hierarchical CNN model is designed,in which the former model is non-regularized(due to dense architecture)and the later one is regularized,specifically designed for small histopathology images.Moreover,the regularized model is integrated with CNN’s basic architecture to reduce overfitting.Experimental results demonstrate that oversampling might be an effective way to address the imbalanced class problem during training.The training and testing accuracies of the non-regularized CNN model are 98%&78%with an imbalanced dataset and 96%&81%with a balanced dataset,respectively.The regularized CNN model training and testing accuracies are 84%&75%for an imbalanced dataset and 87%&86%for a balanced dataset.展开更多
The general computer-aided design (CAD) software cannot meet the mould design requirement of the autoclave process for composites, because many parameters such as temperature and pressure should be considered in the...The general computer-aided design (CAD) software cannot meet the mould design requirement of the autoclave process for composites, because many parameters such as temperature and pressure should be considered in the mould design process, in addition to the material and geometry of the part. A framed-mould computer-aided design system (FMCAD) used in the autoclave moulding process is proposed in this paper. A function model of the software is presented, in which influence factors such as part structure, mould structure, and process parameters are considered; a design model of the software is established using object oriented (O-O) technology to integrate the stiffness calculation, temperature field calculation, and deformation field calculation of mould in the design, and in the design model, a hybrid model of mould based on calculation feature and form feature is presented to support those calculations. A prototype system is developed, in which a mould design process wizard is built to integrate the input information, calculation, analysis, data storage, display, and design results of mould design. Finally, three design examples are used to verify the prototype.展开更多
Diabetic Retinopathy(DR)is a significant blinding disease that poses serious threat to human vision rapidly.Classification and severity grading of DR are difficult processes to accomplish.Traditionally,it depends on o...Diabetic Retinopathy(DR)is a significant blinding disease that poses serious threat to human vision rapidly.Classification and severity grading of DR are difficult processes to accomplish.Traditionally,it depends on ophthalmoscopically-visible symptoms of growing severity,which is then ranked in a stepwise scale from no retinopathy to various levels of DR severity.This paper presents an ensemble of Orthogonal Learning Particle Swarm Optimization(OPSO)algorithm-based Convolutional Neural Network(CNN)Model EOPSO-CNN in order to perform DR detection and grading.The proposed EOPSO-CNN model involves three main processes such as preprocessing,feature extraction,and classification.The proposed model initially involves preprocessing stage which removes the presence of noise in the input image.Then,the watershed algorithm is applied to segment the preprocessed images.Followed by,feature extraction takes place by leveraging EOPSO-CNN model.Finally,the extracted feature vectors are provided to a Decision Tree(DT)classifier to classify the DR images.The study experiments were carried out using Messidor DR Dataset and the results showed an extraordinary performance by the proposed method over compared methods in a considerable way.The simulation outcome offered the maximum classification with accuracy,sensitivity,and specificity values being 98.47%,96.43%,and 99.02%respectively.展开更多
Osteoporotic Vertebral Fracture(OVFs)is a common lumbar spine disorder that severely affects the health of patients.With a clear bone blocks boundary,CT images have gained obvious advantages in OVFs diagnosis.Compared...Osteoporotic Vertebral Fracture(OVFs)is a common lumbar spine disorder that severely affects the health of patients.With a clear bone blocks boundary,CT images have gained obvious advantages in OVFs diagnosis.Compared with CT images,X-rays are faster and more inexpensive but often leads to misdiagnosis and miss-diagnosis because of the overlapping shadows.Considering how to transfer CT imaging advantages to achieve OVFs classification in X-rays is meaningful.For this purpose,we propose a multi-modal semantic consistency network which could do well X-ray OVFs classification by transferring CT semantic consistency features.Different from existing methods,we introduce a feature-level mix-up module to get the domain soft labels which helps the network reduce the domain offsets between CT and X-ray.In the meanwhile,the network uses a self-rotation pretext task on both CT and X-ray domains to enhance learning the high-level semantic invariant features.We employ five evaluation metrics to compare the proposed method with the state-of-the-art methods.The final results show that our method improves the best value of AUC from 86.32 to 92.16%.The results indicate that multi-modal semantic consistency method could use CT imaging features to improve osteoporotic vertebral fracture classification in X-rays effectively.展开更多
CT colonography (CTC) is a non-invasive screening technique for the detection of eolorectal polyps, as an alternative to optical colonoscopy in clinical practice. Computer-aided detection (CAD) for CTC refers to a...CT colonography (CTC) is a non-invasive screening technique for the detection of eolorectal polyps, as an alternative to optical colonoscopy in clinical practice. Computer-aided detection (CAD) for CTC refers to a scheme which automatically detects colorectal polyps and masses in CT images of the colon. It has the potential to increase radiologists' detection performance and greatly shorten the detection time. Over the years, technical developments have advanced CAD for CTC substantially. In this paper, key techniques used in CAD for polyp detection are reviewed. Illustrations about the performance of existing CAD schemes show their relatively high sensitivity and low false positive rate. However, these CAD schemes are still suffering from technical or clinical problems. Some existing challenges faced by CAD are also pointed out at the end of this paper.展开更多
Alzheimer’s disease (AD) is a dementing disorder and one of the major public health problems in countries with greater longevity. The cerebral cortical thickness and cerebral blood flow (CBF), which are considered as...Alzheimer’s disease (AD) is a dementing disorder and one of the major public health problems in countries with greater longevity. The cerebral cortical thickness and cerebral blood flow (CBF), which are considered as morphological and functional image features, respectively, could be decreased in specific cerebral regions of patients with dementia of Alzheimer type. Therefore, the aim of this study was to develop a computer-aided classification system for AD patients based on machine learning with the morphological and functional image features derived from a magnetic resonance (MR) imaging system. The cortical thicknesses in ten cerebral regions were derived as morphological features by using gradient vector trajectories in fuzzy membership images. Functional CBF maps were measured with an arterial spin labeling technique, and ten regional CBF values were obtained by registration between the CBF map and Talairach atlas using an affine transformation and a free form deformation. We applied two systems based on an arterial neural network (ANN) and a support vector machine (SVM), which were trained with 4 morphological and 6 functional image features, to 15 AD patients and 15 clinically normal (CN) subjects for classification of AD. The area under the receiver operating characteristic curve (AUC) values for the two systems based on the ANN and SVM with both image?features were 0.901 and 0.915, respectively. The AUC values for the ANN-and SVM-based systems with the morphological features were 0.710 and 0.660, respectively, and those with the functional features were 0.878 and 0.903, respectively. Our preliminary results suggest that the proposed method may have potential for assisting radiologists in the differential diagnosis of AD patients by using morphological and functional image features.展开更多
Breast Cancer(BC)is considered the most commonly scrutinized can-cer in women worldwide,affecting one in eight women in a lifetime.Mammogra-phy screening becomes one such standard method that is helpful in identifying...Breast Cancer(BC)is considered the most commonly scrutinized can-cer in women worldwide,affecting one in eight women in a lifetime.Mammogra-phy screening becomes one such standard method that is helpful in identifying suspicious masses’malignancy of BC at an initial level.However,the prior iden-tification of masses in mammograms was still challenging for extremely dense and dense breast categories and needs an effective and automatic mechanisms for helping radiotherapists in diagnosis.Deep learning(DL)techniques were broadly utilized for medical imaging applications,particularly breast mass classi-fication.The advancements in the DL field paved the way for highly intellectual and self-reliant computer-aided diagnosis(CAD)systems since the learning cap-ability of Machine Learning(ML)techniques was constantly improving.This paper presents a new Hyperparameter Tuned Deep Hybrid Denoising Autoenco-der Breast Cancer Classification(HTDHDAE-BCC)on Digital Mammograms.The presented HTDHDAE-BCC model examines the mammogram images for the identification of BC.In the HTDHDAE-BCC model,the initial stage of image preprocessing is carried out using an average median filter.In addition,the deep convolutional neural network-based Inception v4 model is employed to generate feature vectors.The parameter tuning process uses the binary spider monkey opti-mization(BSMO)algorithm.The HTDHDAE-BCC model exploits chameleon swarm optimization(CSO)with the DHDAE model for BC classification.The experimental analysis of the HTDHDAE-BCC model is performed using the MIAS database.The experimental outcomes demonstrate the betterments of the HTDHDAE-BCC model over other recent approaches.展开更多
With the rapid increase of new cases with an increased mortality rate,cancer is considered the second and most deadly disease globally.Breast cancer is the most widely affected cancer worldwide,with an increased death...With the rapid increase of new cases with an increased mortality rate,cancer is considered the second and most deadly disease globally.Breast cancer is the most widely affected cancer worldwide,with an increased death rate percentage.Due to radiologists’processing of mammogram images,many computer-aided diagnoses have been developed to detect breast cancer.Early detection of breast cancer will reduce the death rate worldwide.The early diagnosis of breast cancer using the developed computer-aided diagnosis(CAD)systems still needed to be enhanced by incorporating innovative deep learning technologies to improve the accuracy and sensitivity of the detection system with a reduced false positive rate.This paper proposed an efficient and optimized deep learning-based feature selection approach with this consideration.This model selects the relevant features from the mammogram images that can improve the accuracy of malignant detection and reduce the false alarm rate.Transfer learning is used in the extraction of features initially.Na ext,a convolution neural network,is used to extract the features.The two feature vectors are fused and optimized with enhanced Butterfly Optimization with Gaussian function(TL-CNN-EBOG)to select the final most relevant features.The optimized features are applied to the classifier called Deep belief network(DBN)to classify the benign and malignant images.The feature extraction and classification process used two datasets,breast,and MIAS.Compared to the existing methods,the optimized deep learning-based model secured 98.6%of improved accuracy on the breast dataset and 98.85%of improved accuracy on the MIAS dataset.展开更多
More than 500,000 patients are diagnosed with breast cancer annually.Authorities worldwide reported a death rate of 11.6%in 2018.Breast tumors are considered a fatal disease and primarily affect middle-aged women.Vari...More than 500,000 patients are diagnosed with breast cancer annually.Authorities worldwide reported a death rate of 11.6%in 2018.Breast tumors are considered a fatal disease and primarily affect middle-aged women.Various approaches to identify and classify the disease using different technologies,such as deep learning and image segmentation,have been developed.Some of these methods reach 99%accuracy.However,boosting accuracy remains highly important as patients’lives depend on early diagnosis and specified treatment plans.This paper presents a fully computerized method to detect and categorize tumor masses in the breast using two deep-learning models and a classifier on different datasets.This method specifically uses ResNet50 and AlexNet,convolutional neural networks(CNNs),for deep learning and a K-Nearest-Neighbor(KNN)algorithm to classify data.Various experiments have been conducted on five datasets:the Mammographic Image Analysis Society(MIAS),Breast Cancer Histopathological Annotation and Diagnosis(BreCaHAD),King Abdulaziz University Breast Cancer Mammogram Dataset(KAU-BCMD),Breast Histopathology Images(BHI),and Breast Cancer Histopathological Image Classification(BreakHis).These datasets were used to train,validate,and test the presented method.The obtained results achieved an average of 99.38%accuracy,surpassing other models.Essential performance quantities,including precision,recall,specificity,and F-score,reached 99.71%,99.46%,98.08%,and 99.67%,respectively.These outcomes indicate that the presented method offers essential aid to pathologists diagnosing breast cancer.This study suggests using the implemented algorithm to support physicians in analyzing breast cancer correctly.展开更多
Architectural distortion is an important ultrasonographic indicator of breast cancer. However, it is difficult for clinicians to determine whether a given lesion is malignant because such distortions can be subtle in ...Architectural distortion is an important ultrasonographic indicator of breast cancer. However, it is difficult for clinicians to determine whether a given lesion is malignant because such distortions can be subtle in ultrasonographic images. In this paper, we report on a study to develop a computerized scheme for the histological classification of masses with architectural distortions as a differential diagnosis aid. Our database consisted of 72 ultrasonographic images obtained from 47 patients whose masses had architectural distortions. This included 51 malignant (35 invasive and 16 non-invasive carcinomas) and 21 benign masses. In the proposed method, the location of the masses and the area occupied by them were first determined by an experienced clinician. Fourteen objective features concerning masses with architectural distortions were then extracted automatically by taking into account subjective features commonly used by experienced clinicians to describe such masses. The k-nearest neighbors (k-NN) rule was finally used to distinguish three histological classifications. The proposed method yielded classification accuracy values of 91.4% (32/35) for invasive carcinoma, 75.0% (12/16) for noninvasive carcinoma, and 85.7% (18/21) for benign mass, respectively. The sensitivity and specificity values were 92.2% (47/51) and 85.7% (18/21), respectively. The positive predictive values (PPV) were 88.9% (32/36) for invasive carcinoma and 85.7% (12/14) for noninvasive carcinoma whereas the negative predictive values (NPV) were 81.8% (18/22) for benign mass. Thus, the proposed method can help the differential diagnosis of masses with architectural distortions in ultrasonographic images.展开更多
The proposed deep learning algorithm will be integrated as a binary classifier under the umbrella of a multi-class classification tool to facilitate the automated detection of non-healthy deformities, anatomical landm...The proposed deep learning algorithm will be integrated as a binary classifier under the umbrella of a multi-class classification tool to facilitate the automated detection of non-healthy deformities, anatomical landmarks, pathological findings, other anomalies and normal cases, by examining medical endoscopic images of GI tract. Each binary classifier is trained to detect one specific non-healthy condition. The algorithm analyzed in the present work expands the ability of detection of this tool by classifying GI tract image snapshots into two classes, depicting haemorrhage and non-haemorrhage state. The proposed algorithm is the result of the collaboration between interdisciplinary specialists on AI and Data Analysis, Computer Vision, Gastroenterologists of four University Gastroenterology Departments of Greek Medical Schools. The data used are 195 videos (177 from non-healthy cases and 18 from healthy cases) videos captured from the PillCam<sup>(R)</sup> Medronics device, originated from 195 patients, all diagnosed with different forms of angioectasia, haemorrhages and other diseases from different sites of the gastrointestinal (GI), mainly including difficult cases of diagnosis. Our AI algorithm is based on convolutional neural network (CNN) trained on annotated images at image level, using a semantic tag indicating whether the image contains angioectasia and haemorrhage traces or not. At least 22 CNN architectures were created and evaluated some of which pre-trained applying transfer learning on ImageNet data. All the CNN variations were introduced, trained to a prevalence dataset of 50%, and evaluated of unseen data. On test data, the best results were obtained from our CNN architectures which do not utilize backbone of transfer learning. Across a balanced dataset from no-healthy images and healthy images from 39 videos from different patients, identified correct diagnosis with sensitivity 90%, specificity 92%, precision 91.8%, FPR 8%, FNR 10%. Besides, we compared the performance of our best CNN algorithm versus our same goal algorithm based on HSV colorimetric lesions features extracted of pixel-level annotations, both algorithms trained and tested on the same data. It is evaluated that the CNN trained on image level annotated images, is 9% less sensitive, achieves 2.6% less precision, 1.2% less FPR, and 7% less FNR, than that based on HSV filters, extracted from on pixel-level annotated training data.展开更多
The novel coronavirus disease,or COVID-19,is a hazardous disease.It is endangering the lives of many people living in more than two hundred countries.It directly affects the lungs.In general,two main imaging modalitie...The novel coronavirus disease,or COVID-19,is a hazardous disease.It is endangering the lives of many people living in more than two hundred countries.It directly affects the lungs.In general,two main imaging modalities,i.e.,computed tomography(CT)and chest x-ray(CXR)are used to achieve a speedy and reliable medical diagnosis.Identifying the coronavirus in medical images is exceedingly difficult for diagnosis,assessment,and treatment.It is demanding,time-consuming,and subject to human mistakes.In biological disciplines,excellent performance can be achieved by employing artificial intelligence(AI)models.As a subfield of AI,deep learning(DL)networks have drawn considerable attention than standard machine learning(ML)methods.DL models automatically carry out all the steps of feature extraction,feature selection,and classification.This study has performed comprehensive analysis of coronavirus classification using CXR and CT imaging modalities using DL architectures.Additionally,we have discussed how transfer learning is helpful in this regard.Finally,the problem of designing and implementing a system using computer-aided diagnostic(CAD)to find COVID-19 using DL approaches highlighted a future research possibility.展开更多
In this paper,we introduce an innovative method for computer-aided design(CAD)segmentation by concatenating meshes and CAD models.Many previous CAD segmentation methods have achieved impressive performance using singl...In this paper,we introduce an innovative method for computer-aided design(CAD)segmentation by concatenating meshes and CAD models.Many previous CAD segmentation methods have achieved impressive performance using single representations,such as meshes,CAD,and point clouds.However,existing methods cannot effectively combine different three-dimensional model types for the direct conversion,alignment,and integrity maintenance of geometric and topological information.Hence,we propose an integration approach that combines the geometric accuracy of CAD data with the flexibility of mesh representations,as well as introduce a unique hybrid representation that combines CAD and mesh models to enhance segmentation accuracy.To combine these two model types,our hybrid system utilizes advanced-neural-network techniques to convert CAD models into mesh models.For complex CAD models,model segmentation is crucial for model retrieval and reuse.In partial retrieval,it aims to segment a complex CAD model into several simple components.The first component of our hybrid system involves advanced mesh-labeling algorithms that harness the digitization of CAD properties to mesh models.The second component integrates labelled face features for CAD segmentation by leveraging the abundant multisemantic information embedded in CAD models.This combination of mesh and CAD not only refines the accuracy of boundary delineation but also provides a comprehensive understanding of the underlying object semantics.This study uses the Fusion 360 Gallery dataset.Experimental results indicate that our hybrid method can segment these models with higher accuracy than other methods that use single representations.展开更多
基金Supported by Science and Technology Projects in Guangzhou,No.2023A04J2282。
文摘BACKGROUND Computer-aided diagnosis(CAD)may assist endoscopists in identifying and classifying polyps during colonoscopy for detecting colorectal cancer.AIM To build a system using CAD to detect and classify polyps based on the Yamada classification.METHODS A total of 24045 polyp and 72367 nonpolyp images were obtained.We established a computer-aided detection and Yamada classification model based on the YOLOv7 neural network algorithm.Frame-based and image-based evaluation metrics were employed to assess the performance.RESULTS Computer-aided detection and Yamada classification screened polyps with a precision of 96.7%,a recall of 95.8%,and an F1-score of 96.2%,outperforming those of all groups of endoscopists.In regard to the Yamada classification of polyps,the CAD system displayed a precision of 82.3%,a recall of 78.5%,and an F1-score of 80.2%,outper-forming all levels of endoscopists.In addition,according to the image-based method,the CAD had an accuracy of 99.2%,a specificity of 99.5%,a sensitivity of 98.5%,a positive predictive value of 99.0%,a negative predictive value of 99.2%for polyp detection and an accuracy of 97.2%,a specificity of 98.4%,a sensitivity of 79.2%,a positive predictive value of 83.0%,and a negative predictive value of 98.4%for poly Yamada classification.CONCLUSION We developed a novel CAD system based on a deep neural network for polyp detection,and the Yamada classi-fication outperformed that of nonexpert endoscopists.This CAD system could help community-based hospitals enhance their effectiveness in polyp detection and classification.
基金the Research Grant of Kwangwoon University in 2024.
文摘Myocardial perfusion imaging(MPI),which uses single-photon emission computed tomography(SPECT),is a well-known estimating tool for medical diagnosis,employing the classification of images to show situations in coronary artery disease(CAD).The automatic classification of SPECT images for different techniques has achieved near-optimal accuracy when using convolutional neural networks(CNNs).This paper uses a SPECT classification framework with three steps:1)Image denoising,2)Attenuation correction,and 3)Image classification.Image denoising is done by a U-Net architecture that ensures effective image denoising.Attenuation correction is implemented by a convolution neural network model that can remove the attenuation that affects the feature extraction process of classification.Finally,a novel multi-scale diluted convolution(MSDC)network is proposed.It merges the features extracted in different scales and makes the model learn the features more efficiently.Three scales of filters with size 3×3 are used to extract features.All three steps are compared with state-of-the-art methods.The proposed denoising architecture ensures a high-quality image with the highest peak signal-to-noise ratio(PSNR)value of 39.7.The proposed classification method is compared with the five different CNN models,and the proposed method ensures better classification with an accuracy of 96%,precision of 87%,sensitivity of 87%,specificity of 89%,and F1-score of 87%.To demonstrate the importance of preprocessing,the classification model was analyzed without denoising and attenuation correction.
基金Prince Sattam bin Abdulaziz University for funding this research work through the project number(PSAU/2024/03/31540).
文摘Melanoma is the deadliest form of skin cancer,with an increasing incidence over recent years.Over the past decade,researchers have recognized the potential of computer vision algorithms to aid in the early diagnosis of melanoma.As a result,a number of works have been dedicated to developing efficient machine learning models for its accurate classification;still,there remains a large window for improvement necessitating further research efforts.Limitations of the existing methods include lower accuracy and high computational complexity,which may be addressed by identifying and selecting the most discriminative features to improve classification accuracy.In this work,we apply transfer learning to a Nasnet-Mobile CNN model to extract deep features and augment it with a novel nature-inspired feature selection algorithm called Mutated Binary Artificial Bee Colony.The selected features are fed to multiple classifiers for final classification.We use PH2,ISIC-2016,and HAM10000 datasets for experimentation,supported by Monte Carlo simulations for thoroughly evaluating the proposed feature selection mechanism.We carry out a detailed comparison with various benchmark works in terms of convergence rate,accuracy histogram,and reduction percentage histogram,where our method reports 99.15%(2-class)and 97.5%(3-class)accuracy on the PH^(2) dataset,while 96.12%and 94.1%accuracy for the other two datasets,respectively,against minimal features.
文摘Purpose: Surgical templates produced by digital simulation and CAD/CAM allow for three-dimensional control of implant placement. However, due to clinical limitations, there are complications during the use of the template. The purpose of this study was to summarize the complications associated with the use of surgical templates for static computer-aided implant surgery. Methods: Complications were collected during the observation period, and then their implant sites were reanalyzed with simulation software. Results: There were 104 cases during the observation period, 5 cases had complications. Mechanical complications were observed in four cases, including three cases in which the frame of the template fractured during implant placement surgery and one case in which the sleeve fell off the surgical template. In one case, there was an error in the planned position. All cases were mandibular molar cases, and all cases of frame fracture were at the free end defect site. All cases had a Hounsfield unit of more than 700 at the implant site, and some of them had a significantly small jaw opening. Conclusion: Although the spread of CAD/CAM surgical templates has made it possible to avoid problems caused by the position of the implant, it has been difficult to avoid fractures in cases of mandibular free end defects with high Hounsfield unit.
文摘AIM:To support probe-based confocal laser endomi-croscopy (pCLE) diagnosis by designing software for the automated classification of colonic polyps. METHODS:Intravenous fluorescein pCLE imaging of colorectal lesions was performed on patients under-going screening and surveillance colonoscopies, followed by polypectomies. All resected specimens were reviewed by a reference gastrointestinal pathologist blinded to pCLE information. Histopathology was used as the criterion standard for the differentiation between neoplastic and non-neoplastic lesions. The pCLE video sequences, recorded for each polyp, were analyzed off-line by 2 expert endoscopists who were blinded to the endoscopic characteristics and histopathology. These pCLE videos, along with their histopathology diagnosis, were used to train the automated classification software which is a content-based image retrieval technique followed by k-nearest neighbor classification. The performance of the off-line diagnosis of pCLE videos established by the 2 expert endoscopists was compared with that of automated pCLE software classification. All evaluations were performed using leave-one-patient- out cross-validation to avoid bias. RESULTS:Colorectal lesions (135) were imaged in 71 patients. Based on histopathology, 93 of these 135 lesions were neoplastic and 42 were non-neoplastic. The study found no statistical significance for the difference between the performance of automated pCLE software classification (accuracy 89.6%, sensitivity 92.5%, specificity 83.3%, using leave-one-patient-out cross-validation) and the performance of the off-line diagnosis of pCLE videos established by the 2 expert endoscopists (accuracy 89.6%, sensitivity 91.4%, specificity 85.7%). There was very low power (< 6%) to detect the observed differences. The 95% confidence intervals for equivalence testing were:-0.073 to 0.073 for accuracy, -0.068 to 0.089 for sensitivity and -0.18 to 0.13 for specificity. The classification software proposed in this study is not a "black box" but an informative tool based on the query by example model that produces, as intermediate results, visually similar annotated videos that are directly interpretable by the endoscopist. CONCLUSION:The proposed software for automated classification of pCLE videos of colonic polyps achieves high performance, comparable to that of off-line diagnosis of pCLE videos established by expert endoscopists.
文摘The major mortality factor relevant to the intestinal tract is the growth of tumorous cells(polyps)in various parts.More specifically,colonic polyps have a high rate and are recognized as a precursor of colon cancer growth.Endoscopy is the conventional technique for detecting colon polyps,and considerable research has proved that automated diagnosis of image regions that might have polyps within the colon might be used to help experts for decreasing the polyp miss rate.The automated diagnosis of polyps in a computer-aided diagnosis(CAD)method is implemented using statistical analysis.Nowadays,Deep Learning,particularly throughConvolution Neural networks(CNN),is broadly employed to allowthe extraction of representative features.This manuscript devises a new Northern Goshawk Optimization with Transfer Learning Model for Colonic Polyp Detection and Classification(NGOTL-CPDC)model.The NGOTL-CPDC technique aims to investigate endoscopic images for automated colonic polyp detection.To accomplish this,the NGOTL-CPDC technique comprises of adaptive bilateral filtering(ABF)technique as a noise removal process and image pre-processing step.Besides,the NGOTL-CPDC model applies the Faster SqueezeNet model for feature extraction purposes in which the hyperparameter tuning process is performed using the NGO optimizer.Finally,the fuzzy Hopfield neural network(FHNN)method can be employed for colonic poly detection and classification.A widespread simulation analysis is carried out to ensure the improved outcomes of the NGOTL-CPDC model.The comparison study demonstrates the enhancements of the NGOTL-CPDC model on the colonic polyp classification process on medical test images.
基金This research was funded by the Deanship of Scientific Research at Princess Nourah bint Abdulrahman University through the Fast-track Research Funding Program.
文摘Osteosarcoma is one of the most widespread causes of bone cancer globally and has a high mortality rate.Early diagnosis may increase the chances of treatment and survival however the process is time-consuming(reliability and complexity involved to extract the hand-crafted features)and largely depends on pathologists’experience.Convolutional Neural Network(CNN—an end-to-end model)is known to be an alternative to overcome the aforesaid problems.Therefore,this work proposes a compact CNN architecture that has been rigorously explored on a Small Osteosarcoma histology Image Dataaseet(a high-class imbalanced dataset).Though,during training,class-imbalanced data can negatively affect the performance of CNN.Therefore,an oversampling technique has been proposed to overcome the aforesaid issue and improve generalization performance.In this process,a hierarchical CNN model is designed,in which the former model is non-regularized(due to dense architecture)and the later one is regularized,specifically designed for small histopathology images.Moreover,the regularized model is integrated with CNN’s basic architecture to reduce overfitting.Experimental results demonstrate that oversampling might be an effective way to address the imbalanced class problem during training.The training and testing accuracies of the non-regularized CNN model are 98%&78%with an imbalanced dataset and 96%&81%with a balanced dataset,respectively.The regularized CNN model training and testing accuracies are 84%&75%for an imbalanced dataset and 87%&86%for a balanced dataset.
文摘The general computer-aided design (CAD) software cannot meet the mould design requirement of the autoclave process for composites, because many parameters such as temperature and pressure should be considered in the mould design process, in addition to the material and geometry of the part. A framed-mould computer-aided design system (FMCAD) used in the autoclave moulding process is proposed in this paper. A function model of the software is presented, in which influence factors such as part structure, mould structure, and process parameters are considered; a design model of the software is established using object oriented (O-O) technology to integrate the stiffness calculation, temperature field calculation, and deformation field calculation of mould in the design, and in the design model, a hybrid model of mould based on calculation feature and form feature is presented to support those calculations. A prototype system is developed, in which a mould design process wizard is built to integrate the input information, calculation, analysis, data storage, display, and design results of mould design. Finally, three design examples are used to verify the prototype.
文摘Diabetic Retinopathy(DR)is a significant blinding disease that poses serious threat to human vision rapidly.Classification and severity grading of DR are difficult processes to accomplish.Traditionally,it depends on ophthalmoscopically-visible symptoms of growing severity,which is then ranked in a stepwise scale from no retinopathy to various levels of DR severity.This paper presents an ensemble of Orthogonal Learning Particle Swarm Optimization(OPSO)algorithm-based Convolutional Neural Network(CNN)Model EOPSO-CNN in order to perform DR detection and grading.The proposed EOPSO-CNN model involves three main processes such as preprocessing,feature extraction,and classification.The proposed model initially involves preprocessing stage which removes the presence of noise in the input image.Then,the watershed algorithm is applied to segment the preprocessed images.Followed by,feature extraction takes place by leveraging EOPSO-CNN model.Finally,the extracted feature vectors are provided to a Decision Tree(DT)classifier to classify the DR images.The study experiments were carried out using Messidor DR Dataset and the results showed an extraordinary performance by the proposed method over compared methods in a considerable way.The simulation outcome offered the maximum classification with accuracy,sensitivity,and specificity values being 98.47%,96.43%,and 99.02%respectively.
基金National Natural Science Foundation of China(U21A20390)National Key Research and Development Program of China(2018YFC2001302)+2 种基金Development Project of Jilin Province of China(nos.20200801033GH,20200403172SF,YDZJ202101ZYTS128)Jilin Provincial Key Laboratory of Big Data Intelligent Computing(no.20180622002JC)The Fundamental Research Funds for the Central University,JLU.
文摘Osteoporotic Vertebral Fracture(OVFs)is a common lumbar spine disorder that severely affects the health of patients.With a clear bone blocks boundary,CT images have gained obvious advantages in OVFs diagnosis.Compared with CT images,X-rays are faster and more inexpensive but often leads to misdiagnosis and miss-diagnosis because of the overlapping shadows.Considering how to transfer CT imaging advantages to achieve OVFs classification in X-rays is meaningful.For this purpose,we propose a multi-modal semantic consistency network which could do well X-ray OVFs classification by transferring CT semantic consistency features.Different from existing methods,we introduce a feature-level mix-up module to get the domain soft labels which helps the network reduce the domain offsets between CT and X-ray.In the meanwhile,the network uses a self-rotation pretext task on both CT and X-ray domains to enhance learning the high-level semantic invariant features.We employ five evaluation metrics to compare the proposed method with the state-of-the-art methods.The final results show that our method improves the best value of AUC from 86.32 to 92.16%.The results indicate that multi-modal semantic consistency method could use CT imaging features to improve osteoporotic vertebral fracture classification in X-rays effectively.
基金the National Natural Science Foundation of China(No.813716234)the National Basic Research Program(973) of China(No.2010CB834302)the Shanghai Jiao Tong University Medical Engineering Cross Research Funds(Nos.YG2013MS30 and YG2011MS51)
文摘CT colonography (CTC) is a non-invasive screening technique for the detection of eolorectal polyps, as an alternative to optical colonoscopy in clinical practice. Computer-aided detection (CAD) for CTC refers to a scheme which automatically detects colorectal polyps and masses in CT images of the colon. It has the potential to increase radiologists' detection performance and greatly shorten the detection time. Over the years, technical developments have advanced CAD for CTC substantially. In this paper, key techniques used in CAD for polyp detection are reviewed. Illustrations about the performance of existing CAD schemes show their relatively high sensitivity and low false positive rate. However, these CAD schemes are still suffering from technical or clinical problems. Some existing challenges faced by CAD are also pointed out at the end of this paper.
文摘Alzheimer’s disease (AD) is a dementing disorder and one of the major public health problems in countries with greater longevity. The cerebral cortical thickness and cerebral blood flow (CBF), which are considered as morphological and functional image features, respectively, could be decreased in specific cerebral regions of patients with dementia of Alzheimer type. Therefore, the aim of this study was to develop a computer-aided classification system for AD patients based on machine learning with the morphological and functional image features derived from a magnetic resonance (MR) imaging system. The cortical thicknesses in ten cerebral regions were derived as morphological features by using gradient vector trajectories in fuzzy membership images. Functional CBF maps were measured with an arterial spin labeling technique, and ten regional CBF values were obtained by registration between the CBF map and Talairach atlas using an affine transformation and a free form deformation. We applied two systems based on an arterial neural network (ANN) and a support vector machine (SVM), which were trained with 4 morphological and 6 functional image features, to 15 AD patients and 15 clinically normal (CN) subjects for classification of AD. The area under the receiver operating characteristic curve (AUC) values for the two systems based on the ANN and SVM with both image?features were 0.901 and 0.915, respectively. The AUC values for the ANN-and SVM-based systems with the morphological features were 0.710 and 0.660, respectively, and those with the functional features were 0.878 and 0.903, respectively. Our preliminary results suggest that the proposed method may have potential for assisting radiologists in the differential diagnosis of AD patients by using morphological and functional image features.
基金This project was supported by the Deanship of Scientific Research at Prince SattamBin Abdulaziz University under research Project#(PSAU-2022/01/20287).
文摘Breast Cancer(BC)is considered the most commonly scrutinized can-cer in women worldwide,affecting one in eight women in a lifetime.Mammogra-phy screening becomes one such standard method that is helpful in identifying suspicious masses’malignancy of BC at an initial level.However,the prior iden-tification of masses in mammograms was still challenging for extremely dense and dense breast categories and needs an effective and automatic mechanisms for helping radiotherapists in diagnosis.Deep learning(DL)techniques were broadly utilized for medical imaging applications,particularly breast mass classi-fication.The advancements in the DL field paved the way for highly intellectual and self-reliant computer-aided diagnosis(CAD)systems since the learning cap-ability of Machine Learning(ML)techniques was constantly improving.This paper presents a new Hyperparameter Tuned Deep Hybrid Denoising Autoenco-der Breast Cancer Classification(HTDHDAE-BCC)on Digital Mammograms.The presented HTDHDAE-BCC model examines the mammogram images for the identification of BC.In the HTDHDAE-BCC model,the initial stage of image preprocessing is carried out using an average median filter.In addition,the deep convolutional neural network-based Inception v4 model is employed to generate feature vectors.The parameter tuning process uses the binary spider monkey opti-mization(BSMO)algorithm.The HTDHDAE-BCC model exploits chameleon swarm optimization(CSO)with the DHDAE model for BC classification.The experimental analysis of the HTDHDAE-BCC model is performed using the MIAS database.The experimental outcomes demonstrate the betterments of the HTDHDAE-BCC model over other recent approaches.
基金Princess Nourah bint Abdulrahman University Researchers Supporting Project number(PNURSP2022R151)Princess Nourah bint Abdulrahman University,Riyadh,Saudi Arabia.The authors would like to thank the Deanship of Scientific Research at Umm Al-Qura University for supporting this work by Grant Code:(22UQU4310373DSR12).
文摘With the rapid increase of new cases with an increased mortality rate,cancer is considered the second and most deadly disease globally.Breast cancer is the most widely affected cancer worldwide,with an increased death rate percentage.Due to radiologists’processing of mammogram images,many computer-aided diagnoses have been developed to detect breast cancer.Early detection of breast cancer will reduce the death rate worldwide.The early diagnosis of breast cancer using the developed computer-aided diagnosis(CAD)systems still needed to be enhanced by incorporating innovative deep learning technologies to improve the accuracy and sensitivity of the detection system with a reduced false positive rate.This paper proposed an efficient and optimized deep learning-based feature selection approach with this consideration.This model selects the relevant features from the mammogram images that can improve the accuracy of malignant detection and reduce the false alarm rate.Transfer learning is used in the extraction of features initially.Na ext,a convolution neural network,is used to extract the features.The two feature vectors are fused and optimized with enhanced Butterfly Optimization with Gaussian function(TL-CNN-EBOG)to select the final most relevant features.The optimized features are applied to the classifier called Deep belief network(DBN)to classify the benign and malignant images.The feature extraction and classification process used two datasets,breast,and MIAS.Compared to the existing methods,the optimized deep learning-based model secured 98.6%of improved accuracy on the breast dataset and 98.85%of improved accuracy on the MIAS dataset.
基金The authors extend their appreciation to the Deanship of Scientific Research at Northern Border University,Arar,KSA for funding this research work through the project number“NBU-FFR-2023-0009”.
文摘More than 500,000 patients are diagnosed with breast cancer annually.Authorities worldwide reported a death rate of 11.6%in 2018.Breast tumors are considered a fatal disease and primarily affect middle-aged women.Various approaches to identify and classify the disease using different technologies,such as deep learning and image segmentation,have been developed.Some of these methods reach 99%accuracy.However,boosting accuracy remains highly important as patients’lives depend on early diagnosis and specified treatment plans.This paper presents a fully computerized method to detect and categorize tumor masses in the breast using two deep-learning models and a classifier on different datasets.This method specifically uses ResNet50 and AlexNet,convolutional neural networks(CNNs),for deep learning and a K-Nearest-Neighbor(KNN)algorithm to classify data.Various experiments have been conducted on five datasets:the Mammographic Image Analysis Society(MIAS),Breast Cancer Histopathological Annotation and Diagnosis(BreCaHAD),King Abdulaziz University Breast Cancer Mammogram Dataset(KAU-BCMD),Breast Histopathology Images(BHI),and Breast Cancer Histopathological Image Classification(BreakHis).These datasets were used to train,validate,and test the presented method.The obtained results achieved an average of 99.38%accuracy,surpassing other models.Essential performance quantities,including precision,recall,specificity,and F-score,reached 99.71%,99.46%,98.08%,and 99.67%,respectively.These outcomes indicate that the presented method offers essential aid to pathologists diagnosing breast cancer.This study suggests using the implemented algorithm to support physicians in analyzing breast cancer correctly.
文摘Architectural distortion is an important ultrasonographic indicator of breast cancer. However, it is difficult for clinicians to determine whether a given lesion is malignant because such distortions can be subtle in ultrasonographic images. In this paper, we report on a study to develop a computerized scheme for the histological classification of masses with architectural distortions as a differential diagnosis aid. Our database consisted of 72 ultrasonographic images obtained from 47 patients whose masses had architectural distortions. This included 51 malignant (35 invasive and 16 non-invasive carcinomas) and 21 benign masses. In the proposed method, the location of the masses and the area occupied by them were first determined by an experienced clinician. Fourteen objective features concerning masses with architectural distortions were then extracted automatically by taking into account subjective features commonly used by experienced clinicians to describe such masses. The k-nearest neighbors (k-NN) rule was finally used to distinguish three histological classifications. The proposed method yielded classification accuracy values of 91.4% (32/35) for invasive carcinoma, 75.0% (12/16) for noninvasive carcinoma, and 85.7% (18/21) for benign mass, respectively. The sensitivity and specificity values were 92.2% (47/51) and 85.7% (18/21), respectively. The positive predictive values (PPV) were 88.9% (32/36) for invasive carcinoma and 85.7% (12/14) for noninvasive carcinoma whereas the negative predictive values (NPV) were 81.8% (18/22) for benign mass. Thus, the proposed method can help the differential diagnosis of masses with architectural distortions in ultrasonographic images.
文摘The proposed deep learning algorithm will be integrated as a binary classifier under the umbrella of a multi-class classification tool to facilitate the automated detection of non-healthy deformities, anatomical landmarks, pathological findings, other anomalies and normal cases, by examining medical endoscopic images of GI tract. Each binary classifier is trained to detect one specific non-healthy condition. The algorithm analyzed in the present work expands the ability of detection of this tool by classifying GI tract image snapshots into two classes, depicting haemorrhage and non-haemorrhage state. The proposed algorithm is the result of the collaboration between interdisciplinary specialists on AI and Data Analysis, Computer Vision, Gastroenterologists of four University Gastroenterology Departments of Greek Medical Schools. The data used are 195 videos (177 from non-healthy cases and 18 from healthy cases) videos captured from the PillCam<sup>(R)</sup> Medronics device, originated from 195 patients, all diagnosed with different forms of angioectasia, haemorrhages and other diseases from different sites of the gastrointestinal (GI), mainly including difficult cases of diagnosis. Our AI algorithm is based on convolutional neural network (CNN) trained on annotated images at image level, using a semantic tag indicating whether the image contains angioectasia and haemorrhage traces or not. At least 22 CNN architectures were created and evaluated some of which pre-trained applying transfer learning on ImageNet data. All the CNN variations were introduced, trained to a prevalence dataset of 50%, and evaluated of unseen data. On test data, the best results were obtained from our CNN architectures which do not utilize backbone of transfer learning. Across a balanced dataset from no-healthy images and healthy images from 39 videos from different patients, identified correct diagnosis with sensitivity 90%, specificity 92%, precision 91.8%, FPR 8%, FNR 10%. Besides, we compared the performance of our best CNN algorithm versus our same goal algorithm based on HSV colorimetric lesions features extracted of pixel-level annotations, both algorithms trained and tested on the same data. It is evaluated that the CNN trained on image level annotated images, is 9% less sensitive, achieves 2.6% less precision, 1.2% less FPR, and 7% less FNR, than that based on HSV filters, extracted from on pixel-level annotated training data.
文摘The novel coronavirus disease,or COVID-19,is a hazardous disease.It is endangering the lives of many people living in more than two hundred countries.It directly affects the lungs.In general,two main imaging modalities,i.e.,computed tomography(CT)and chest x-ray(CXR)are used to achieve a speedy and reliable medical diagnosis.Identifying the coronavirus in medical images is exceedingly difficult for diagnosis,assessment,and treatment.It is demanding,time-consuming,and subject to human mistakes.In biological disciplines,excellent performance can be achieved by employing artificial intelligence(AI)models.As a subfield of AI,deep learning(DL)networks have drawn considerable attention than standard machine learning(ML)methods.DL models automatically carry out all the steps of feature extraction,feature selection,and classification.This study has performed comprehensive analysis of coronavirus classification using CXR and CT imaging modalities using DL architectures.Additionally,we have discussed how transfer learning is helpful in this regard.Finally,the problem of designing and implementing a system using computer-aided diagnostic(CAD)to find COVID-19 using DL approaches highlighted a future research possibility.
基金Supported by the National Key Research and Development Program of China(2024YFB3311703)National Natural Science Foundation of China(61932003)Beijing Science and Technology Plan Project(Z221100006322003).
文摘In this paper,we introduce an innovative method for computer-aided design(CAD)segmentation by concatenating meshes and CAD models.Many previous CAD segmentation methods have achieved impressive performance using single representations,such as meshes,CAD,and point clouds.However,existing methods cannot effectively combine different three-dimensional model types for the direct conversion,alignment,and integrity maintenance of geometric and topological information.Hence,we propose an integration approach that combines the geometric accuracy of CAD data with the flexibility of mesh representations,as well as introduce a unique hybrid representation that combines CAD and mesh models to enhance segmentation accuracy.To combine these two model types,our hybrid system utilizes advanced-neural-network techniques to convert CAD models into mesh models.For complex CAD models,model segmentation is crucial for model retrieval and reuse.In partial retrieval,it aims to segment a complex CAD model into several simple components.The first component of our hybrid system involves advanced mesh-labeling algorithms that harness the digitization of CAD properties to mesh models.The second component integrates labelled face features for CAD segmentation by leveraging the abundant multisemantic information embedded in CAD models.This combination of mesh and CAD not only refines the accuracy of boundary delineation but also provides a comprehensive understanding of the underlying object semantics.This study uses the Fusion 360 Gallery dataset.Experimental results indicate that our hybrid method can segment these models with higher accuracy than other methods that use single representations.