In order to fast transmission and processing of medical images and do not need to install client and plug-ins, the paper designed a kind of medical image reading system based on BS structure. This system improved the ...In order to fast transmission and processing of medical images and do not need to install client and plug-ins, the paper designed a kind of medical image reading system based on BS structure. This system improved the existing IWEB in the framework of PACS client image processing, medical image based on the service WEB completion port model. To realize the fast loading images with high concurrency, compared with the traditional WEB PACS, this system has the advantages of no client without plug-in installation, at the same time in the transmission and processing performance image has been greatly improved.展开更多
Osteosarcomas are malignant neoplasms derived from undifferentiated osteogenic mesenchymal cells. It causes severe and permanent damage to human tissue and has a high mortality rate. The condition has the capacity to ...Osteosarcomas are malignant neoplasms derived from undifferentiated osteogenic mesenchymal cells. It causes severe and permanent damage to human tissue and has a high mortality rate. The condition has the capacity to occur in any bone;however, it often impacts long bones like the arms and legs. Prompt identification and prompt intervention are essential for augmenting patient longevity. However, the intricate composition and erratic placement of osteosarcoma provide difficulties for clinicians in accurately determining the scope of the afflicted area. There is a pressing requirement for developing an algorithm that can automatically detect bone tumors with tremendous accuracy. Therefore, in this study, we proposed a novel feature extractor framework associated with a supervised three-class XGBoost algorithm for the detection of osteosarcoma in whole slide histopathology images. This method allows for quicker and more effective data analysis. The first step involves preprocessing the imbalanced histopathology dataset, followed by augmentation and balancing utilizing two techniques: SMOTE and ADASYN. Next, a unique feature extraction framework is used to extract features, which are then inputted into the supervised three-class XGBoost algorithm for classification into three categories: non-tumor, viable tumor, and non-viable tumor. The experimental findings indicate that the proposed model exhibits superior efficiency, accuracy, and a more lightweight design in comparison to other current models for osteosarcoma detection.展开更多
The Internet of Multimedia Things(IoMT)refers to a network of interconnected multimedia devices that communicate with each other over the Internet.Recently,smart healthcare has emerged as a significant application of ...The Internet of Multimedia Things(IoMT)refers to a network of interconnected multimedia devices that communicate with each other over the Internet.Recently,smart healthcare has emerged as a significant application of the IoMT,particularly in the context of knowledge‐based learning systems.Smart healthcare systems leverage knowledge‐based learning to become more context‐aware,adaptable,and auditable while maintain-ing the ability to learn from historical data.In smart healthcare systems,devices capture images,such as X‐rays,Magnetic Resonance Imaging.The security and integrity of these images are crucial for the databases used in knowledge‐based learning systems to foster structured decision‐making and enhance the learning abilities of AI.Moreover,in knowledge‐driven systems,the storage and transmission of HD medical images exert a burden on the limited bandwidth of the communication channel,leading to data trans-mission delays.To address the security and latency concerns,this paper presents a lightweight medical image encryption scheme utilising bit‐plane decomposition and chaos theory.The results of the experiment yield entropy,energy,and correlation values of 7.999,0.0156,and 0.0001,respectively.This validates the effectiveness of the encryption system proposed in this paper,which offers high‐quality encryption,a large key space,key sensitivity,and resistance to statistical attacks.展开更多
Carotid artery plaques represent a major contributor to the morbidity and mortality associated with cerebrovascular disease,and their clinical significance is largely determined by the risk linked to plaque vulnerabil...Carotid artery plaques represent a major contributor to the morbidity and mortality associated with cerebrovascular disease,and their clinical significance is largely determined by the risk linked to plaque vulnerability.Therefore,classifying plaque risk constitutes one of themost critical tasks in the clinicalmanagement of this condition.While classification models derived from individual medical centers have been extensively investigated,these singlecenter models often fail to generalize well to multi-center data due to variations in ultrasound images caused by differences in physician expertise and equipment.To address this limitation,a Dual-Classifier Label Correction Networkmodel(DCLCN)is proposed for the classification of carotid plaque ultrasound images acrossmultiplemedical centers.TheDCLCNdesigns amulti-center domain adaptationmodule that leverages a dual-classifier strategy to extract knowledge from both source and target centers,thereby reducing feature discrepancies through a domain adaptation layer.Additionally,to mitigate the impact of image noise,a label modeling and correction module is introduced to generate pseudo-labels for the target centers and iteratively refine them using an end-to-end correction mechanism.Experiments on the carotid plaque dataset collected fromthreemedical centers demonstrate that the DCLCN achieves commendable performance and robustness.展开更多
Medical image processing becomes a hot research topic in healthcare sector for effective decision making and diagnoses of diseases.Magnetic resonance imaging(MRI)is a widely utilized tool for the classification and de...Medical image processing becomes a hot research topic in healthcare sector for effective decision making and diagnoses of diseases.Magnetic resonance imaging(MRI)is a widely utilized tool for the classification and detection of prostate cancer.Since the manual screening process of prostate cancer is difficult,automated diagnostic methods become essential.This study develops a novel Deep Learning based Prostate Cancer Classification(DTL-PSCC)model using MRI images.The presented DTL-PSCC technique encompasses EfficientNet based feature extractor for the generation of a set of feature vectors.In addition,the fuzzy k-nearest neighbour(FKNN)model is utilized for classification process where the class labels are allotted to the input MRI images.Moreover,the membership value of the FKNN model can be optimally tuned by the use of krill herd algorithm(KHA)which results in improved classification performance.In order to demonstrate the good classification outcome of the DTL-PSCC technique,a wide range of simulations take place on benchmark MRI datasets.The extensive comparative results ensured the betterment of the DTL-PSCC technique over the recent methods with the maximum accuracy of 85.09%.展开更多
In intelligent perception and diagnosis of medical equipment,the visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases(e.g.,diabetes and hypertension).Intel...In intelligent perception and diagnosis of medical equipment,the visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases(e.g.,diabetes and hypertension).Intelligent auxiliary diagnosis of these diseases depends on the accuracy of the retinal vascular segmentation results.To address this challenge,we design a Dual-Branch-UNet framework,which comprises a Dual-Branch encoder structure for feature extraction based on the traditional U-Net model for medical image segmentation.To be more explicit,we utilize a novel parallel encoder made up of various convolutional modules to enhance the encoder portion of the original U-Net.Then,image features are combined at each layer to produce richer semantic data and the model’s capacity is adjusted to various input images.Meanwhile,in the lower sampling section,we give up pooling and conduct the lower sampling by convolution operation to control step size for information fusion.We also employ an attentionmodule in the decoder stage to filter the image noises so as to lessen the response of irrelevant features.Experiments are verified and compared on the DRIVE and ARIA datasets for retinal vessels segmentation.The proposed Dual-Branch-UNet has proved to be superior to other five typical state-of-the-art methods.展开更多
Medical image compression is one of the essential technologies to facilitate real-time medical data transmission in remote healthcare applications.In general,image compression can introduce undesired coding artifacts,...Medical image compression is one of the essential technologies to facilitate real-time medical data transmission in remote healthcare applications.In general,image compression can introduce undesired coding artifacts,such as blocking artifacts and ringing effects.In this paper,we proposed a Multi-Scale Feature Attention Network(MSFAN)with two essential parts,which are multi-scale feature extraction layers and feature attention layers to efficiently remove coding artifacts of compressed medical images.Multiscale feature extraction layers have four Feature Extraction(FE)blocks.Each FE block consists of five convolution layers and one CA block for weighted skip connection.In order to optimize the proposed network architectures,a variety of verification tests were conducted using validation dataset.We used Computer Vision Center-Clinic Database(CVC-ClinicDB)consisting of 612 colonoscopy medical images to evaluate the enhancement of image restoration.The proposedMSFAN can achieve improved PSNR gains as high as 0.25 and 0.24 dB on average compared to DnCNNand DCSC,respectively.展开更多
Assuring medical images protection and robustness is a compulsory necessity nowadays.In this paper,a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Tr...Assuring medical images protection and robustness is a compulsory necessity nowadays.In this paper,a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Transform(DWT)with the energy compaction of the Discrete Wavelet Transform(DCT).The multi-level Encryption-based Hybrid Fusion Technique(EbhFT)aims to achieve great advances in terms of imperceptibility and security of medical images.A DWT disintegrated sub-band of a cover image is reformed simultaneously using the DCT transform.Afterwards,a 64-bit hex key is employed to encrypt the host image as well as participate in the second key creation process to encode the watermark.Lastly,a PN-sequence key is formed along with a supplementary key in the third layer of the EbHFT.Thus,the watermarked image is generated by enclosing both keys into DWT and DCT coefficients.The fusions ability of the proposed EbHFT technique makes the best use of the distinct privileges of using both DWT and DCT methods.In order to validate the proposed technique,a standard dataset of medical images is used.Simulation results show higher performance of the visual quality(i.e.,57.65)for the watermarked forms of all types of medical images.In addition,EbHFT robustness outperforms an existing scheme tested for the same dataset in terms of Normalized Correlation(NC).Finally,extra protection for digital images from against illegal replicating and unapproved tampering using the proposed technique.展开更多
Subarachnoid haemorrhage(SAH),mostly caused by the rupture of intracranial aneu-rysm,is a common disease with a high fatality rate.SAH lesions are generally diffusely distributed,showing a variety of scales with irreg...Subarachnoid haemorrhage(SAH),mostly caused by the rupture of intracranial aneu-rysm,is a common disease with a high fatality rate.SAH lesions are generally diffusely distributed,showing a variety of scales with irregular edges.The complex characteristics of lesions make SAH segmentation a challenging task.To cope with these difficulties,a u-shaped deformable transformer(UDT)is proposed for SAH segmentation.Specifically,first,a multi-scale deformable attention(MSDA)module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions,where the MSDA module can fuse features in different scales and adjust the attention field of each element dynamically to generate discriminative multi-scale features.Second,the cross deformable attention-based skip connection(CDASC)module is designed to model the irregular edge char-acteristic of SAH lesions,where the CDASC module can utilise the spatial details from encoder features to refine the spatial information of decoder features.Third,the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT.Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets(GlaS and MoNuSeg).Experimental results show that the presented UDT achieves the state-of-the-art performance.展开更多
Wound classification is a critical task in healthcare,requiring accurate and efficient diagnostic tools to support clinicians.In this paper,we investigated the effectiveness of the YOLO11n model in classifying differe...Wound classification is a critical task in healthcare,requiring accurate and efficient diagnostic tools to support clinicians.In this paper,we investigated the effectiveness of the YOLO11n model in classifying different types of wound images.This study presents the training and evaluation of a lightweight YOLO11n model for automated wound classification using the AZH dataset,which includes six wound classes:Background(BG),Normal Skin(N),Diabetic(D),Pressure(P),Surgical(S),and Venous(V).The model’s architecture,optimized through experiments with varying batch sizes and epochs,ensures efficient deployment in resource-constrained environments.The model’s architecture is discussed in detail.The visual representation of different blocks of the model is also presented.The visual results of training and validation are shown.Our experiments emphasize the model’s ability to classify wounds with high precision and recall,leveraging its lightweight architecture for efficient computation.The findings demonstrate that fine-tuning hyperparameters has a significant impact on the model’s detection performance,making it suitable for real-world medical applications.This research contributes to advancing automated wound classification through deep learning,while addressing challenges such as dataset imbalance and classification intricacies.We conducted a comprehensive evaluation of YOLO11n for wound classification across multiple configurations,including 6,5,4,and 3-way classification,using the AZH dataset.YOLO11n acquires the highest F1 score and mean Average Precision of 0.836 and 0.893 for classifying wounds into six classes,respectively.It outperforms the existing methods in classifying wounds using the AZH dataset.Moreover,Gradient-weighted Class Activation Mapping(Grad-CAM)is applied to the YOLO11n model to visualize class-relevant regions in wound images.展开更多
Fundoscopic diagnosis involves assessing the proper functioning of the eye’s nerves,blood vessels,retinal health,and the impact of diabetes on the optic nerves.Fundus disorders are a major global health concern,affec...Fundoscopic diagnosis involves assessing the proper functioning of the eye’s nerves,blood vessels,retinal health,and the impact of diabetes on the optic nerves.Fundus disorders are a major global health concern,affecting millions of people worldwide due to their widespread occurrence.Fundus photography generates machine-based eye images that assist in diagnosing and treating ocular diseases such as diabetic retinopathy.As a result,accurate fundus detection is essential for early diagnosis and effective treatment,helping to prevent severe complications and improve patient outcomes.To address this need,this article introduces a Derivative Model for Fundus Detection using Deep NeuralNetworks(DMFD-DNN)to enhance diagnostic precision.Thismethod selects key features for fundus detection using the least derivative,which identifies features correlating with stored fundus images.Feature filtering relies on the minimum derivative,determined by extracting both similar and varying textures.In this research,the DNN model was integrated with the derivative model.Fundus images were segmented,features were extracted,and the DNN was iteratively trained to identify fundus regions reliably.The goal was to improve the precision of fundoscopic diagnosis by training the DNN incrementally,taking into account the least possible derivative across iterations,and using outputs from previous cycles.The hidden layer of the neural network operates on the most significant derivative,which may reduce precision across iterations.These derivatives are treated as inaccurate,and the model is subsequently trained using selective features and their corresponding extractions.The proposed model outperforms previous techniques in detecting fundus regions,achieving 94.98%accuracy and 91.57%sensitivity,with a minimal error rate of 5.43%.It significantly reduces feature extraction time to 1.462 s and minimizes computational overhead,thereby improving operational efficiency and scalability.Ultimately,the proposed model enhances diagnostic precision and reduces errors,leading to more effective fundus dysfunction diagnosis and treatment.展开更多
Deep learning has been widely used in the field of mammographic image classification owing to its superiority in automatic feature extraction.However,general deep learning models cannot achieve very satisfactory class...Deep learning has been widely used in the field of mammographic image classification owing to its superiority in automatic feature extraction.However,general deep learning models cannot achieve very satisfactory classification results on mammographic images because these models are not specifically designed for mammographic images and do not take the specific traits of these images into account.To exploit the essential discriminant information of mammographic images,we propose a novel classification method based on a convolutional neural network.Specifically,the proposed method designs two branches to extract the discriminative features from mammographic images from the mediolateral oblique and craniocaudal(CC)mammographic views.The features extracted from the two-view mammographic images contain complementary information that enables breast cancer to be more easily distinguished.Moreover,the attention block is introduced to capture the channel-wise information by adjusting the weight of each feature map,which is beneficial to emphasising the important features of mammographic images.Furthermore,we add a penalty term based on the fuzzy cluster algorithm to the cross-entropy function,which improves the generalisation ability of the classification model by maximising the interclass distance and minimising the intraclass distance of the samples.The experimental results on The Digital database for Screening Mammography INbreast and MIAS mammography databases illustrate that the proposed method achieves the best classification performance and is more robust than the compared state-ofthe-art classification methods.展开更多
Eye health has become a global health concern and attracted broad attention.Over the years,researchers have proposed many state-of-the-art convolutional neural networks(CNNs)to assist ophthalmologists in diagnosing oc...Eye health has become a global health concern and attracted broad attention.Over the years,researchers have proposed many state-of-the-art convolutional neural networks(CNNs)to assist ophthalmologists in diagnosing ocular diseases efficiently and precisely.However,most existing methods were dedicated to constructing sophisticated CNNs,inevitably ignoring the trade-off between performance and model complexity.To alleviate this paradox,this paper proposes a lightweight yet efficient network architecture,mixeddecomposed convolutional network(MDNet),to recognise ocular diseases.In MDNet,we introduce a novel mixed-decomposed depthwise convolution method,which takes advantage of depthwise convolution and depthwise dilated convolution operations to capture low-resolution and high-resolution patterns by using fewer computations and fewer parameters.We conduct extensive experiments on the clinical anterior segment optical coherence tomography(AS-OCT),LAG,University of California San Diego,and CIFAR-100 datasets.The results show our MDNet achieves a better trade-off between the performance and model complexity than efficient CNNs including MobileNets and MixNets.Specifically,our MDNet outperforms MobileNets by 2.5%of accuracy by using 22%fewer parameters and 30%fewer computations on the AS-OCT dataset.展开更多
Retinal vessel segmentation is a challenging medical task owing to small size of dataset,micro blood vessels and low image contrast.To address these issues,we introduce a novel convolutional neural network in this pap...Retinal vessel segmentation is a challenging medical task owing to small size of dataset,micro blood vessels and low image contrast.To address these issues,we introduce a novel convolutional neural network in this paper,which takes the advantage of both adversarial learning and recurrent neural network.An iterative design of network with recurrent unit is performed to refine the segmentation results from input retinal image gradually.Recurrent unit preserves high-level semantic information for feature reuse,so as to output a sufficiently refined segmentation map instead of a coarse mask.Moreover,an adversarial loss is imposing the integrity and connectivity constraints on the segmented vessel regions,thus greatly reducing topology errors of segmentation.The experimental results on the DRIVE dataset show that our method achieves area under curve and sensitivity of 98.17%and 80.64%,respectively.Our method achieves superior performance in retinal vessel segmentation compared with other existing state-of-the-art methods.展开更多
Diabetic retinopathy(DR),the main cause of irreversible blindness,is one of the most common complications of diabetes.At present,deep convolutional neural networks have achieved promising performance in automatic DR d...Diabetic retinopathy(DR),the main cause of irreversible blindness,is one of the most common complications of diabetes.At present,deep convolutional neural networks have achieved promising performance in automatic DR detection tasks.The convolution operation of methods is a local cross-correlation operation,whose receptive field de-termines the size of the local neighbourhood for processing.However,for retinal fundus photographs,there is not only the local information but also long-distance dependence between the lesion features(e.g.hemorrhages and exudates)scattered throughout the whole image.The proposed method incorporates correlations between long-range patches into the deep learning framework to improve DR detection.Patch-wise re-lationships are used to enhance the local patch features since lesions of DR usually appear as plaques.The Long-Range unit in the proposed network with a residual structure can be flexibly embedded into other trained networks.Extensive experimental results demon-strate that the proposed approach can achieve higher accuracy than existing state-of-the-art models on Messidor and EyePACS datasets.展开更多
Diagnosing individuals with autism spectrum disorder(ASD)accurately faces great chal-lenges in clinical practice,primarily due to the data's high heterogeneity and limited sample size.To tackle this issue,the auth...Diagnosing individuals with autism spectrum disorder(ASD)accurately faces great chal-lenges in clinical practice,primarily due to the data's high heterogeneity and limited sample size.To tackle this issue,the authors constructed a deep graph convolutional network(GCN)based on variable multi‐graph and multimodal data(VMM‐DGCN)for ASD diagnosis.Firstly,the functional connectivity matrix was constructed to extract primary features.Then,the authors constructed a variable multi‐graph construction strategy to capture the multi‐scale feature representations of each subject by utilising convolutional filters with varying kernel sizes.Furthermore,the authors brought the non‐imaging in-formation into the feature representation at each scale and constructed multiple population graphs based on multimodal data by fully considering the correlation between subjects.After extracting the deeper features of population graphs using the deep GCN(DeepGCN),the authors fused the node features of multiple subgraphs to perform node classification tasks for typical control and ASD patients.The proposed algorithm was evaluated on the Autism Brain Imaging Data Exchange I(ABIDE I)dataset,achieving an accuracy of 91.62%and an area under the curve value of 95.74%.These results demon-strated its outstanding performance compared to other ASD diagnostic algorithms.展开更多
Cancer is one of the leading causes of death in the world,with radiotherapy as one of the treatment options.Radiotherapy planning starts with delineating the affected area from healthy organs,called organs at risk(OAR...Cancer is one of the leading causes of death in the world,with radiotherapy as one of the treatment options.Radiotherapy planning starts with delineating the affected area from healthy organs,called organs at risk(OAR).A new approach to automatic OAR seg-mentation in the chest cavity in Computed Tomography(CT)images is presented.The proposed approach is based on the modified U‐Net architecture with the ResNet‐34 encoder,which is the baseline adopted in this work.The new two‐branch CS‐SA U‐Net architecture is proposed,which consists of two parallel U‐Net models in which self‐attention blocks with cosine similarity as query‐key similarity function(CS‐SA)blocks are inserted between the encoder and decoder,which enabled the use of con-sistency regularisation.The proposed solution demonstrates state‐of‐the‐art performance for the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of a Dice coefficient(oesophagus-0.8714,heart-0.9516,trachea-0.9286,aorta-0.9510)and Hausdorff distance(oesophagus-0.2541,heart-0.1514,trachea-0.1722,aorta-0.1114)and significantly outperforms the baseline.The current approach is demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.展开更多
This paper addresses the common orthopedic trauma of spinal vertebral fractures and aims to enhance doctors’diagnostic efficiency.Therefore,a deep-learning-based automated diagnostic systemwithmulti-label segmentatio...This paper addresses the common orthopedic trauma of spinal vertebral fractures and aims to enhance doctors’diagnostic efficiency.Therefore,a deep-learning-based automated diagnostic systemwithmulti-label segmentation is proposed to recognize the condition of vertebral fractures.The whole spine Computed Tomography(CT)image is segmented into the fracture,normal,and background using U-Net,and the fracture degree of each vertebra is evaluated(Genant semi-qualitative evaluation).The main work of this paper includes:First,based on the spatial configuration network(SCN)structure,U-Net is used instead of the SCN feature extraction network.The attention mechanismandthe residual connectionbetweenthe convolutional layers are added in the local network(LN)stage.Multiple filtering is added in the global network(GN)stage,and each layer of the LN decoder feature map is filtered separately using dot product,and the filtered features are re-convolved to obtain the GN output heatmap.Second,a network model with improved SCN(M-SCN)helps automatically localize the center-of-mass position of each vertebra,and the voxels around each localized vertebra were clipped,eliminating a large amount of redundant information(e.g.,background and other interfering vertebrae)and keeping the vertebrae to be segmented in the center of the image.Multilabel segmentation of the clipped portion was subsequently performed using U-Net.This paper uses VerSe’19,VerSe’20(using only data containing vertebral fractures),and private data(provided by Guizhou Orthopedic Hospital)for model training and evaluation.Compared with the original SCN network,the M-SCN reduced the prediction error rate by 1.09%and demonstrated the effectiveness of the improvement in ablation experiments.In the vertebral segmentation experiment,the Dice Similarity Coefficient(DSC)index reached 93.50%and the Maximum Symmetry Surface Distance(MSSD)index was 4.962 mm,with accuracy and recall of 95.82%and 91.73%,respectively.Fractured vertebrae were also marked as red and normal vertebrae were marked as white in the experiment,and the semi-qualitative assessment results of Genant were provided,as well as the results of spinal localization visualization and 3D reconstructed views of the spine to analyze the actual predictive ability of the model.It provides a promising tool for vertebral fracture detection.展开更多
Lightweight deep convolutional neural networks(CNNs)present a good solution to achieve fast and accurate image-guided diagnostic procedures of COVID-19 patients.Recently,advantages of portable Ultrasound(US)imaging su...Lightweight deep convolutional neural networks(CNNs)present a good solution to achieve fast and accurate image-guided diagnostic procedures of COVID-19 patients.Recently,advantages of portable Ultrasound(US)imaging such as simplicity and safe procedures have attracted many radiologists for scanning suspected COVID-19 cases.In this paper,a new framework of lightweight deep learning classifiers,namely COVID-LWNet is proposed to identify COVID-19 and pneumonia abnormalities in US images.Compared to traditional deep learning models,lightweight CNNs showed significant performance of real-time vision applications by using mobile devices with limited hardware resources.Four main lightweight deep learning models,namely MobileNets,ShuffleNets,MENet and MnasNet have been proposed to identify the health status of lungs using US images.Public image dataset(POCUS)was used to validate our proposed COVID-LWNet framework successfully.Three classes of infectious COVID-19,bacterial pneumonia,and the healthy lung were investigated in this study.The results showed that the performance of our proposed MnasNet classifier achieved the best accuracy score and shortest training time of 99.0%and 647.0 s,respectively.This paper demonstrates the feasibility of using our proposed COVID-LWNet framework as a new mobilebased radiological tool for clinical diagnosis of COVID-19 and other lung diseases.展开更多
One of the fast-growing disease affecting women’s health seriously is breast cancer.It is highly essential to identify and detect breast cancer in the earlier stage.This paper used a novel advanced methodology than m...One of the fast-growing disease affecting women’s health seriously is breast cancer.It is highly essential to identify and detect breast cancer in the earlier stage.This paper used a novel advanced methodology than machine learning algorithms such as Deep learning algorithms to classify breast cancer accurately.Deep learning algorithms are fully automatic in learning,extracting,and classifying the features and are highly suitable for any image,from natural to medical images.Existing methods focused on using various conventional and machine learning methods for processing natural and medical images.It is inadequate for the image where the coarse structure matters most.Most of the input images are downscaled,where it is impossible to fetch all the hidden details to reach accuracy in classification.Whereas deep learning algorithms are high efficiency,fully automatic,have more learning capability using more hidden layers,fetch as much as possible hidden information from the input images,and provide an accurate prediction.Hence this paper uses AlexNet from a deep convolution neural network for classifying breast cancer in mammogram images.The performance of the proposed convolution network structure is evaluated by comparing it with the existing algorithms.展开更多
文摘In order to fast transmission and processing of medical images and do not need to install client and plug-ins, the paper designed a kind of medical image reading system based on BS structure. This system improved the existing IWEB in the framework of PACS client image processing, medical image based on the service WEB completion port model. To realize the fast loading images with high concurrency, compared with the traditional WEB PACS, this system has the advantages of no client without plug-in installation, at the same time in the transmission and processing performance image has been greatly improved.
文摘Osteosarcomas are malignant neoplasms derived from undifferentiated osteogenic mesenchymal cells. It causes severe and permanent damage to human tissue and has a high mortality rate. The condition has the capacity to occur in any bone;however, it often impacts long bones like the arms and legs. Prompt identification and prompt intervention are essential for augmenting patient longevity. However, the intricate composition and erratic placement of osteosarcoma provide difficulties for clinicians in accurately determining the scope of the afflicted area. There is a pressing requirement for developing an algorithm that can automatically detect bone tumors with tremendous accuracy. Therefore, in this study, we proposed a novel feature extractor framework associated with a supervised three-class XGBoost algorithm for the detection of osteosarcoma in whole slide histopathology images. This method allows for quicker and more effective data analysis. The first step involves preprocessing the imbalanced histopathology dataset, followed by augmentation and balancing utilizing two techniques: SMOTE and ADASYN. Next, a unique feature extraction framework is used to extract features, which are then inputted into the supervised three-class XGBoost algorithm for classification into three categories: non-tumor, viable tumor, and non-viable tumor. The experimental findings indicate that the proposed model exhibits superior efficiency, accuracy, and a more lightweight design in comparison to other current models for osteosarcoma detection.
文摘The Internet of Multimedia Things(IoMT)refers to a network of interconnected multimedia devices that communicate with each other over the Internet.Recently,smart healthcare has emerged as a significant application of the IoMT,particularly in the context of knowledge‐based learning systems.Smart healthcare systems leverage knowledge‐based learning to become more context‐aware,adaptable,and auditable while maintain-ing the ability to learn from historical data.In smart healthcare systems,devices capture images,such as X‐rays,Magnetic Resonance Imaging.The security and integrity of these images are crucial for the databases used in knowledge‐based learning systems to foster structured decision‐making and enhance the learning abilities of AI.Moreover,in knowledge‐driven systems,the storage and transmission of HD medical images exert a burden on the limited bandwidth of the communication channel,leading to data trans-mission delays.To address the security and latency concerns,this paper presents a lightweight medical image encryption scheme utilising bit‐plane decomposition and chaos theory.The results of the experiment yield entropy,energy,and correlation values of 7.999,0.0156,and 0.0001,respectively.This validates the effectiveness of the encryption system proposed in this paper,which offers high‐quality encryption,a large key space,key sensitivity,and resistance to statistical attacks.
基金supported by Shanghai Technical Service Computing Center of Science and Engineering,Shanghai University.
文摘Carotid artery plaques represent a major contributor to the morbidity and mortality associated with cerebrovascular disease,and their clinical significance is largely determined by the risk linked to plaque vulnerability.Therefore,classifying plaque risk constitutes one of themost critical tasks in the clinicalmanagement of this condition.While classification models derived from individual medical centers have been extensively investigated,these singlecenter models often fail to generalize well to multi-center data due to variations in ultrasound images caused by differences in physician expertise and equipment.To address this limitation,a Dual-Classifier Label Correction Networkmodel(DCLCN)is proposed for the classification of carotid plaque ultrasound images acrossmultiplemedical centers.TheDCLCNdesigns amulti-center domain adaptationmodule that leverages a dual-classifier strategy to extract knowledge from both source and target centers,thereby reducing feature discrepancies through a domain adaptation layer.Additionally,to mitigate the impact of image noise,a label modeling and correction module is introduced to generate pseudo-labels for the target centers and iteratively refine them using an end-to-end correction mechanism.Experiments on the carotid plaque dataset collected fromthreemedical centers demonstrate that the DCLCN achieves commendable performance and robustness.
基金The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number(RGP 2/25/43)Taif University Researchers Supporting Project Number(TURSP-2020/346),Taif University,Taif,Saudi Arabia.
文摘Medical image processing becomes a hot research topic in healthcare sector for effective decision making and diagnoses of diseases.Magnetic resonance imaging(MRI)is a widely utilized tool for the classification and detection of prostate cancer.Since the manual screening process of prostate cancer is difficult,automated diagnostic methods become essential.This study develops a novel Deep Learning based Prostate Cancer Classification(DTL-PSCC)model using MRI images.The presented DTL-PSCC technique encompasses EfficientNet based feature extractor for the generation of a set of feature vectors.In addition,the fuzzy k-nearest neighbour(FKNN)model is utilized for classification process where the class labels are allotted to the input MRI images.Moreover,the membership value of the FKNN model can be optimally tuned by the use of krill herd algorithm(KHA)which results in improved classification performance.In order to demonstrate the good classification outcome of the DTL-PSCC technique,a wide range of simulations take place on benchmark MRI datasets.The extensive comparative results ensured the betterment of the DTL-PSCC technique over the recent methods with the maximum accuracy of 85.09%.
基金supported by National Natural Science Foundation of China(NSFC)(61976123,62072213)Taishan Young Scholars Program of Shandong Provinceand Key Development Program for Basic Research of Shandong Province(ZR2020ZD44).
文摘In intelligent perception and diagnosis of medical equipment,the visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases(e.g.,diabetes and hypertension).Intelligent auxiliary diagnosis of these diseases depends on the accuracy of the retinal vascular segmentation results.To address this challenge,we design a Dual-Branch-UNet framework,which comprises a Dual-Branch encoder structure for feature extraction based on the traditional U-Net model for medical image segmentation.To be more explicit,we utilize a novel parallel encoder made up of various convolutional modules to enhance the encoder portion of the original U-Net.Then,image features are combined at each layer to produce richer semantic data and the model’s capacity is adjusted to various input images.Meanwhile,in the lower sampling section,we give up pooling and conduct the lower sampling by convolution operation to control step size for information fusion.We also employ an attentionmodule in the decoder stage to filter the image noises so as to lessen the response of irrelevant features.Experiments are verified and compared on the DRIVE and ARIA datasets for retinal vessels segmentation.The proposed Dual-Branch-UNet has proved to be superior to other five typical state-of-the-art methods.
基金This work was supported by Kyungnam University Foundation Grant,2020.
文摘Medical image compression is one of the essential technologies to facilitate real-time medical data transmission in remote healthcare applications.In general,image compression can introduce undesired coding artifacts,such as blocking artifacts and ringing effects.In this paper,we proposed a Multi-Scale Feature Attention Network(MSFAN)with two essential parts,which are multi-scale feature extraction layers and feature attention layers to efficiently remove coding artifacts of compressed medical images.Multiscale feature extraction layers have four Feature Extraction(FE)blocks.Each FE block consists of five convolution layers and one CA block for weighted skip connection.In order to optimize the proposed network architectures,a variety of verification tests were conducted using validation dataset.We used Computer Vision Center-Clinic Database(CVC-ClinicDB)consisting of 612 colonoscopy medical images to evaluate the enhancement of image restoration.The proposedMSFAN can achieve improved PSNR gains as high as 0.25 and 0.24 dB on average compared to DnCNNand DCSC,respectively.
文摘Assuring medical images protection and robustness is a compulsory necessity nowadays.In this paper,a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Transform(DWT)with the energy compaction of the Discrete Wavelet Transform(DCT).The multi-level Encryption-based Hybrid Fusion Technique(EbhFT)aims to achieve great advances in terms of imperceptibility and security of medical images.A DWT disintegrated sub-band of a cover image is reformed simultaneously using the DCT transform.Afterwards,a 64-bit hex key is employed to encrypt the host image as well as participate in the second key creation process to encode the watermark.Lastly,a PN-sequence key is formed along with a supplementary key in the third layer of the EbHFT.Thus,the watermarked image is generated by enclosing both keys into DWT and DCT coefficients.The fusions ability of the proposed EbHFT technique makes the best use of the distinct privileges of using both DWT and DCT methods.In order to validate the proposed technique,a standard dataset of medical images is used.Simulation results show higher performance of the visual quality(i.e.,57.65)for the watermarked forms of all types of medical images.In addition,EbHFT robustness outperforms an existing scheme tested for the same dataset in terms of Normalized Correlation(NC).Finally,extra protection for digital images from against illegal replicating and unapproved tampering using the proposed technique.
基金National Natural Science Foundation of China,Grant/Award Numbers:62377026,62201222Knowledge Innovation Program of Wuhan-Shuguang Project,Grant/Award Number:2023010201020382+1 种基金National Key Research and Development Programme of China,Grant/Award Number:2022YFD1700204Fundamental Research Funds for the Central Universities,Grant/Award Numbers:CCNU22QN014,CCNU22JC007,CCNU22XJ034.
文摘Subarachnoid haemorrhage(SAH),mostly caused by the rupture of intracranial aneu-rysm,is a common disease with a high fatality rate.SAH lesions are generally diffusely distributed,showing a variety of scales with irregular edges.The complex characteristics of lesions make SAH segmentation a challenging task.To cope with these difficulties,a u-shaped deformable transformer(UDT)is proposed for SAH segmentation.Specifically,first,a multi-scale deformable attention(MSDA)module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions,where the MSDA module can fuse features in different scales and adjust the attention field of each element dynamically to generate discriminative multi-scale features.Second,the cross deformable attention-based skip connection(CDASC)module is designed to model the irregular edge char-acteristic of SAH lesions,where the CDASC module can utilise the spatial details from encoder features to refine the spatial information of decoder features.Third,the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT.Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets(GlaS and MoNuSeg).Experimental results show that the presented UDT achieves the state-of-the-art performance.
基金funded by the Deanship of Graduate Studies and Scientific Research,Jazan University,Saudi Arabia,through project number:(RG24-S0150).
文摘Wound classification is a critical task in healthcare,requiring accurate and efficient diagnostic tools to support clinicians.In this paper,we investigated the effectiveness of the YOLO11n model in classifying different types of wound images.This study presents the training and evaluation of a lightweight YOLO11n model for automated wound classification using the AZH dataset,which includes six wound classes:Background(BG),Normal Skin(N),Diabetic(D),Pressure(P),Surgical(S),and Venous(V).The model’s architecture,optimized through experiments with varying batch sizes and epochs,ensures efficient deployment in resource-constrained environments.The model’s architecture is discussed in detail.The visual representation of different blocks of the model is also presented.The visual results of training and validation are shown.Our experiments emphasize the model’s ability to classify wounds with high precision and recall,leveraging its lightweight architecture for efficient computation.The findings demonstrate that fine-tuning hyperparameters has a significant impact on the model’s detection performance,making it suitable for real-world medical applications.This research contributes to advancing automated wound classification through deep learning,while addressing challenges such as dataset imbalance and classification intricacies.We conducted a comprehensive evaluation of YOLO11n for wound classification across multiple configurations,including 6,5,4,and 3-way classification,using the AZH dataset.YOLO11n acquires the highest F1 score and mean Average Precision of 0.836 and 0.893 for classifying wounds into six classes,respectively.It outperforms the existing methods in classifying wounds using the AZH dataset.Moreover,Gradient-weighted Class Activation Mapping(Grad-CAM)is applied to the YOLO11n model to visualize class-relevant regions in wound images.
基金supported by the Basic Science Research Program through the National Research Foundation of Korea(NRF)funded by the Ministry of Education(No.2021R1F1A1055408)supported by the Researchers Supporting Project Number(MHIRSP2024005)Almaarefa University,Riyadh,Saudi Arabia.
文摘Fundoscopic diagnosis involves assessing the proper functioning of the eye’s nerves,blood vessels,retinal health,and the impact of diabetes on the optic nerves.Fundus disorders are a major global health concern,affecting millions of people worldwide due to their widespread occurrence.Fundus photography generates machine-based eye images that assist in diagnosing and treating ocular diseases such as diabetic retinopathy.As a result,accurate fundus detection is essential for early diagnosis and effective treatment,helping to prevent severe complications and improve patient outcomes.To address this need,this article introduces a Derivative Model for Fundus Detection using Deep NeuralNetworks(DMFD-DNN)to enhance diagnostic precision.Thismethod selects key features for fundus detection using the least derivative,which identifies features correlating with stored fundus images.Feature filtering relies on the minimum derivative,determined by extracting both similar and varying textures.In this research,the DNN model was integrated with the derivative model.Fundus images were segmented,features were extracted,and the DNN was iteratively trained to identify fundus regions reliably.The goal was to improve the precision of fundoscopic diagnosis by training the DNN incrementally,taking into account the least possible derivative across iterations,and using outputs from previous cycles.The hidden layer of the neural network operates on the most significant derivative,which may reduce precision across iterations.These derivatives are treated as inaccurate,and the model is subsequently trained using selective features and their corresponding extractions.The proposed model outperforms previous techniques in detecting fundus regions,achieving 94.98%accuracy and 91.57%sensitivity,with a minimal error rate of 5.43%.It significantly reduces feature extraction time to 1.462 s and minimizes computational overhead,thereby improving operational efficiency and scalability.Ultimately,the proposed model enhances diagnostic precision and reduces errors,leading to more effective fundus dysfunction diagnosis and treatment.
基金Guangdong Basic and Applied Basic Research Foundation,Grant/Award Number:2019A1515110582Shenzhen Key Laboratory of Visual Object Detection and Recognition,Grant/Award Number:ZDSYS20190902093015527National Natural Science Foundation of China,Grant/Award Number:61876051。
文摘Deep learning has been widely used in the field of mammographic image classification owing to its superiority in automatic feature extraction.However,general deep learning models cannot achieve very satisfactory classification results on mammographic images because these models are not specifically designed for mammographic images and do not take the specific traits of these images into account.To exploit the essential discriminant information of mammographic images,we propose a novel classification method based on a convolutional neural network.Specifically,the proposed method designs two branches to extract the discriminative features from mammographic images from the mediolateral oblique and craniocaudal(CC)mammographic views.The features extracted from the two-view mammographic images contain complementary information that enables breast cancer to be more easily distinguished.Moreover,the attention block is introduced to capture the channel-wise information by adjusting the weight of each feature map,which is beneficial to emphasising the important features of mammographic images.Furthermore,we add a penalty term based on the fuzzy cluster algorithm to the cross-entropy function,which improves the generalisation ability of the classification model by maximising the interclass distance and minimising the intraclass distance of the samples.The experimental results on The Digital database for Screening Mammography INbreast and MIAS mammography databases illustrate that the proposed method achieves the best classification performance and is more robust than the compared state-ofthe-art classification methods.
基金Stable Support Plan Program,Grant/Award Number:20200925174052004Shenzhen Natural Science Fund,Grant/Award Number:JCYJ20200109140820699+2 种基金National Natural Science Foundation of China,Grant/Award Number:82272086Guangdong Provincial Department of Education,Grant/Award Numbers:2020ZDZX3043,SJZLGC202202Guangdong Provincial Key Laboratory,Grant/Award Number:2020B121201001。
文摘Eye health has become a global health concern and attracted broad attention.Over the years,researchers have proposed many state-of-the-art convolutional neural networks(CNNs)to assist ophthalmologists in diagnosing ocular diseases efficiently and precisely.However,most existing methods were dedicated to constructing sophisticated CNNs,inevitably ignoring the trade-off between performance and model complexity.To alleviate this paradox,this paper proposes a lightweight yet efficient network architecture,mixeddecomposed convolutional network(MDNet),to recognise ocular diseases.In MDNet,we introduce a novel mixed-decomposed depthwise convolution method,which takes advantage of depthwise convolution and depthwise dilated convolution operations to capture low-resolution and high-resolution patterns by using fewer computations and fewer parameters.We conduct extensive experiments on the clinical anterior segment optical coherence tomography(AS-OCT),LAG,University of California San Diego,and CIFAR-100 datasets.The results show our MDNet achieves a better trade-off between the performance and model complexity than efficient CNNs including MobileNets and MixNets.Specifically,our MDNet outperforms MobileNets by 2.5%of accuracy by using 22%fewer parameters and 30%fewer computations on the AS-OCT dataset.
文摘Retinal vessel segmentation is a challenging medical task owing to small size of dataset,micro blood vessels and low image contrast.To address these issues,we introduce a novel convolutional neural network in this paper,which takes the advantage of both adversarial learning and recurrent neural network.An iterative design of network with recurrent unit is performed to refine the segmentation results from input retinal image gradually.Recurrent unit preserves high-level semantic information for feature reuse,so as to output a sufficiently refined segmentation map instead of a coarse mask.Moreover,an adversarial loss is imposing the integrity and connectivity constraints on the segmented vessel regions,thus greatly reducing topology errors of segmentation.The experimental results on the DRIVE dataset show that our method achieves area under curve and sensitivity of 98.17%and 80.64%,respectively.Our method achieves superior performance in retinal vessel segmentation compared with other existing state-of-the-art methods.
基金National Natural Science Foundation of China,Grant/Award Numbers:62001141,62272319Science,Technology and Innovation Commission of Shenzhen Municipality,Grant/Award Numbers:GJHZ20210705141812038,JCYJ20210324094413037,JCYJ20210324131800002,RCBS20210609103820029Stable Support Projects for Shenzhen Higher Education Institutions,Grant/Award Number:20220715183602001。
文摘Diabetic retinopathy(DR),the main cause of irreversible blindness,is one of the most common complications of diabetes.At present,deep convolutional neural networks have achieved promising performance in automatic DR detection tasks.The convolution operation of methods is a local cross-correlation operation,whose receptive field de-termines the size of the local neighbourhood for processing.However,for retinal fundus photographs,there is not only the local information but also long-distance dependence between the lesion features(e.g.hemorrhages and exudates)scattered throughout the whole image.The proposed method incorporates correlations between long-range patches into the deep learning framework to improve DR detection.Patch-wise re-lationships are used to enhance the local patch features since lesions of DR usually appear as plaques.The Long-Range unit in the proposed network with a residual structure can be flexibly embedded into other trained networks.Extensive experimental results demon-strate that the proposed approach can achieve higher accuracy than existing state-of-the-art models on Messidor and EyePACS datasets.
基金National Natural Science Foundation of China,Grant/Award Number:62172139Science Research Project of Hebei Province,Grant/Award Number:CXY2024031+3 种基金Natural Science Foundation of Hebei Province,Grant/Award Number:F2022201055Project Funded by China Postdoctoral,Grant/Award Number:2022M713361Natural Science Interdisciplinary Research Program of Hebei University,Grant/Award Number:DXK202102Open Project Program of the National Laboratory of Pattern Recognition,Grant/Award Number:202200007。
文摘Diagnosing individuals with autism spectrum disorder(ASD)accurately faces great chal-lenges in clinical practice,primarily due to the data's high heterogeneity and limited sample size.To tackle this issue,the authors constructed a deep graph convolutional network(GCN)based on variable multi‐graph and multimodal data(VMM‐DGCN)for ASD diagnosis.Firstly,the functional connectivity matrix was constructed to extract primary features.Then,the authors constructed a variable multi‐graph construction strategy to capture the multi‐scale feature representations of each subject by utilising convolutional filters with varying kernel sizes.Furthermore,the authors brought the non‐imaging in-formation into the feature representation at each scale and constructed multiple population graphs based on multimodal data by fully considering the correlation between subjects.After extracting the deeper features of population graphs using the deep GCN(DeepGCN),the authors fused the node features of multiple subgraphs to perform node classification tasks for typical control and ASD patients.The proposed algorithm was evaluated on the Autism Brain Imaging Data Exchange I(ABIDE I)dataset,achieving an accuracy of 91.62%and an area under the curve value of 95.74%.These results demon-strated its outstanding performance compared to other ASD diagnostic algorithms.
基金the PID2022‐137451OB‐I00 and PID2022‐137629OA‐I00 projects funded by the MICIU/AEIAEI/10.13039/501100011033 and by ERDF/EU.
文摘Cancer is one of the leading causes of death in the world,with radiotherapy as one of the treatment options.Radiotherapy planning starts with delineating the affected area from healthy organs,called organs at risk(OAR).A new approach to automatic OAR seg-mentation in the chest cavity in Computed Tomography(CT)images is presented.The proposed approach is based on the modified U‐Net architecture with the ResNet‐34 encoder,which is the baseline adopted in this work.The new two‐branch CS‐SA U‐Net architecture is proposed,which consists of two parallel U‐Net models in which self‐attention blocks with cosine similarity as query‐key similarity function(CS‐SA)blocks are inserted between the encoder and decoder,which enabled the use of con-sistency regularisation.The proposed solution demonstrates state‐of‐the‐art performance for the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of a Dice coefficient(oesophagus-0.8714,heart-0.9516,trachea-0.9286,aorta-0.9510)and Hausdorff distance(oesophagus-0.2541,heart-0.1514,trachea-0.1722,aorta-0.1114)and significantly outperforms the baseline.The current approach is demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.
文摘This paper addresses the common orthopedic trauma of spinal vertebral fractures and aims to enhance doctors’diagnostic efficiency.Therefore,a deep-learning-based automated diagnostic systemwithmulti-label segmentation is proposed to recognize the condition of vertebral fractures.The whole spine Computed Tomography(CT)image is segmented into the fracture,normal,and background using U-Net,and the fracture degree of each vertebra is evaluated(Genant semi-qualitative evaluation).The main work of this paper includes:First,based on the spatial configuration network(SCN)structure,U-Net is used instead of the SCN feature extraction network.The attention mechanismandthe residual connectionbetweenthe convolutional layers are added in the local network(LN)stage.Multiple filtering is added in the global network(GN)stage,and each layer of the LN decoder feature map is filtered separately using dot product,and the filtered features are re-convolved to obtain the GN output heatmap.Second,a network model with improved SCN(M-SCN)helps automatically localize the center-of-mass position of each vertebra,and the voxels around each localized vertebra were clipped,eliminating a large amount of redundant information(e.g.,background and other interfering vertebrae)and keeping the vertebrae to be segmented in the center of the image.Multilabel segmentation of the clipped portion was subsequently performed using U-Net.This paper uses VerSe’19,VerSe’20(using only data containing vertebral fractures),and private data(provided by Guizhou Orthopedic Hospital)for model training and evaluation.Compared with the original SCN network,the M-SCN reduced the prediction error rate by 1.09%and demonstrated the effectiveness of the improvement in ablation experiments.In the vertebral segmentation experiment,the Dice Similarity Coefficient(DSC)index reached 93.50%and the Maximum Symmetry Surface Distance(MSSD)index was 4.962 mm,with accuracy and recall of 95.82%and 91.73%,respectively.Fractured vertebrae were also marked as red and normal vertebrae were marked as white in the experiment,and the semi-qualitative assessment results of Genant were provided,as well as the results of spinal localization visualization and 3D reconstructed views of the spine to analyze the actual predictive ability of the model.It provides a promising tool for vertebral fracture detection.
基金This research received the support from Taif University Researchers Supporting Project Number(TURSP-2020/147),Taif university,Taif,Saudi Arabia.
文摘Lightweight deep convolutional neural networks(CNNs)present a good solution to achieve fast and accurate image-guided diagnostic procedures of COVID-19 patients.Recently,advantages of portable Ultrasound(US)imaging such as simplicity and safe procedures have attracted many radiologists for scanning suspected COVID-19 cases.In this paper,a new framework of lightweight deep learning classifiers,namely COVID-LWNet is proposed to identify COVID-19 and pneumonia abnormalities in US images.Compared to traditional deep learning models,lightweight CNNs showed significant performance of real-time vision applications by using mobile devices with limited hardware resources.Four main lightweight deep learning models,namely MobileNets,ShuffleNets,MENet and MnasNet have been proposed to identify the health status of lungs using US images.Public image dataset(POCUS)was used to validate our proposed COVID-LWNet framework successfully.Three classes of infectious COVID-19,bacterial pneumonia,and the healthy lung were investigated in this study.The results showed that the performance of our proposed MnasNet classifier achieved the best accuracy score and shortest training time of 99.0%and 647.0 s,respectively.This paper demonstrates the feasibility of using our proposed COVID-LWNet framework as a new mobilebased radiological tool for clinical diagnosis of COVID-19 and other lung diseases.
文摘One of the fast-growing disease affecting women’s health seriously is breast cancer.It is highly essential to identify and detect breast cancer in the earlier stage.This paper used a novel advanced methodology than machine learning algorithms such as Deep learning algorithms to classify breast cancer accurately.Deep learning algorithms are fully automatic in learning,extracting,and classifying the features and are highly suitable for any image,from natural to medical images.Existing methods focused on using various conventional and machine learning methods for processing natural and medical images.It is inadequate for the image where the coarse structure matters most.Most of the input images are downscaled,where it is impossible to fetch all the hidden details to reach accuracy in classification.Whereas deep learning algorithms are high efficiency,fully automatic,have more learning capability using more hidden layers,fetch as much as possible hidden information from the input images,and provide an accurate prediction.Hence this paper uses AlexNet from a deep convolution neural network for classifying breast cancer in mammogram images.The performance of the proposed convolution network structure is evaluated by comparing it with the existing algorithms.