Journal Articles
27 articles found
The Design and Implementation of a Medical Image Processing System Based on DICOM Format
1
Authors: Jing HUA. International Journal of Technology Management, 2014, Issue 7, pp. 21-23.
To enable fast transmission and processing of medical images without requiring a client or plug-in installation, this paper designs a medical image reading system based on a browser/server (B/S) architecture. The system improves client-side image processing in the existing web-based PACS framework and implements the medical image web service on a completion-port model. It achieves fast image loading under high concurrency; compared with a traditional web PACS, it requires no client or plug-in installation while greatly improving image transmission and processing performance.
Keywords: DICOM, DICOM browser, communication service program, medical image processing
Novel Feature Extractor Framework in Conjunction with Supervised Three Class-XGBoost Algorithm for Osteosarcoma Detection from Whole Slide Medical Histopathology Images
2
Authors: Tanzila Saba, Muhammad Mujahid, Shaha Al-Otaibi, Noor Ayesha, Amjad Rehman Khan. Computers, Materials & Continua, 2025, Issue 2, pp. 3337-3353.
Osteosarcomas are malignant neoplasms derived from undifferentiated osteogenic mesenchymal cells. They cause severe and permanent damage to human tissue and have a high mortality rate. The condition can occur in any bone; however, it often impacts long bones such as the arms and legs. Prompt identification and intervention are essential for extending patient longevity. However, the intricate composition and erratic placement of osteosarcoma make it difficult for clinicians to accurately determine the scope of the afflicted area. There is a pressing need for an algorithm that can automatically detect bone tumors with high accuracy. Therefore, in this study, we proposed a novel feature extractor framework associated with a supervised three-class XGBoost algorithm for the detection of osteosarcoma in whole slide histopathology images. This method allows for quicker and more effective data analysis. The first step involves preprocessing the imbalanced histopathology dataset, followed by augmentation and balancing utilizing two techniques: SMOTE and ADASYN. Next, a unique feature extraction framework is used to extract features, which are then input into the supervised three-class XGBoost algorithm for classification into three categories: non-tumor, viable tumor, and non-viable tumor. The experimental findings indicate that the proposed model exhibits superior efficiency, accuracy, and a more lightweight design in comparison to other current models for osteosarcoma detection.
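The SMOTE balancing step named in this abstract interpolates between a minority sample and one of its nearest minority neighbours. A minimal numpy sketch of that core idea (an illustrative re-implementation on synthetic data, not the authors' code or the imbalanced-learn library):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=3, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    each seed sample toward one of its k nearest minority neighbours."""
    rng = np.random.default_rng(rng)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # k nearest neighbours, excluding self
        j = rng.choice(nbrs)
        lam = rng.random()                 # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

# toy imbalanced minority class: 5 samples, 4 features
X_min = np.arange(20, dtype=float).reshape(5, 4)
X_new = smote_oversample(X_min, n_new=10, rng=0)
print(X_new.shape)  # (10, 4)
```

The synthetic samples lie on segments between existing minority points, so the balanced set stays inside the minority class's feature range; ADASYN differs mainly in biasing the seed choice toward harder samples.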
Keywords: medical image processing, deep learning, healthcare, image classification, histopathology
A novel medical image data protection scheme for smart healthcare system
3
Authors: Mujeeb Ur Rehman, Arslan Shafique, Muhammad Shahbaz Khan, Maha Driss, Wadii Boulila, Yazeed Yasin Ghadi, Suresh Babu Changalasetty, Majed Alhaisoni, Jawad Ahmad. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 4, pp. 821-836.
The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and Magnetic Resonance Imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images exert a burden on the limited bandwidth of the communication channel, leading to data transmission delays. To address the security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The results of the experiment yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
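The two building blocks this abstract names, bit-plane decomposition and a chaotic keystream, can be illustrated in numpy. The logistic-map parameters below are conventional textbook choices, not the paper's actual key schedule:

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit image into 8 binary bit planes (plane 0 = LSB)."""
    return [(img >> b) & 1 for b in range(8)]

def logistic_keystream(n, x0=0.7, r=3.99):
    """Chaotic keystream from the logistic map x -> r*x*(1-x)."""
    x, ks = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1 - x)
        ks[i] = int(x * 256) % 256
    return ks

img = np.arange(16, dtype=np.uint8).reshape(4, 4) * 15
planes = bit_planes(img)
recon = sum(p << b for b, p in enumerate(planes)).astype(np.uint8)  # lossless

ks = logistic_keystream(img.size).reshape(img.shape)
cipher = img ^ ks    # encrypt: XOR with chaotic keystream
plain = cipher ^ ks  # decrypt: XOR again with the same keystream
```

XOR with the keystream is its own inverse, and key sensitivity comes from the logistic map's divergence: a tiny change in `x0` yields a completely different keystream.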
Keywords: data analysis, medical image processing, security
Dual-Classifier Label Correction Network for Carotid Plaque Classification on Multi-Center Ultrasound Images
4
Authors: Louyi Jiang, Sulei Wang, Jiang Xie, Haiya Wang, Wei Shao. Computers, Materials & Continua, 2025, Issue 6, pp. 5445-5460.
Carotid artery plaques represent a major contributor to the morbidity and mortality associated with cerebrovascular disease, and their clinical significance is largely determined by the risk linked to plaque vulnerability. Therefore, classifying plaque risk constitutes one of the most critical tasks in the clinical management of this condition. While classification models derived from individual medical centers have been extensively investigated, these single-center models often fail to generalize well to multi-center data due to variations in ultrasound images caused by differences in physician expertise and equipment. To address this limitation, a Dual-Classifier Label Correction Network model (DCLCN) is proposed for the classification of carotid plaque ultrasound images across multiple medical centers. The DCLCN designs a multi-center domain adaptation module that leverages a dual-classifier strategy to extract knowledge from both source and target centers, thereby reducing feature discrepancies through a domain adaptation layer. Additionally, to mitigate the impact of image noise, a label modeling and correction module is introduced to generate pseudo-labels for the target centers and iteratively refine them using an end-to-end correction mechanism. Experiments on the carotid plaque dataset collected from three medical centers demonstrate that the DCLCN achieves commendable performance and robustness.
Keywords: deep learning, medical image processing, carotid plaque classification, multi-center data
Artificial Intelligence Based Prostate Cancer Classification Model Using Biomedical Images [Cited by 2]
5
Authors: Areej A. Malibari, Reem Alshahrani, Fahd N. Al-Wesabi, Siwar Ben Haj Hassine, Mimouna Abdullah Alkhonaini, Anwer Mustafa Hilal. Computers, Materials & Continua (SCIE, EI), 2022, Issue 8, pp. 3799-3813.
Medical image processing has become a hot research topic in the healthcare sector for effective decision making and disease diagnosis. Magnetic resonance imaging (MRI) is a widely utilized tool for the classification and detection of prostate cancer. Since the manual screening process for prostate cancer is difficult, automated diagnostic methods become essential. This study develops a novel Deep Learning based Prostate Cancer Classification (DTL-PSCC) model using MRI images. The presented DTL-PSCC technique encompasses an EfficientNet-based feature extractor for the generation of a set of feature vectors. In addition, the fuzzy k-nearest neighbour (FKNN) model is utilized for the classification process, where class labels are allotted to the input MRI images. Moreover, the membership value of the FKNN model can be optimally tuned by use of the krill herd algorithm (KHA), which results in improved classification performance. To demonstrate the good classification outcome of the DTL-PSCC technique, a wide range of simulations take place on benchmark MRI datasets. The extensive comparative results ensured the betterment of the DTL-PSCC technique over recent methods, with a maximum accuracy of 85.09%.
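The fuzzy k-NN classifier mentioned above assigns soft class memberships rather than a hard vote: each of the k neighbours contributes its label weighted by inverse distance raised to 2/(m-1), where the fuzzifier m is the kind of parameter the paper tunes with the krill herd algorithm. A generic FKNN sketch (Keller-style, not the authors' implementation):

```python
import numpy as np

def fknn_predict(X_train, y_train, x, k=3, m=2.0, n_classes=2, eps=1e-9):
    """Fuzzy k-NN: return per-class membership of query x."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** (2.0 / (m - 1.0)) + eps)  # inverse-distance weights
    u = np.zeros(n_classes)
    for i, wi in zip(idx, w):
        u[y_train[i]] += wi
    return u / u.sum()                             # memberships sum to 1

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
u = fknn_predict(X, y, np.array([0.05, 0.0]), k=3)
print(u)  # high membership for class 0, low for class 1
```

The returned membership vector quantifies classification confidence, which is what makes tuning m (via a metaheuristic such as KHA) meaningful.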
Keywords: MRI images, prostate cancer, deep learning, medical image processing, metaheuristics, krill herd algorithm
Dual-Branch-UNet: A Dual-Branch Convolutional Neural Network for Medical Image Segmentation [Cited by 2]
6
Authors: Muwei Jian, Ronghua Wu, Hongyu Chen, Lanqi Fu, Chengdong Yang. Computer Modeling in Engineering & Sciences (SCIE, EI), 2023, Issue 10, pp. 705-716.
In intelligent perception and diagnosis with medical equipment, the visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases (e.g., diabetes and hypertension). Intelligent auxiliary diagnosis of these diseases depends on the accuracy of the retinal vascular segmentation results. To address this challenge, we design a Dual-Branch-UNet framework, which comprises a dual-branch encoder structure for feature extraction based on the traditional U-Net model for medical image segmentation. To be more explicit, we utilize a novel parallel encoder made up of various convolutional modules to enhance the encoder portion of the original U-Net. Then, image features are combined at each layer to produce richer semantic data, and the model's capacity is adjusted to various input images. Meanwhile, in the downsampling section, we give up pooling and conduct downsampling by convolution operations with controlled step size for information fusion. We also employ an attention module in the decoder stage to filter image noise so as to lessen the response of irrelevant features. Experiments are verified and compared on the DRIVE and ARIA datasets for retinal vessel segmentation. The proposed Dual-Branch-UNet has proved superior to five other typical state-of-the-art methods.
Keywords: convolutional neural network, medical image processing, retinal vessel segmentation
Artifacts Reduction Using Multi-Scale Feature Attention Network in Compressed Medical Images [Cited by 1]
7
Authors: Seonjae Kim, Dongsan Jun. Computers, Materials & Continua (SCIE, EI), 2022, Issue 2, pp. 3267-3279.
Medical image compression is one of the essential technologies to facilitate real-time medical data transmission in remote healthcare applications. In general, image compression can introduce undesired coding artifacts, such as blocking artifacts and ringing effects. In this paper, we propose a Multi-Scale Feature Attention Network (MSFAN) with two essential parts, multi-scale feature extraction layers and feature attention layers, to efficiently remove coding artifacts from compressed medical images. The multi-scale feature extraction layers have four Feature Extraction (FE) blocks. Each FE block consists of five convolution layers and one CA block for weighted skip connection. In order to optimize the proposed network architecture, a variety of verification tests were conducted using the validation dataset. We used the Computer Vision Center Clinic Database (CVC-ClinicDB), consisting of 612 colonoscopy medical images, to evaluate the enhancement of image restoration. The proposed MSFAN can achieve improved PSNR gains as high as 0.25 and 0.24 dB on average compared to DnCNN and DCSC, respectively.
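The PSNR gains quoted above use the standard definition, 10·log10(peak² / MSE). A quick numpy reference for 8-bit images (not tied to the paper's evaluation code):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(peak ** 2 / mse)

ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] += 64            # one corrupted pixel out of 64
print(round(psnr(ref, noisy), 2))  # 30.07
```

A 0.25 dB average gain, as reported for MSFAN over DnCNN, corresponds to roughly a 5.6% reduction in MSE.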
Keywords: medical image processing, convolutional neural network, deep learning, telemedicine, artifact reduction, image restoration
A Triple-Channel Encrypted Hybrid Fusion Technique to Improve Security of Medical Images [Cited by 1]
8
Authors: Ahmed S. Salama, Mohamed Amr Mokhtar, Mazhar B. Tayel, Esraa Eldesouky, Ahmed Ali. Computers, Materials & Continua (SCIE, EI), 2021, Issue 7, pp. 431-446.
Assuring the protection and robustness of medical images is a compulsory necessity nowadays. In this paper, a novel technique is proposed that fuses the wavelet-induced multi-resolution decomposition of the Discrete Wavelet Transform (DWT) with the energy compaction of the Discrete Cosine Transform (DCT). The multi-level Encryption-based Hybrid Fusion Technique (EbHFT) aims to achieve great advances in terms of imperceptibility and security of medical images. A DWT-decomposed sub-band of a cover image is reformed simultaneously using the DCT transform. Afterwards, a 64-bit hex key is employed to encrypt the host image as well as to participate in the second key-creation process to encode the watermark. Lastly, a PN-sequence key is formed along with a supplementary key in the third layer of the EbHFT. Thus, the watermarked image is generated by enclosing both keys into the DWT and DCT coefficients. The fusion ability of the proposed EbHFT technique makes the best use of the distinct privileges of both DWT and DCT methods. In order to validate the proposed technique, a standard dataset of medical images is used. Simulation results show high visual quality (i.e., 57.65) for the watermarked forms of all types of medical images. In addition, EbHFT robustness outperforms an existing scheme tested on the same dataset in terms of Normalized Correlation (NC). Finally, the proposed technique provides extra protection for digital images against illegal replication and unapproved tampering.
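The DWT stage of such a watermarking pipeline can be illustrated with a one-level Haar transform and a toy additive embedding into the high-frequency sub-band. This is a sketch of the general DWT-domain embedding idea only: the embedding strength (0.5 here) is an assumed parameter, and the real EbHFT additionally applies DCT and three encryption keys.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT: returns (LL, LH, HL, HH) sub-bands."""
    a = img.astype(np.float64)
    rows_lo = (a[:, 0::2] + a[:, 1::2]) / 2
    rows_hi = (a[:, 0::2] - a[:, 1::2]) / 2
    LL = (rows_lo[0::2] + rows_lo[1::2]) / 2
    HL = (rows_lo[0::2] - rows_lo[1::2]) / 2
    LH = (rows_hi[0::2] + rows_hi[1::2]) / 2
    HH = (rows_hi[0::2] - rows_hi[1::2]) / 2
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    rows_lo = np.repeat(LL, 2, axis=0)
    rows_lo[0::2] += HL; rows_lo[1::2] -= HL
    rows_hi = np.repeat(LH, 2, axis=0)
    rows_hi[0::2] += HH; rows_hi[1::2] -= HH
    out = np.repeat(rows_lo, 2, axis=1)
    out[:, 0::2] += rows_hi; out[:, 1::2] -= rows_hi
    return out

img = np.arange(16, dtype=np.float64).reshape(4, 4)
LL, LH, HL, HH = haar_dwt2(img)
recon = haar_idwt2(LL, LH, HL, HH)   # exactly recovers img
HH_marked = HH + 0.5                 # toy additive watermark in the HH band
watermarked = haar_idwt2(LL, LH, HL, HH_marked)
```

Embedding in detail sub-bands keeps the per-pixel distortion small and bounded (here exactly 0.5 per pixel), which is why DWT-domain schemes score well on imperceptibility.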
Keywords: medical image processing, digital image watermarking, discrete wavelet transform, discrete cosine transform, encryption, image fusion, hybrid fusion technique
UDT: U-shaped deformable transformer for subarachnoid haemorrhage image segmentation
9
Authors: Wei Xie, Lianghao Jin, Shiqi Hua, Hao Sun, Bo Sun, Zhigang Tu, Jun Liu. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 3, pp. 756-768.
Subarachnoid haemorrhage (SAH), mostly caused by the rupture of an intracranial aneurysm, is a common disease with a high fatality rate. SAH lesions are generally diffusely distributed, showing a variety of scales with irregular edges. The complex characteristics of lesions make SAH segmentation a challenging task. To cope with these difficulties, a u-shaped deformable transformer (UDT) is proposed for SAH segmentation. Specifically, first, a multi-scale deformable attention (MSDA) module is exploited to model the diffuseness and scale-variant characteristics of SAH lesions, where the MSDA module can fuse features at different scales and adjust the attention field of each element dynamically to generate discriminative multi-scale features. Second, the cross deformable attention-based skip connection (CDASC) module is designed to model the irregular edge characteristic of SAH lesions, where the CDASC module can utilise the spatial details from encoder features to refine the spatial information of decoder features. Third, the MSDA and CDASC modules are embedded into the backbone Res-UNet to construct the proposed UDT. Extensive experiments are conducted on the self-built SAH-CT dataset and two public medical datasets (GlaS and MoNuSeg). Experimental results show that the presented UDT achieves state-of-the-art performance.
Keywords: image segmentation, medical image processing
Efficient Wound Classification Using YOLO11n: A Lightweight Deep Learning Approach
10
Authors: Fathe Jeribi, Ayesha Siddiqa, Hareem Kibriya, Ali Tahir, Nadim Rana. Computers, Materials & Continua, 2025, Issue 10, pp. 955-982.
Wound classification is a critical task in healthcare, requiring accurate and efficient diagnostic tools to support clinicians. In this paper, we investigated the effectiveness of the YOLO11n model in classifying different types of wound images. This study presents the training and evaluation of a lightweight YOLO11n model for automated wound classification using the AZH dataset, which includes six wound classes: Background (BG), Normal Skin (N), Diabetic (D), Pressure (P), Surgical (S), and Venous (V). The model's architecture, optimized through experiments with varying batch sizes and epochs, ensures efficient deployment in resource-constrained environments. The architecture is discussed in detail, with visual representations of the model's blocks and of the training and validation results. Our experiments emphasize the model's ability to classify wounds with high precision and recall, leveraging its lightweight architecture for efficient computation. The findings demonstrate that fine-tuning hyperparameters has a significant impact on the model's detection performance, making it suitable for real-world medical applications. This research contributes to advancing automated wound classification through deep learning, while addressing challenges such as dataset imbalance and classification intricacies. We conducted a comprehensive evaluation of YOLO11n for wound classification across multiple configurations, including 6-, 5-, 4-, and 3-way classification, using the AZH dataset. YOLO11n achieves the highest F1 score and mean Average Precision of 0.836 and 0.893, respectively, for classifying wounds into six classes, outperforming existing methods on the AZH dataset. Moreover, Gradient-weighted Class Activation Mapping (Grad-CAM) is applied to the YOLO11n model to visualize class-relevant regions in wound images.
Keywords: deep learning, medical image processing, diabetic foot ulcer, wound classification, YOLO11
Improving Fundus Detection Precision in Diabetic Retinopathy Using Derivative-Based Deep Neural Networks
11
Authors: Asma Aldrees, Hong Min, Ashit Kumar Dutta, Yousef Ibrahim Daradkeh, Mohd Anjum. Computer Modeling in Engineering & Sciences, 2025, Issue 3, pp. 2487-2511.
Fundoscopic diagnosis involves assessing the proper functioning of the eye's nerves, blood vessels, retinal health, and the impact of diabetes on the optic nerves. Fundus disorders are a major global health concern, affecting millions of people worldwide due to their widespread occurrence. Fundus photography generates machine-based eye images that assist in diagnosing and treating ocular diseases such as diabetic retinopathy. As a result, accurate fundus detection is essential for early diagnosis and effective treatment, helping to prevent severe complications and improve patient outcomes. To address this need, this article introduces a Derivative Model for Fundus Detection using Deep Neural Networks (DMFD-DNN) to enhance diagnostic precision. This method selects key features for fundus detection using the least derivative, which identifies features correlating with stored fundus images. Feature filtering relies on the minimum derivative, determined by extracting both similar and varying textures. In this research, the DNN model was integrated with the derivative model. Fundus images were segmented, features were extracted, and the DNN was iteratively trained to identify fundus regions reliably. The goal was to improve the precision of fundoscopic diagnosis by training the DNN incrementally, taking into account the least possible derivative across iterations and using outputs from previous cycles. The hidden layer of the neural network operates on the most significant derivative, which may reduce precision across iterations. These derivatives are treated as inaccurate, and the model is subsequently trained using selective features and their corresponding extractions. The proposed model outperforms previous techniques in detecting fundus regions, achieving 94.98% accuracy and 91.57% sensitivity, with a minimal error rate of 5.43%. It significantly reduces feature extraction time to 1.462 s and minimizes computational overhead, thereby improving operational efficiency and scalability. Ultimately, the proposed model enhances diagnostic precision and reduces errors, leading to more effective fundus dysfunction diagnosis and treatment.
Keywords: deep neural network, feature extraction, fundus detection, medical image processing
Two-view attention-guided convolutional neural network for mammographic image classification [Cited by 2]
12
Authors: Lilei Sun, Jie Wen, Junqian Wang, Yong Zhao, Bob Zhang, Jian Wu, Yong Xu. CAAI Transactions on Intelligence Technology (SCIE, EI), 2023, Issue 2, pp. 453-467.
Deep learning has been widely used in the field of mammographic image classification owing to its superiority in automatic feature extraction. However, general deep learning models cannot achieve very satisfactory classification results on mammographic images because these models are not specifically designed for mammographic images and do not take the specific traits of these images into account. To exploit the essential discriminant information of mammographic images, we propose a novel classification method based on a convolutional neural network. Specifically, the proposed method designs two branches to extract the discriminative features from mammographic images from the mediolateral oblique (MLO) and craniocaudal (CC) mammographic views. The features extracted from the two-view mammographic images contain complementary information that enables breast cancer to be more easily distinguished. Moreover, an attention block is introduced to capture the channel-wise information by adjusting the weight of each feature map, which is beneficial to emphasising the important features of mammographic images. Furthermore, we add a penalty term based on the fuzzy cluster algorithm to the cross-entropy function, which improves the generalisation ability of the classification model by maximising the interclass distance and minimising the intraclass distance of the samples. The experimental results on the Digital Database for Screening Mammography, INbreast, and MIAS mammography databases illustrate that the proposed method achieves the best classification performance and is more robust than the compared state-of-the-art classification methods.
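The attention block described here, channel-wise re-weighting of feature maps, follows the familiar squeeze-and-excitation pattern: global-average-pool each channel, pass through a small bottleneck, and gate channels with a sigmoid. A numpy sketch with random projection weights purely for illustration (not the paper's trained block):

```python
import numpy as np

def channel_attention(feat, W1, W2):
    """Squeeze-and-excitation style channel re-weighting.
    feat: (C, H, W) feature maps; W1: (C//r, C); W2: (C, C//r)."""
    squeeze = feat.mean(axis=(1, 2))              # global average pool -> (C,)
    hidden = np.maximum(W1 @ squeeze, 0.0)        # ReLU bottleneck
    scale = 1.0 / (1.0 + np.exp(-(W2 @ hidden)))  # sigmoid gate in (0, 1)
    return feat * scale[:, None, None]            # re-weight each channel

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))             # 8 channels, 4x4 maps
W1 = rng.standard_normal((2, 8))                  # reduction ratio r = 4
W2 = rng.standard_normal((8, 2))
out = channel_attention(feat, W1, W2)
```

Because the gate is in (0, 1), the block can only attenuate channels, never amplify them; informative channels are simply attenuated less.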
Keywords: convolutional neural network, deep learning, medical image processing, mammographic image
Mixed-decomposed convolutional network: A lightweight yet efficient convolutional neural network for ocular disease recognition [Cited by 1]
13
Authors: Xiaoqing Zhang, Xiao Wu, Zunjie Xiao, Lingxi Hu, Zhongxi Qiu, Qingyang Sun, Risa Higashita, Jiang Liu. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 2, pp. 319-332.
Eye health has become a global health concern and attracted broad attention. Over the years, researchers have proposed many state-of-the-art convolutional neural networks (CNNs) to assist ophthalmologists in diagnosing ocular diseases efficiently and precisely. However, most existing methods were dedicated to constructing sophisticated CNNs, inevitably ignoring the trade-off between performance and model complexity. To alleviate this paradox, this paper proposes a lightweight yet efficient network architecture, mixed-decomposed convolutional network (MDNet), to recognise ocular diseases. In MDNet, we introduce a novel mixed-decomposed depthwise convolution method, which takes advantage of depthwise convolution and depthwise dilated convolution operations to capture low-resolution and high-resolution patterns by using fewer computations and fewer parameters. We conduct extensive experiments on the clinical anterior segment optical coherence tomography (AS-OCT), LAG, University of California San Diego, and CIFAR-100 datasets. The results show our MDNet achieves a better trade-off between performance and model complexity than efficient CNNs including MobileNets and MixNets. Specifically, our MDNet outperforms MobileNets by 2.5% of accuracy while using 22% fewer parameters and 30% fewer computations on the AS-OCT dataset.
Keywords: artificial intelligence, deep learning, deep neural networks, image analysis, image classification, medical applications, medical image processing
Retinal Vessel Segmentation via Adversarial Learning and Iterative Refinement [Cited by 1]
14
Authors: 顾闻, 徐奕. Journal of Shanghai Jiaotong University (Science) (EI), 2024, Issue 1, pp. 73-80.
Retinal vessel segmentation is a challenging medical task owing to the small size of datasets, micro blood vessels, and low image contrast. To address these issues, we introduce a novel convolutional neural network in this paper, which takes advantage of both adversarial learning and recurrent neural networks. An iterative network design with a recurrent unit gradually refines the segmentation results from the input retinal image. The recurrent unit preserves high-level semantic information for feature reuse, so as to output a sufficiently refined segmentation map instead of a coarse mask. Moreover, an adversarial loss imposes integrity and connectivity constraints on the segmented vessel regions, thus greatly reducing topology errors of segmentation. The experimental results on the DRIVE dataset show that our method achieves area under curve and sensitivity of 98.17% and 80.64%, respectively. Our method achieves superior performance in retinal vessel segmentation compared with other existing state-of-the-art methods.
Keywords: medical image processing, retinal image segmentation, adversarial learning, iterative refinement
A deep convolutional neural network for diabetic retinopathy detection via mining local and long-range dependence [Cited by 1]
15
Authors: Xiaoling Luo, Wei Wang, Yong Xu, Zhihui Lai, Xiaopeng Jin, Bob Zhang, David Zhang. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 1, pp. 153-166.
Diabetic retinopathy (DR), the main cause of irreversible blindness, is one of the most common complications of diabetes. At present, deep convolutional neural networks have achieved promising performance in automatic DR detection tasks. The convolution operation of such methods is a local cross-correlation operation, whose receptive field determines the size of the local neighbourhood for processing. However, retinal fundus photographs contain not only local information but also long-distance dependence between lesion features (e.g. hemorrhages and exudates) scattered throughout the whole image. The proposed method incorporates correlations between long-range patches into the deep learning framework to improve DR detection. Patch-wise relationships are used to enhance the local patch features, since lesions of DR usually appear as plaques. The Long-Range unit in the proposed network, with a residual structure, can be flexibly embedded into other trained networks. Extensive experimental results demonstrate that the proposed approach can achieve higher accuracy than existing state-of-the-art models on the Messidor and EyePACS datasets.
Keywords: image classification, medical image processing, pattern recognition
DeepGCN based on variable multi-graph and multimodal data for ASD diagnosis [Cited by 1]
16
Authors: Shuaiqi Liu, Siqi Wang, Chaolei Sun, Bing Li, Shuihua Wang, Fei Li. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 4, pp. 879-893.
Diagnosing individuals with autism spectrum disorder (ASD) accurately faces great challenges in clinical practice, primarily due to the data's high heterogeneity and limited sample size. To tackle this issue, the authors constructed a deep graph convolutional network (GCN) based on variable multi-graph and multimodal data (VMM-DGCN) for ASD diagnosis. Firstly, the functional connectivity matrix was constructed to extract primary features. Then, the authors constructed a variable multi-graph construction strategy to capture the multi-scale feature representations of each subject by utilising convolutional filters with varying kernel sizes. Furthermore, the authors brought the non-imaging information into the feature representation at each scale and constructed multiple population graphs based on multimodal data by fully considering the correlation between subjects. After extracting the deeper features of population graphs using the deep GCN (DeepGCN), the authors fused the node features of multiple subgraphs to perform node classification tasks for typical controls and ASD patients. The proposed algorithm was evaluated on the Autism Brain Imaging Data Exchange I (ABIDE I) dataset, achieving an accuracy of 91.62% and an area under the curve value of 95.74%. These results demonstrated its outstanding performance compared to other ASD diagnostic algorithms.
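A single graph-convolution layer of the kind stacked in a DeepGCN follows the standard Kipf-Welling propagation rule, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W). The tiny population graph and weight matrix below are made up for illustration; in the paper, edges would encode similarity between subjects from imaging and non-imaging data:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step with self-loops and symmetric normalisation."""
    A_hat = A + np.eye(len(A))            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)  # ReLU

# 4 subjects in a toy population graph; edges encode pairwise similarity
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)                             # one-hot node features
W = np.full((4, 2), 0.5)                  # toy weight matrix
H1 = gcn_layer(A, H, W)
print(H1.shape)  # (4, 2)
```

Each propagation step mixes a subject's features with those of its graph neighbours, which is how non-imaging similarity between subjects influences per-subject classification.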
Keywords: machine learning, medical image processing, medical signal processing
Improved organs at risk segmentation based on modified U-Net with self-attention and consistency regularisation
17
Authors: Maksym Manko, Anton Popov, Juan Manuel Gorriz, Javier Ramirez. CAAI Transactions on Intelligence Technology (SCIE, EI), 2024, Issue 4, pp. 850-865.
Cancer is one of the leading causes of death in the world, with radiotherapy as one of the treatment options. Radiotherapy planning starts with delineating the affected area from healthy organs, called organs at risk (OAR). A new approach to automatic OAR segmentation in the chest cavity in Computed Tomography (CT) images is presented. The proposed approach is based on the modified U-Net architecture with the ResNet-34 encoder, which is the baseline adopted in this work. A new two-branch CS-SA U-Net architecture is proposed, which consists of two parallel U-Net models in which self-attention blocks with cosine similarity as the query-key similarity function (CS-SA) are inserted between the encoder and decoder, enabling the use of consistency regularisation. The proposed solution demonstrates state-of-the-art performance for the problem of OAR segmentation in CT images on the publicly available SegTHOR benchmark dataset in terms of Dice coefficient (oesophagus: 0.8714, heart: 0.9516, trachea: 0.9286, aorta: 0.9510) and Hausdorff distance (oesophagus: 0.2541, heart: 0.1514, trachea: 0.1722, aorta: 0.1114), and significantly outperforms the baseline. The current approach is demonstrated to be viable for improving the quality of OAR segmentation for radiotherapy planning.
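Using cosine similarity as the query-key score, as in the CS-SA block, only changes the similarity function of standard attention: queries and keys are L2-normalised before the dot product, so scores are bounded in [-1, 1]. A minimal single-head sketch, where the temperature tau is an assumed hyperparameter rather than a value from the paper:

```python
import numpy as np

def cosine_attention(Q, K, V, tau=10.0):
    """Self-attention with cosine similarity as the query-key score."""
    Qn = Q / np.linalg.norm(Q, axis=-1, keepdims=True)
    Kn = K / np.linalg.norm(K, axis=-1, keepdims=True)
    scores = tau * (Qn @ Kn.T)                    # cosine similarity in [-1, 1]
    scores -= scores.max(axis=-1, keepdims=True)  # stabilise softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1
    return attn @ V                               # convex combination of values

rng = np.random.default_rng(1)
Q = rng.standard_normal((5, 8))
K = rng.standard_normal((5, 8))
V = rng.standard_normal((5, 8))
out = cosine_attention(Q, K, V)
```

Bounding the scores decouples attention sharpness from feature magnitude, which tends to stabilise training compared with unnormalised dot-product attention.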
Keywords: 3-D computer vision, deep learning, deep neural networks, image segmentation, medical image processing, object segmentation
Spinal Vertebral Fracture Detection and Fracture Level Assessment Based on Deep Learning
18
Authors: Yuhang Wang, Zhiqin He, Qinmu Wu, Tingsheng Lu, Yu Tang, Maoyun Zhu 《Computers, Materials & Continua》 SCIE EI 2024, Issue 4, pp. 1377-1398 (22 pages)
This paper addresses the common orthopedic trauma of spinal vertebral fractures and aims to enhance doctors' diagnostic efficiency. A deep-learning-based automated diagnostic system with multi-label segmentation is proposed to recognize the condition of vertebral fractures. The whole-spine Computed Tomography (CT) image is segmented into fracture, normal, and background classes using U-Net, and the fracture degree of each vertebra is evaluated (Genant semi-quantitative evaluation). The main work of this paper includes: First, based on the spatial configuration network (SCN) structure, U-Net is used instead of the SCN feature extraction network. An attention mechanism and residual connections between the convolutional layers are added in the local network (LN) stage. Multiple filtering is added in the global network (GN) stage: each layer of the LN decoder feature map is filtered separately using a dot product, and the filtered features are re-convolved to obtain the GN output heatmap. Second, the network model with the improved SCN (M-SCN) automatically localizes the center-of-mass position of each vertebra, and the voxels around each localized vertebra are clipped, eliminating a large amount of redundant information (e.g., background and other interfering vertebrae) and keeping the vertebrae to be segmented in the center of the image. Multi-label segmentation of the clipped portion is subsequently performed using U-Net. This paper uses VerSe'19, VerSe'20 (using only data containing vertebral fractures), and private data (provided by Guizhou Orthopedic Hospital) for model training and evaluation. Compared with the original SCN, the M-SCN reduced the prediction error rate by 1.09%, and ablation experiments demonstrated the effectiveness of the improvement. In the vertebral segmentation experiment, the Dice Similarity Coefficient (DSC) reached 93.50% and the Maximum Symmetric Surface Distance (MSSD) was 4.962 mm, with accuracy and recall of 95.82% and 91.73%, respectively. In the experiments, fractured vertebrae were marked in red and normal vertebrae in white, and the Genant semi-quantitative assessment results were provided, along with spinal localization visualizations and 3D reconstructed views of the spine, to analyze the actual predictive ability of the model. The system provides a promising tool for vertebral fracture detection.
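The DSC figure reported above is the standard overlap metric 2|A∩B| / (|A| + |B|) between the predicted and ground-truth masks. A minimal sketch over flat binary masks, for reference (real evaluation pipelines operate on 3-D volumes, but the formula is the same):

```python
def dice_coefficient(pred, truth):
    """Dice Similarity Coefficient between two flat binary masks.

    pred, truth: equal-length sequences of 0/1 labels.
    Returns 2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks
    are empty (perfect agreement on "nothing to segment").
    """
    inter = sum(p * t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 1.0 if size == 0 else 2.0 * inter / size
```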
Keywords: deep learning, vertebral fracture detection, medical image processing
Lightweight Transfer Learning Models for Ultrasound-Guided Classification of COVID-19 Patients (cited 2 times)
19
Authors: Mohamed Esmail Karar, Omar Reyad, Mohammed Abd-Elnaby, Abdel-Haleem Abdel-Aty, Marwa Ahmed Shouman 《Computers, Materials & Continua》 SCIE EI 2021, Issue 11, pp. 2295-2312 (18 pages)
Lightweight deep convolutional neural networks (CNNs) present a good solution for achieving fast and accurate image-guided diagnostic procedures for COVID-19 patients. Recently, the advantages of portable ultrasound (US) imaging, such as simplicity and safe procedures, have attracted many radiologists to scanning suspected COVID-19 cases. In this paper, a new framework of lightweight deep learning classifiers, named COVID-LWNet, is proposed to identify COVID-19 and pneumonia abnormalities in US images. Compared to traditional deep learning models, lightweight CNNs have shown significant performance in real-time vision applications on mobile devices with limited hardware resources. Four main lightweight deep learning models, namely MobileNets, ShuffleNets, MENet, and MnasNet, are used to identify the health status of lungs from US images. The public POCUS image dataset was used to successfully validate the proposed COVID-LWNet framework. Three classes were investigated in this study: COVID-19 infection, bacterial pneumonia, and healthy lungs. The results showed that the proposed MnasNet classifier achieved the best accuracy score and the shortest training time: 99.0% and 647.0 s, respectively. This paper demonstrates the feasibility of using the proposed COVID-LWNet framework as a new mobile-based radiological tool for the clinical diagnosis of COVID-19 and other lung diseases.
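The "lightweight" label for MobileNet-style models comes largely from depthwise-separable convolutions, which replace one k×k convolution with a depthwise k×k pass followed by a 1×1 pointwise pass. A back-of-the-envelope parameter count makes the savings concrete; the layer sizes below are illustrative, not taken from the paper:

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_params(k, c_in, c_out):
    """Weights in a depthwise-separable replacement:
    depthwise k x k conv (k*k per input channel) followed by a
    1 x 1 pointwise conv that mixes channels."""
    return k * k * c_in + c_in * c_out
```

For a 3×3 layer with 64 input and 128 output channels this gives 73,728 versus 8,768 weights, roughly an 8.4× reduction, which is what makes such backbones practical on mobile hardware.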
Keywords: coronavirus, medical image processing, artificial intelligence, ultrasound
Breast Mammogram Analysis and Classification Using Deep Convolution Neural Network (cited 1 time)
20
Authors: V. Ulagamuthalvi, G. Kulanthaivel, A. Balasundaram, Arun Kumar Sivaraman 《Computer Systems Science & Engineering》 SCIE EI 2022, Issue 10, pp. 275-289 (15 pages)
One of the fast-growing diseases seriously affecting women's health is breast cancer. It is highly essential to identify and detect breast cancer at an early stage. This paper uses deep learning, a more advanced methodology than conventional machine learning algorithms, to classify breast cancer accurately. Deep learning algorithms learn, extract, and classify features fully automatically and are well suited to any image, from natural to medical. Existing methods have focused on various conventional and machine learning techniques for processing natural and medical images, which are inadequate for images where the coarse structure matters most. Most input images are downscaled, making it impossible to recover all the hidden details needed for accurate classification. Deep learning algorithms, by contrast, are highly efficient and fully automatic, have greater learning capacity through more hidden layers, extract as much hidden information as possible from the input images, and provide accurate predictions. Hence, this paper uses AlexNet, a deep convolutional neural network, to classify breast cancer in mammogram images. The performance of the proposed convolutional network is evaluated by comparing it with existing algorithms.
Keywords: medical image processing, deep learning, convolution neural network, breast cancer, feature extraction, classification