Funding: This work was funded by the Researchers Supporting Project Number (RSPD2024R553), King Saud University, Riyadh, Saudi Arabia.
Abstract: In this work, we introduce modifications to the Anam-Net deep neural network (DNN) model for segmenting the optic cup (OC) and optic disc (OD) in retinal fundus images to estimate the cup-to-disc ratio (CDR). The CDR is a reliable measure for the early diagnosis of glaucoma. We developed a lightweight DNN model for OC and OD segmentation in retinal fundus images, based on modifications to Anam-Net and incorporating an anamorphic depth embedding block. To reduce computational complexity, we employ a fixed filter size for all convolution layers in the encoder and decoder stages as the network deepens. This modification significantly reduces the number of trainable parameters, making the model lightweight and suitable for resource-constrained applications. We evaluated the model on two publicly available retinal fundus image databases, RIM-ONE and Drishti-GS, which contain 159 and 101 retinal images, respectively. The results demonstrate promising OC segmentation performance across most standard evaluation metrics, with analogous results for OD segmentation. For OD segmentation on RIM-ONE, we obtain an F1-score (F1), Jaccard coefficient (JC), and overlapping error (OE) of 0.950, 0.9219, and 0.0781, respectively. For OC segmentation on the same database, we achieve 0.8481 (F1), 0.7428 (JC), and 0.2572 (OE). Based on these experimental results and the significantly lower number of trainable parameters, we conclude that the developed model is well suited for the early diagnosis of glaucoma through accurate estimation of the CDR.
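The reported metrics are related: OE = 1 − JC (consistent with the quoted 0.9219/0.0781 and 0.7428/0.2572 pairs), and the CDR follows from the segmented cup and disc masks. The sketch below illustrates one plausible way to compute these quantities from binary masks; the function names and the vertical-extent definition of the CDR are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def dice_f1(pred, gt):
    """F1/Dice score between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard coefficient (IoU); the overlapping error is OE = 1 - JC."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def vertical_cdr(cup_mask, disc_mask):
    """Vertical cup-to-disc ratio from the vertical extents of the two masks
    (one common CDR convention; assumed here for illustration)."""
    cup_rows = np.where(cup_mask.any(axis=1))[0]
    disc_rows = np.where(disc_mask.any(axis=1))[0]
    cup_h = cup_rows.max() - cup_rows.min() + 1
    disc_h = disc_rows.max() - disc_rows.min() + 1
    return cup_h / disc_h
```

With masks in hand, a CDR above a clinical threshold (often cited around 0.6) would flag a fundus image for glaucoma review.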
Funding: The authors extend their appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia, for funding this research work through project number (DRI−KSU−415).
Abstract: The accurate segmentation of retinal vessels is a challenging task due to the presence of various pathologies, the low contrast of thin vessels, and non-uniform illumination. In recent years, encoder-decoder networks have achieved outstanding performance in retinal vessel segmentation, at the cost of high computational complexity. To address these challenges and reduce the computational complexity, we propose a lightweight convolutional neural network (CNN)-based encoder-decoder deep learning model for accurate retinal vessel segmentation. The proposed model consists of an encoder-decoder architecture along with bottleneck layers that perform depth-wise squeezing, followed by full convolution, and finally depth-wise stretching. The inspiration for the proposed model is taken from the recently developed Anam-Net model, which was tested on CT images for COVID-19 identification. For our lightweight model, we use a stack of two 3 × 3 convolution layers (without spatial pooling in between) instead of a single 3 × 3 convolution layer, as proposed in Anam-Net, to increase the receptive field and reduce the trainable parameters. The proposed method includes fewer filters in all convolutional layers than the original Anam-Net and does not increase the number of filters as the resolution decreases. These modifications do not compromise segmentation accuracy, but they do make the architecture significantly lighter in terms of trainable parameters and computation time. The proposed architecture has comparatively fewer parameters (1.01M) than Anam-Net (4.47M), U-Net (31.05M), SegNet (29.50M), and most other recent works. The proposed model does not require any problem-specific pre- or post-processing, nor does it rely on handcrafted features.
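The parameter savings from stacked small kernels can be checked with simple arithmetic: two stacked 3 × 3 convolutions cover the same 5 × 5 receptive field as a single 5 × 5 convolution while needing fewer weights (the classic VGG argument). A minimal sketch, using 64 input/output channels purely as an illustrative assumption:

```python
def conv_params(in_ch, out_ch, k, bias=True):
    """Trainable parameters of a single 2-D convolution layer with k x k kernels:
    out_ch * (in_ch * k * k) weights plus out_ch biases."""
    return out_ch * (in_ch * k * k + (1 if bias else 0))

# Two stacked 3x3 layers vs. one 5x5 layer with the same receptive field.
two_3x3 = 2 * conv_params(64, 64, 3)   # 73,856 parameters
one_5x5 = conv_params(64, 64, 5)       # 102,464 parameters
assert two_3x3 < one_5x5
```

Halving the channel width (as the fewer-filters modification does) reduces the count roughly quadratically, since both `in_ch` and `out_ch` shrink.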
In addition, being efficient in terms of segmentation accuracy as well as lightweight makes the proposed method a suitable candidate for screening platforms at the point of care. We evaluated the proposed model on the open-access datasets DRIVE, STARE, and CHASE_DB. The experimental results show that the proposed model outperforms several state-of-the-art methods, such as U-Net and its variants, the fully convolutional network (FCN), SegNet, CCNet, ResWNet, the residual connection-based encoder-decoder network (RCED-Net), and the scale-space approximated network (SSANet), in terms of {Dice coefficient, sensitivity (SN), accuracy (ACC), area under the ROC curve (AUC)}, with scores of {0.8184, 0.8561, 0.9669, 0.9868} on the DRIVE dataset, {0.8233, 0.8581, 0.9726, 0.9901} on the STARE dataset, and {0.8138, 0.8604, 0.9752, 0.9906} on the CHASE_DB dataset. Additionally, we performed cross-training experiments on the DRIVE and STARE datasets; the results indicate the generalization ability and robustness of the proposed model.
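The SN and ACC figures quoted above follow the standard pixel-wise definitions over a binary confusion matrix. A minimal sketch (the function name is illustrative, not from the paper):

```python
def sensitivity_accuracy(tp, fp, tn, fn):
    """Sensitivity (true-positive rate over vessel pixels) and pixel-wise
    accuracy from the entries of a binary confusion matrix."""
    sn = tp / (tp + fn)
    acc = (tp + tn) / (tp + fp + tn + fn)
    return sn, acc
```

Because vessel pixels are a small minority of each fundus image, ACC is dominated by background true negatives, which is why SN (and AUC) are reported alongside it.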