Sinus floor elevation with a lateral window approach requires bone grafting (BG) to ensure sufficient bone mass, and the BG region must be measured and analysed during follow-up of postoperative patients. However, in cone-beam computed tomography (CBCT) images the BG region is connected to the margin of the maxillary sinus, and its boundary is blurred. Segmentation is usually performed manually by experienced doctors and suffers from low efficiency and low precision. In this study, an auto-segmentation approach based on an atrous spatial pyramid convolution (ASPC) network was applied to the BG region within the maxillary sinus. The ASPC module uses residual connections to compose multiple atrous convolutions, which extract more features at multiple scales. A segmentation network of the BG region with multiple ASPC modules was then established, which effectively improved segmentation performance. Although the training data were limited, our network still achieved good auto-segmentation results, with a Dice coefficient (Dice) of 87.13%, an Intersection over Union (IoU) of 78.01%, and a sensitivity of 95.02%. Compared with other methods, our method achieved a better segmentation effect and effectively reduced misjudgement. It can thus be used to automatically segment the BG region and improve doctors' work efficiency, which is of great importance for preliminary studies on the measurement of postoperative BG within the maxillary sinus.
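The three figures reported above (Dice, IoU, sensitivity) follow directly from the overlap counts between the predicted and ground-truth masks. As an illustrative sketch (not the paper's code), they can be computed from flattened binary masks like this:

```python
# Illustrative sketch: the three segmentation metrics reported for the
# BG region (Dice, IoU, sensitivity), computed from binary masks.
def segmentation_metrics(pred, truth):
    """pred, truth: equal-length sequences of 0/1 pixel labels."""
    tp = sum(p and t for p, t in zip(pred, truth))        # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))    # false positives
    fn = sum(t and not p for p, t in zip(pred, truth))    # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    iou = tp / (tp + fp + fn)
    sensitivity = tp / (tp + fn)
    return dice, iou, sensitivity

# Toy 6-pixel example: tp=3, fp=1, fn=1
pred  = [1, 1, 1, 0, 0, 1]
truth = [1, 1, 0, 0, 1, 1]
dice, iou, sens = segmentation_metrics(pred, truth)  # 0.75, 0.6, 0.75
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU/(1+IoU)), which is why papers often report both from the same confusion counts.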
Automatic crack detection of cement pavement chiefly benefits from the rapid development of deep learning, with convolutional neural networks (CNNs) playing an important role in this field. However, as the performance of crack detection in cement pavement improves, the depth and width of the network structure increase significantly, which necessitates more computing power and storage space. This limitation hampers the practical deployment of crack detection models on various platforms, particularly portable devices such as small mobile devices. To solve these problems, we propose a dual-encoder network architecture that extracts more comprehensive crack feature information and combines cross-fusion modules and coordinate attention mechanisms for more efficient feature fusion. First, we use small-channel convolutions to construct a shallow feature extraction module (SFEM) that extracts low-level crack features from cement pavement images, obtaining more crack information from the shallow features of the images. In addition, we construct a large kernel atrous convolution (LKAC) module to enhance crack information; it incorporates a coordinate attention mechanism to filter out non-crack information and applies large kernel atrous convolutions with different kernels, whose different receptive fields extract more detailed edge and context information. Finally, the three-stage feature maps output by the shallow feature extraction module are cross-fused with the two-stage feature maps output by the large kernel atrous convolution module, fully fusing the shallow features and detailed edge features to obtain the final crack prediction map. We evaluate our method on three public crack datasets: DeepCrack, CFD, and Crack500. Experimental results on the DeepCrack dataset demonstrate the effectiveness of our method compared with state-of-the-art crack detection methods, achieving a Precision (P) of 87.2%, Recall (R) of 87.7%, and F-score (F1) of 87.4%. Thanks to our lightweight crack detection model, the parameter count in real-world detection scenarios is reduced to less than 2M, which also facilitates deployment in portable detection scenarios.
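As a quick sanity check (assuming the standard harmonic-mean definition of F1), the reported F-score is consistent with the reported precision and recall:

```python
# F1 as the harmonic mean of precision and recall.
def f1_score(p, r):
    return 2 * p * r / (p + r)

f1 = f1_score(0.872, 0.877)  # ~0.8745, consistent with the reported 87.4%
```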
With the rapid spread of coronavirus disease 2019 (COVID-19) worldwide, establishing an accurate and fast process to diagnose the disease is important. The routine real-time reverse transcription-polymerase chain reaction (rRT-PCR) test currently used does not provide such high accuracy or speed in the screening process. Deep learning techniques are among the good choices for an accurate and fast COVID-19 screening test. In this study, a new convolutional neural network (CNN) framework for COVID-19 detection using computed tomography (CT) images is proposed. The EfficientNet architecture is applied as the backbone of the proposed network, from which feature maps at different scales are extracted from the input CT scan images. In addition, atrous convolution at different rates is applied to these multi-scale feature maps to generate denser features, which facilitates obtaining COVID-19 findings in CT scan images. The proposed framework is evaluated using a public CT dataset containing 2482 CT scan images from patients of both classes (i.e., COVID-19 and non-COVID-19). To augment the dataset with additional training examples, adversarial example generation is performed. The proposed system demonstrates its superiority over state-of-the-art methods, with values exceeding 99.10% in several metrics such as accuracy, precision, recall, and F1. It also exhibits good robustness when trained on a small portion of the data (20%), with an accuracy of 96.16%.
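The key operation above, atrous (dilated) convolution, samples the input with gaps of `rate - 1` between kernel taps, widening the receptive field without adding parameters. A minimal 1-D sketch (illustrative only, not the paper's implementation):

```python
# 1-D atrous (dilated) convolution, pure Python for clarity.
# A kernel of size k at rate r spans (k - 1) * r + 1 input positions.
def atrous_conv1d(x, kernel, rate):
    k = len(kernel)
    span = (k - 1) * rate + 1  # effective receptive field of the kernel
    return [sum(kernel[j] * x[i + j * rate] for j in range(k))
            for i in range(len(x) - span + 1)]

# A [1, 0, -1] difference kernel at rate 2 compares inputs 4 apart.
y = atrous_conv1d([1, 2, 3, 4, 5, 6], [1, 0, -1], rate=2)  # [-4, -4]
```

At rate 1 this reduces to ordinary valid convolution, which is why the same kernel weights can probe several scales simply by varying the rate.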
Objective To propose two novel deep learning methods for computer-aided tongue diagnosis, covering tongue image segmentation and tongue color classification, and to improve their diagnostic accuracy. Methods LabelMe was used to label the tongue mask, and the Snake model was used to optimize the labeling results; a new dataset was thereby constructed for tongue image segmentation. Tongue colors were marked to build a classification dataset for network training. The Inception + Atrous Spatial Pyramid Pooling (ASPP) + UNet (IAUNet) method was proposed for tongue image segmentation, based on the existing UNet, Inception, and atrous convolution. Moreover, the Tongue Color Classification Net (TCCNet) was constructed with reference to ResNet, Inception, and Triplet-Loss. Several important measurement indexes were selected to evaluate and compare the novel and existing methods for tongue segmentation and tongue color classification: IAUNet was compared with existing mainstream methods such as UNet and DeepLabV3+ for tongue segmentation, and TCCNet was compared with VGG16 and GoogLeNet for tongue color classification. Results IAUNet can accurately segment the tongue from original images. Its Mean Intersection over Union (MIoU) reached 96.30%, and its Mean Pixel Accuracy (MPA), mean Average Precision (mAP), F1-Score, G-Score, and Area Under Curve (AUC) reached 97.86%, 99.18%, 96.71%, 96.82%, and 99.71%, respectively, suggesting that IAUNet produced better segmentation than the other methods, with fewer parameters. Triplet-Loss was applied in the proposed TCCNet to separate the embeddings of different colors. The experiments yielded ideal results, with the F1-Score and mAP of TCCNet reaching 88.86% and 93.49%, respectively. Conclusion IAUNet, based on deep learning, is better for tongue segmentation than traditional methods: it not only produces ideal tongue segmentation but also outperforms the traditional networks PSPNet, SegNet, UNet, and DeepLabV3+. For tongue color classification, the proposed TCCNet had better F1-Score and mAP values than other neural networks such as VGG16 and GoogLeNet.
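The Triplet-Loss mentioned above separates color classes by pulling an anchor embedding toward a same-color example and pushing it away from a different-color example by at least a margin. A minimal sketch (pure Python, illustrative; not the TCCNet implementation):

```python
# Triplet loss: max(d(anchor, positive) - d(anchor, negative) + margin, 0).
# The loss is zero once the negative is at least `margin` farther than
# the positive, which is what separates the embedded color classes.
def triplet_loss(anchor, positive, negative, margin=0.2):
    d = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(d(anchor, positive) - d(anchor, negative) + margin, 0.0)

# Negative already far enough away -> loss is zero.
loss = triplet_loss([0.0, 0.0], [0.0, 0.1], [1.0, 0.0])  # 0.0
```

The margin value 0.2 here is an arbitrary illustrative choice, not one taken from the paper.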
In recent years, scene understanding has gained popularity and significance due to the fast-paced progress in computer vision techniques and technologies. The primary focus of computer-vision-based scene understanding is to label every pixel in an image with the category of the object it belongs to, so segmentation and detection must be combined in a single framework. Many successful computer vision methods have recently been developed to aid scene understanding for a variety of real-world applications. Scene understanding systems typically involve detection and segmentation of different natural and man-made entities. Much recent research has focused on "things" (well-defined objects that have shape, orientation, and size), with less focus on "stuff" classes (amorphous regions that lack a clear shape, size, or other characteristics). Stuff regions describe many aspects of a scene, such as its type, situation, and environment, and can therefore be very helpful in scene understanding. Existing methods still face challenges of computational time, accuracy, and robustness across varying levels of scene complexity: a robust scene understanding method has to deal effectively with imbalanced class distributions, overlapping objects, fuzzy object boundaries, and poorly localized objects. The proposed method performs panoptic segmentation on the Cityscapes dataset. MobileNet-V2, pre-trained on ImageNet, is used as the backbone for feature extraction, combined with the state-of-the-art encoder-decoder architecture of DeepLabV3+ with some customization and optimization. Atrous convolution along with spatial pyramid pooling is also utilized to make the method more accurate and robust. Very promising and encouraging results have been achieved, indicating the potential of the proposed method for robust scene understanding in a fast and reliable way.
Cultivated land extraction is essential for sustainable development and agriculture. In this paper, we propose a semantic segmentation neural network based on the encoder-decoder structure that extracts cultivated land from satellite images for agricultural automation solutions. The encoder consists of two parts: the first is a modified Xception, used as the feature extraction network; the second is atrous convolution, used to expand the receptive field and gather context information to extract richer features. The decoder uses conventional upsampling to restore the original resolution. In addition, we use a combination of BCE and Lovász-hinge as the loss function to optimize the Intersection over Union (IoU). Experimental results show that the proposed network structure can solve the problem of cultivated land extraction in Yinchuan City.
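The BCE half of the combined objective above is the per-pixel binary cross-entropy; a minimal sketch follows (the Lovász-hinge term, which directly optimizes IoU, is omitted here for brevity; this is illustrative, not the paper's code):

```python
import math

# Mean binary cross-entropy over predicted foreground probabilities.
def bce_loss(probs, labels):
    eps = 1e-7  # clamp to avoid log(0)
    total = 0.0
    for p, y in zip(probs, labels):
        p = min(max(p, eps), 1 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(probs)

loss = bce_loss([0.9, 0.2, 0.5], [1, 0, 1])  # ~0.3406
```

BCE gives smooth per-pixel gradients, while the Lovász-hinge term targets the set-level IoU metric directly; combining the two is a common way to get both stable training and a metric-aligned optimum.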
Retinal images play an essential role in the early diagnosis of ophthalmic diseases. Automatic segmentation of retinal vessels in color fundus images is challenging due to the morphological differences between the retinal vessels and the low-contrast background; at the same time, automated models struggle to capture representative and discriminative retinal vascular features. To fully utilize the structural information of the retinal blood vessels, we propose a novel deep learning network called the Pre-Activated Convolution Residual and Triple Attention Mechanism Network (PCRTAM-Net). PCRTAM-Net uses the pre-activated dropout convolution residual method to improve the feature learning ability of the network. In addition, a residual atrous convolution spatial pyramid is integrated at both ends of the network encoder to extract multiscale information and improve blood vessel information flow. A triple attention mechanism is proposed to extract the structural information between vessel contexts and to learn long-range feature dependencies. We evaluate the proposed PCRTAM-Net on four publicly available datasets: DRIVE, CHASE_DB1, STARE, and HRF. Our model achieves state-of-the-art performance of 97.10%, 97.70%, 97.68%, and 97.14% for ACC and 83.05%, 82.26%, 84.64%, and 81.16% for F1, respectively.
Image segmentation is an important basic step in remote sensing interpretation. High-resolution remote sensing images contain complex object information, and the application of traditional segmentation methods is greatly restricted. In this paper, a remote sensing semantic segmentation algorithm based on ResU-Net combined with atrous convolution is proposed. The traditional U-Net semantic segmentation network was improved as the backbone, with residual convolution units replacing the original U-Net convolution units to increase the depth of the network and avoid vanishing gradients. To detect more feature information, a multi-branch atrous convolution module was added between the encoding and decoding modules to extract semantic features, and the dilation rates of the atrous convolutions were modified to give the network a better effect on small target categories. Finally, the remote sensing image was classified pixel by pixel to output the semantic segmentation result. Experimental results show that the accuracy and intersection ratio of the proposed algorithm on the ISPRS Vaihingen dataset are improved, which verifies its effectiveness.
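Why modifying the dilation rate helps the multi-branch module is easy to see from the effective kernel size of an atrous convolution: a k×k kernel at rate r covers k + (k-1)(r-1) input positions per axis without adding weights. A small helper (standard formula; names are illustrative):

```python
# Effective (one-axis) kernel size of a k x k atrous convolution at rate r.
# Larger rates see wider context with the same number of parameters;
# small rates preserve the fine detail needed for small targets.
def effective_kernel(k, rate):
    return k + (k - 1) * (rate - 1)

# A 3x3 kernel across a few branch rates: 3, 5, 9, 17
sizes = [effective_kernel(3, r) for r in (1, 2, 4, 8)]
```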
Funding (BG segmentation study): the National Key Research and Development Program of China (No. 2017YFB1302900); the National Natural Science Foundation of China (Nos. 81971709, M-0019, and 82011530141); the Foundation of Science and Technology Commission of Shanghai Municipality (Nos. 19510712200 and 20490740700); the Shanghai Jiao Tong University Foundation on Medical and Technological Joint Science Research (Nos. ZH2018ZDA15, YG2019ZDA06, and ZH2018QNA23); and the 2020 Key Research Project of Xiamen Municipal Government (No. 3502Z20201030).
Funding (crack detection study): supported by the National Natural Science Foundation of China (No. 62176034); the Science and Technology Research Program of Chongqing Municipal Education Commission (No. KJZD-M202300604); and the Natural Science Foundation of Chongqing (Nos. cstc2021jcyj-msxmX0518 and 2023NSCQ-MSX1781).
Funding (COVID-19 detection study): support provided by the Deanship of Scientific Research at King Saud University through Research Group No. RG-1435-050.
Funding (tongue diagnosis study): Scientific Research Project of the Education Department of Hunan Province (20C1435); Open Fund Project for Computer Science and Technology of Hunan University of Chinese Medicine (2018JK05).
Funding (cultivated land extraction study): Ningxia Hui Autonomous Region Key Research and Development Program project "Research and demonstration application of key technologies for intelligent monitoring of spatial planning based on high-resolution remote sensing" (Project No. 2018YBZD1629).
Funding (retinal vessel segmentation study): supported by the Open Funds of the Guangxi Key Laboratory of Image and Graphic Intelligent Processing under Grant No. GIIP2209; the National Natural Science Foundation of China under Grant Nos. 62172120 and 62002082; and the Natural Science Foundation of Guangxi Province of China under Grant Nos. 2019GXNSFAA245014 and 2020GXNSFBA238014.