Neuronal soma segmentation plays a crucial role in neuroscience applications. However, fine structures, such as boundaries, small-volume neuronal somata, and fibers, are commonly present in cell images, which poses a challenge for accurate segmentation. In this paper, we propose a 3D semantic segmentation network for neuronal soma segmentation to address this issue. Using an encoding-decoding structure, we introduce a Multi-Scale feature extraction and Adaptive Weighting fusion module (MSAW) after each encoding block. The MSAW module not only emphasizes fine structures via an upsampling strategy, but also provides pixel-wise weights to measure the importance of the multi-scale features. Additionally, dynamic convolution is employed instead of normal convolution to better adapt the network to input data with different distributions. The proposed MSAW-based semantic segmentation network (MSAW-Net) was evaluated on three neuronal soma images from mouse brain and one neuronal soma image from macaque brain, demonstrating the efficiency of the proposed method. It achieved an F1 score of 91.8% on the Fezf2-2A-CreER dataset, 97.1% on the LSL-H2B-GFP dataset, 82.8% on the Thy1-EGFP-Mline dataset, and 86.9% on the macaque dataset, improving over the 3D U-Net model by 3.1%, 3.3%, 3.9%, and 2.3%, respectively.
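The abstract above describes the MSAW module only at a high level. As a rough illustration of the general idea — extracting features at several scales, upsampling them back to full resolution, and fusing them with learned pixel-wise weights — the following PyTorch sketch may help; the module layout, kernel sizes, and the 2D (rather than 3D) formulation are assumptions, not the authors' published design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleAdaptiveFusion(nn.Module):
    """Extract features at several scales and fuse them with pixel-wise weights (sketch)."""
    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One 3x3 conv branch per scale (applied on the downsampled map).
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in scales]
        )
        # Predict one weight map per branch from the concatenated branch outputs.
        self.weight_head = nn.Conv2d(channels * len(scales), len(scales), 1)

    def forward(self, x):
        feats = []
        for scale, conv in zip(self.scales, self.branches):
            if scale > 1:
                y = conv(F.avg_pool2d(x, scale))                 # coarse context
                y = F.interpolate(y, size=x.shape[2:], mode="bilinear",
                                  align_corners=False)           # back to full size
            else:
                y = conv(x)                                      # fine-detail branch
            feats.append(y)
        # Softmax over branches gives adaptive, pixel-wise importance weights.
        weights = torch.softmax(self.weight_head(torch.cat(feats, dim=1)), dim=1)
        fused = sum(w * f for w, f in zip(weights.split(1, dim=1), feats))
        return fused + x                                         # residual connection
```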
Scleral vessels on the surface of the human eye can provide valuable information about potential diseases or dysfunctions of specific organs, and vessel segmentation is a key step in characterizing the scleral vessels. However, accurate segmentation of blood vessels in scleral images is a challenging task due to the intricate texture, tenuous structure, and erratic network of the scleral vessels. In this work, we propose a CNN-Transformer hybrid network named SVSNet for automatic scleral vessel segmentation. Following the typical U-shape encoder-decoder architecture, the SVSNet integrates a Sobel edge detection module to provide an edge prior and further combines the Atrous Spatial Pyramid Pooling module to enhance its ability to extract vessels of various sizes. At the end of the encoding path, a vision Transformer module is incorporated to capture the global context and improve the continuity of the vessel network. To validate the effectiveness of the proposed SVSNet, comparative experiments are conducted on two public scleral image datasets, and the results show that the SVSNet outperforms other state-of-the-art models. Further experiments on three public retinal image datasets demonstrate that the SVSNet can be easily applied to other vessel datasets with good generalization capability.
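As an illustration of how a fixed Sobel filter can supply an edge prior to a segmentation network, here is a minimal PyTorch sketch; the channel handling and the way the edge map is injected into SVSNet are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SobelEdgePrior(nn.Module):
    """Append a gradient-magnitude channel computed with fixed Sobel kernels."""
    def __init__(self):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        # Buffers move with the module (e.g. to GPU) but are never trained.
        self.register_buffer("kx", gx.view(1, 1, 3, 3))
        self.register_buffer("ky", gx.t().contiguous().view(1, 1, 3, 3))

    def forward(self, x):                                  # x: (B, 1, H, W) grayscale
        ex = F.conv2d(x, self.kx, padding=1)
        ey = F.conv2d(x, self.ky, padding=1)
        edge = torch.sqrt(ex ** 2 + ey ** 2 + 1e-6)        # gradient magnitude
        return torch.cat([x, edge], dim=1)                 # image + edge-prior channel
```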
Convolutional neural network (CNN)-based technologies have been widely used in medical image segmentation because of their strong representation and generalization abilities. However, due to their inability to effectively capture global information from images, CNNs can easily lose contours and textures in segmentation results. The transformer model, in contrast, can effectively capture long-range dependencies in the image, and combining the CNN and the transformer can effectively extract both local details and global contextual features. Motivated by this, we propose a multi-branch and multi-scale attention network (M2ANet) for medical image segmentation, whose architecture consists of three components. Specifically, in the first component, we construct an adaptive multi-branch patch module for parallel extraction of image features to reduce the information loss caused by downsampling. In the second component, we apply a residual block to the well-known convolutional block attention module to enhance the network's ability to recognize important image features and to alleviate gradient vanishing. In the third component, we design a multi-scale feature fusion module, in which we adopt adaptive average pooling and position encoding to enhance contextual features, and then introduce multi-head attention to further enrich the feature representation. Finally, we validate the effectiveness and feasibility of the proposed M2ANet through comparative experiments on four benchmark medical image segmentation datasets, particularly in the context of preserving contours and textures.
Due to the need for lightweight and efficient network models, deploying semantic segmentation models on mobile robots (MRs) is a formidable task. The fundamental limitations lie in the training performance, the ability to effectively exploit the dataset, and the ability to adapt to complex environments when the model is deployed. By utilizing knowledge distillation techniques, this article strives to overcome these challenges by inheriting the advantages of both the teacher model and the student model. More precisely, the ResNet152-PSP-Net model's characteristics are used to train the ResNet18-PSP-Net model. Pyramid pooling blocks are used to decode multi-scale feature maps, producing a complete semantic map inference. The student model not only preserves the strong segmentation performance of the teacher model but also improves the inference speed of the predictions. The proposed method exhibits a clear advantage over conventional convolutional neural network (CNN) models, as evident from the conducted experiments. Furthermore, the proposed model also shows remarkable improvement in processing speed, measured by latency and throughput, compared with lightweight models such as MobileNetV2 and EfficientNet. The proposed KD-SegNet model obtains an accuracy of 96.3% and a mIoU (mean Intersection over Union) of 77%, outperforming existing models by more than 15% on the same training dataset. The suggested method has an average training time of only 0.51 times that of models in the same field, while still achieving comparable segmentation performance. The resulting semantic segmentation frames are collected to form the motion trajectory for the system in the environment. Overall, this architecture shows great promise for the development of knowledge-based systems for MR navigation.
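A common way to realize the teacher-student transfer described above is a pixel-wise distillation loss that combines softened teacher predictions with the usual hard-label cross-entropy. The sketch below shows this generic formulation; the temperature and weighting are illustrative choices, not values taken from the KD-SegNet paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, target, T=4.0, alpha=0.5):
    """Pixel-wise KD: soft KL term on temperature-scaled logits + hard cross-entropy.

    student_logits, teacher_logits: (B, C, H, W); target: (B, H, W) integer labels.
    T and alpha are illustrative hyper-parameters, not values from the paper.
    """
    hard = F.cross_entropy(student_logits, target)
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),   # teacher is frozen
        reduction="batchmean",
    ) * (T * T)                                          # standard T^2 rescaling
    return alpha * soft + (1.0 - alpha) * hard
```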
Surgical image segmentation serves as the foundation for laparoscopic surgical navigation technology. The indistinct local features of biological tissues in laparoscopic images pose challenges for image segmentation. To address this issue, we develop an image segmentation network tailored for laparoscopic surgery. First, we introduce the Mixed Attention Enhancement (MAE) module, which applies the Channel Attention Enhancement (CAE) module and the Global Feature Enhancement (GFE) module in series. The CAE module enhances the network's perception of prominent channels, allowing feature maps to exhibit clear local features. The GFE module extracts global features from both the height and width dimensions of images and integrates them into three-dimensional features. This improves the network's ability to capture global features, thereby facilitating the inference of regions with indistinct local features. Second, we propose the Multi-scale Feature Fusion (MFF) module, which expands the feature map into various scales, further enlarging the network's receptive field and enhancing the perception of features at multiple scales. In addition, we tested the proposed network on EndoVis 2018 and a human minimally invasive liver resection image segmentation dataset, comparing it against six other advanced image segmentation networks. The comparative results demonstrate that the proposed network achieves the best performance on both datasets, proving its potential to improve surgical image segmentation outcomes.
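One standard way to gather global context separately along the height and width axes, as the GFE description suggests, is strip pooling. The sketch below shows this generic construction in PyTorch; it is an assumed stand-in, not the paper's exact GFE module.

```python
import torch
import torch.nn as nn

class HeightWidthContext(nn.Module):
    """Pool along H and W separately, then broadcast the context back onto the map."""
    def __init__(self, channels):
        super().__init__()
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))   # keep H, squeeze W
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))   # keep W, squeeze H
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):                               # x: (B, C, H, W)
        ctx_h = self.pool_h(x)                          # (B, C, H, 1)
        ctx_w = self.pool_w(x)                          # (B, C, 1, W)
        # Broadcast-add the two 1-D context maps over the full feature map.
        return x + self.proj(ctx_h + ctx_w)
```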
Convolutional neural network (CNN) with the encoder-decoder structure is popular in medical image segmentation due to its excellent local feature extraction ability, but it faces limitations in capturing the global feature. The transformer can extract the global information well, but adapting it to small medical datasets is challenging and its computational complexity can be heavy. In this work, a serial and parallel network is proposed for accurate 3D medical image segmentation by combining CNN and transformer and promoting feature interactions across various semantic levels. The core components of the proposed method include the cross window self-attention based transformer (CWST) and multi-scale local enhanced (MLE) modules. The CWST module enhances global context understanding by partitioning 3D images into non-overlapping windows and calculating sparse global attention between windows. The MLE module selectively fuses features by computing the voxel attention between different branch features, and uses convolution to strengthen the dense local information. Experiments on prostate, atrium, and pancreas MR/CT image datasets consistently demonstrate the advantage of the proposed method over six popular segmentation models in both qualitative evaluation and quantitative indexes such as Dice similarity coefficient, Intersection over Union, 95% Hausdorff distance, and average symmetric surface distance.
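Window-based attention starts by partitioning the feature volume into non-overlapping windows. A minimal sketch of that partition step for 3D volumes is shown below; the window size and the subsequent attention computation are omitted and are not taken from the CWST module itself.

```python
import torch

def window_partition_3d(x, ws):
    """x: (B, C, D, H, W) with D, H, W divisible by ws -> (num_windows*B, ws**3, C)."""
    B, C, D, H, W = x.shape
    x = x.view(B, C, D // ws, ws, H // ws, ws, W // ws, ws)
    # Group the window axes together and flatten each window into a token sequence.
    x = x.permute(0, 2, 4, 6, 3, 5, 7, 1).contiguous()
    return x.view(-1, ws * ws * ws, C)

# Example: 8 windows of 4x4x4 voxels from a (1, 32, 8, 8, 8) volume.
tokens = window_partition_3d(torch.randn(1, 32, 8, 8, 8), ws=4)
print(tokens.shape)   # torch.Size([8, 64, 32])
```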
Quantitative analysis of clinical function parameters from MRI images is crucial for diagnosing and assessing cardiovascular disease. However, the manual calculation of these parameters is challenging due to the high variability among patients and the time-consuming nature of the process. In this study, the authors introduce a framework named MultiJSQ, comprising a feature representation network (FRN) and an indicator prediction network (IEN), which is designed for joint segmentation and quantification. The FRN is tailored for representing global image features, facilitating the direct acquisition of left ventricle (LV) contour images through pixel classification. Additionally, the IEN incorporates specifically designed modules to extract relevant clinical indices. The authors' method considers the interdependence of the different tasks, demonstrating the validity of these relationships and yielding favourable results. Through extensive experiments on cardiac MR images from 145 patients, MultiJSQ achieves impressive outcomes, with low mean absolute errors of 124 mm², 1.72 mm, and 1.21 mm for areas, dimensions, and regional wall thicknesses, respectively, along with a Dice score of 0.908. The experimental findings underscore the excellent performance of the framework in LV segmentation and quantification, highlighting its promising clinical application prospects.
Liver tumor segmentation from computed tomography (CT) images is an essential task for the diagnosis and treatment of liver cancer. However, it is difficult owing to the variability of appearances, fuzzy boundaries, heterogeneous densities, and the varied shapes and sizes of lesions. In this paper, an automatic method based on convolutional neural networks (CNNs) is presented to segment lesions from CT images. CNNs are deep learning models whose convolutional filters can learn hierarchical features from data. We compared the CNN model to popular machine learning algorithms: AdaBoost, Random Forests (RF), and support vector machines (SVM). These classifiers were trained on handcrafted features comprising mean, variance, and contextual features. Experimental evaluation was performed on 30 portal-phase enhanced CT images using leave-one-out cross-validation. The average Dice Similarity Coefficient (DSC), precision, and recall achieved were 80.06% ± 1.63%, 82.67% ± 1.43%, and 84.34% ± 1.61%, respectively. The results show that the CNN method performs better than the other methods and is promising for liver tumor segmentation.
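For reference, the reported overlap metrics can be computed as follows; this is the standard definition, shown here as a small NumPy helper rather than the authors' evaluation code.

```python
import numpy as np

def overlap_metrics(pred, gt, eps=1e-8):
    """Dice similarity coefficient, precision, and recall for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    dice = 2.0 * tp / (pred.sum() + gt.sum() + eps)
    precision = tp / (pred.sum() + eps)
    recall = tp / (gt.sum() + eps)
    return dice, precision, recall

pred = np.array([[0, 1, 1], [0, 1, 0]])
gt   = np.array([[0, 1, 0], [1, 1, 0]])
print(overlap_metrics(pred, gt))   # dice = 2*2/(3+3) ≈ 0.667
```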
AIM: To explore a segmentation algorithm based on deep learning to achieve accurate diagnosis and treatment of patients with retinal fluid. METHODS: A two-dimensional (2D) fully convolutional network for retinal segmentation was employed. In order to address the category imbalance in retinal optical coherence tomography (OCT) images, the network parameters and loss function of the 2D fully convolutional network were modified. However, this network ignores the spatial correlations between corresponding positions in adjacent images. Thus, we proposed a three-dimensional (3D) fully convolutional network for segmentation of retinal OCT images. RESULTS: The algorithm was evaluated according to segmentation accuracy, Kappa coefficient, and F1 score. For the 3D fully convolutional network proposed in this paper, the overall segmentation accuracy is 99.56%, the Kappa coefficient is 98.47%, and the F1 score for retinal fluid is 95.50%. CONCLUSION: OCT image segmentation algorithms based on deep learning are primarily founded on the 2D convolutional network. The 3D network architecture proposed in this paper reduces the influence of category imbalance, realizes end-to-end segmentation of volume images, and achieves optimal segmentation results. The segmentation maps are practically the same as the manual annotations of doctors and can provide doctors with more accurate diagnostic data.
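The loss-function modification for category imbalance is not spelled out in the abstract; a common generic remedy is to weight cross-entropy by inverse class frequency, sketched below as an assumption rather than the paper's actual loss.

```python
import torch
import torch.nn.functional as F

def weighted_ce(logits, target, num_classes):
    """Cross-entropy with inverse-frequency class weights.

    logits: (B, C, H, W) or (B, C, D, H, W); target: matching integer label map.
    """
    counts = torch.bincount(target.flatten(), minlength=num_classes).float()
    weights = counts.sum() / (num_classes * counts.clamp(min=1.0))  # rare classes weigh more
    return F.cross_entropy(logits, target, weight=weights.to(logits.device))
```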
This paper proposes a hybrid technique for color image segmentation. First, an input image is converted to the CIE L*a*b* color space. The color features "a" and "b" of CIE L*a*b* are then fed into fuzzy C-means (FCM) clustering, an unsupervised method. The labels obtained from FCM clustering are used as the target of a supervised feed-forward neural network. The network is trained by the Levenberg-Marquardt back-propagation algorithm, and its performance is evaluated using mean squared error and regression analysis. The main issues in clustering methods are determining the number of clusters and choosing cluster validity measures. This paper presents a co-occurrence matrix based algorithm for finding the number of clusters, and silhouette index values are used for cluster validation. The proposed method is tested on various color images from the Berkeley database. The segmentation results of the proposed method are validated, and the classification accuracy is evaluated by sensitivity, specificity, and accuracy.
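A minimal sketch of the unsupervised stage — converting to CIE L*a*b*, keeping the chroma channels, and clustering them with fuzzy C-means — is given below; the supervised network, cluster-number selection, and validation steps are omitted, and the small FCM loop is a generic implementation rather than the paper's code.

```python
import numpy as np
from skimage import color

def fcm(X, c, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means: returns hard labels and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return U.argmax(axis=1), centers

rgb = np.random.rand(64, 64, 3)                # stand-in for a Berkeley image
lab = color.rgb2lab(rgb)
ab = lab[:, :, 1:].reshape(-1, 2)              # "a" and "b" features per pixel
labels, _ = fcm(ab, c=3)
segmentation = labels.reshape(64, 64)
```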
High-throughput maize phenotyping at both organ and plant levels plays a key role in molecular breeding for increasing crop yields. Although the rapid development of light detection and ranging (LiDAR) provides a new way to characterize three-dimensional (3D) plant structure, there is a need to develop robust algorithms for extracting 3D phenotypic traits from LiDAR data to assist in gene identification and selection. Accurate 3D phenotyping in field environments remains challenging, owing to difficulties in segmentation of organs and individual plants in field terrestrial LiDAR data. We describe a two-stage method that combines both convolutional neural networks (CNNs) and morphological characteristics to segment stems and leaves of individual maize plants in field environments. It initially extracts stem points using the PointCNN model and obtains stem instances by fitting 3D cylinders to the points. It then segments the field LiDAR point cloud into individual plants using local point densities and 3D morphological structures of maize plants. The method was tested using 40 samples from field observations and showed high accuracy in the segmentation of both organs (F-score = 0.8207) and plants (F-score = 0.9909). The effectiveness of terrestrial LiDAR for phenotyping at organ (including leaf area and stem position) and individual plant (including individual height and crown width) levels in field environments was evaluated. The accuracies of derived stem position (position error = 0.0141 m), plant height (R² > 0.99), crown width (R² > 0.90), and leaf area (R² > 0.85) allow investigating plant structural and functional phenotypes in a high-throughput way. This CNN-based solution overcomes the major challenges in organ-level phenotypic trait extraction associated with organ segmentation, and potentially contributes to studies of plant phenomics and precision agriculture.
To overcome the computational burden of processing three-dimensional (3D) medical scans and the lack of spatial information in two-dimensional (2D) medical scans, a novel segmentation method was proposed that integrates the segmentation results of three densely connected 2D convolutional neural networks (2D-CNNs). In order to combine the low-level features and high-level features, we added densely connected blocks in the network structure design so that the low-level features will not be missed as the network layers increase during the learning process. Further, in order to resolve the problem of the blurred boundary of the glioma edema area, we superimposed and fused the T2-weighted fluid-attenuated inversion recovery (FLAIR) modal image and the T2-weighted (T2) modal image to enhance the edema section. For the loss function of network training, we improved the cross-entropy loss function to effectively avoid network over-fitting. On the Multimodal Brain Tumor Image Segmentation Challenge (BraTS) datasets, our method achieves Dice similarity coefficient values of 0.84, 0.82, and 0.83 on the BraTS2018 training set; 0.82, 0.85, and 0.83 on the BraTS2018 validation set; and 0.81, 0.78, and 0.83 on the BraTS2013 testing set in terms of whole tumors, tumor cores, and enhancing cores, respectively. Experimental results showed that the proposed method achieved promising accuracy and fast processing, demonstrating good potential for clinical medicine.
Magnetic Resonance Imaging (MRI) is an important diagnostic technique for early detection of brain tumors, and the classification of brain tumors from MRI images is a challenging research problem because of their different shapes, locations, and image intensities. For successful classification, a segmentation method is required to separate the tumor. Important features are then extracted from the segmented tumor and used to classify it. In this work, an efficient multilevel segmentation method is developed that combines optimal thresholding and the watershed segmentation technique, followed by a morphological operation to separate the tumor. A Convolutional Neural Network (CNN) is then applied for feature extraction, and finally a Kernel Support Vector Machine (KSVM) is used for the resulting classification, as justified by our experimental evaluation. Experimental results show that the proposed method effectively detects and classifies tumors as cancerous or non-cancerous with promising accuracy.
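The classical pre-segmentation pipeline described above (optimal thresholding, watershed, morphology) can be sketched with scikit-image as follows; the specific parameters and the subsequent CNN/KSVM stages are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.morphology import binary_opening, disk
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def threshold_watershed(gray):
    """Otsu threshold, morphological opening, then marker-based watershed."""
    mask = gray > threshold_otsu(gray)                 # optimal global threshold
    mask = binary_opening(mask, disk(2))               # remove small artifacts
    distance = ndi.distance_transform_edt(mask)
    peaks = peak_local_max(distance, labels=mask, min_distance=10)
    markers = np.zeros_like(gray, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=mask)    # separate touching regions
```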
This paper focuses on image segmentation with probabilistic neural networks (PNNs). Back-propagation neural networks (BPNNs) and multilayer perceptron neural networks (MLPs) are also considered in this study. In particular, this paper investigates the implementation of PNNs in image segmentation and the optimal processing of image segmentation with a PNN. A comparison between image segmentation with PNNs and with other neural networks is given. The experimental results show that PNNs can be successfully applied to image segmentation with good results.
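A PNN is essentially a Parzen-window classifier; the sketch below shows that generic construction applied to per-pixel feature vectors, with the smoothing parameter sigma chosen arbitrarily. It is an illustration of the PNN idea, not the paper's implementation.

```python
import numpy as np

def pnn_classify(train_X, train_y, test_X, sigma=0.1):
    """Score each test vector against every exemplar of every class with a Gaussian kernel."""
    classes = np.unique(train_y)
    scores = np.empty((len(test_X), len(classes)))
    for j, c in enumerate(classes):
        Xc = train_X[train_y == c]                       # pattern layer for class c
        d2 = ((test_X[:, None, :] - Xc[None, :, :]) ** 2).sum(axis=2)
        scores[:, j] = np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)  # summation layer
    return classes[scores.argmax(axis=1)]                # decision layer
```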
Few-shot semantic segmentation aims at training a model that can segment novel classes in a query image with only a few densely annotated support exemplars. It remains a challenge because of large intra-class variations between the support and query images. Existing approaches utilize 4D convolutions to mine semantic correspondence between the support and query images. However, they still suffer from heavy computation, sparse correspondence, and large memory. We propose the axial assembled correspondence network (AACNet) to alleviate these issues. The key point of AACNet is the proposed axial assembled 4D kernel, which constructs the basic block for the semantic correspondence encoder (SCE). Furthermore, we propose deblurring equations to provide more robust correspondence for the aforementioned SCE and design a novel fusion module to mix correspondences in a learnable manner. Experiments on PASCAL-5^i reveal that our AACNet achieves a mean intersection-over-union score of 65.9% for 1-shot segmentation and 70.6% for 5-shot segmentation, surpassing the state-of-the-art method by 5.8% and 5.0%, respectively.
In intelligent perception and diagnosis with medical equipment, the visual and morphological changes in retinal vessels are closely related to the severity of cardiovascular diseases (e.g., diabetes and hypertension). Intelligent auxiliary diagnosis of these diseases depends on the accuracy of retinal vascular segmentation results. To address this challenge, we design a Dual-Branch-UNet framework, which comprises a dual-branch encoder structure for feature extraction based on the traditional U-Net model for medical image segmentation. More explicitly, we utilize a novel parallel encoder made up of various convolutional modules to enhance the encoder portion of the original U-Net. Image features are then combined at each layer to produce richer semantic data, and the model's capacity is adjusted to various input images. Meanwhile, in the downsampling stage, we abandon pooling and instead perform downsampling with convolution operations, controlling the step size for information fusion. We also employ an attention module in the decoder stage to filter image noise so as to lessen the response of irrelevant features. Experiments are performed and compared on the DRIVE and ARIA datasets for retinal vessel segmentation. The proposed Dual-Branch-UNet proves to be superior to five other typical state-of-the-art methods.
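The design choice of downsampling by strided convolution instead of pooling can be expressed in a few lines; the block below is a generic sketch, not the exact Dual-Branch-UNet block.

```python
import torch.nn as nn

def down_block(in_ch, out_ch, stride=2):
    """Downsample with a strided 3x3 convolution instead of max pooling."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# A (B, 64, 256, 256) feature map becomes (B, 128, 128, 128) with down_block(64, 128).
```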
Breast cancer is the most well-known threat and the main source of malignancy-related morbidity and mortality throughout the world. It tops all new cancer incidences diagnosed among females. Two features substantially influence the classification accuracy of malignancy and benignity in automated cancer diagnostics: the precision of tumor segmentation and the appropriateness of the extracted attributes required for the diagnosis. In this research, the authors propose a ResU-Net (Residual U-Network) model for breast tumor segmentation. The proposed methodology renders augmented and precise identification of tumor regions and produces accurate breast tumor segmentation in contrast-enhanced MR images. Furthermore, the proposed framework also encompasses the residual network technique, which enhances performance and improves the training process. In addition, the performance of ResU-Net has been experimentally compared with conventional U-Net, FCN8, and FCN32. Algorithm performance is evaluated in terms of Dice coefficient, MIoU (Mean Intersection over Union), accuracy, loss, sensitivity, specificity, and F1 score. Experimental results show that ResU-Net achieved a validation accuracy of 73.22% and a Dice coefficient of 85.32% on the Rider Breast MRI dataset, outperforming the other algorithms used in the experiments.
Medical image segmentation plays an important role in clinical diagnosis, quantitative analysis, and the treatment process. Since 2015, U-Net-based approaches have been widely used for medical image segmentation. The purpose of the U-Net expansive path is to map low-resolution encoder feature maps to full input resolution feature maps. However, the consecutive deconvolution and convolutional operations in the expansive path lead to the loss of some high-level information. More high-level information can make the segmentation more accurate. In this paper, we propose MU-Net, a novel multi-path upsampling convolution network to retain more high-level information. The MU-Net mainly consists of three parts: a contracting path, skip connections, and multi-expansive paths. The proposed MU-Net architecture is evaluated on three different medical imaging datasets. Our experiments show that MU-Net improves the segmentation performance of U-Net-based methods on different datasets. At the same time, the computational efficiency is significantly improved by reducing the number of parameters by more than half.
This paper addresses the problem of real-time object segmentation for a picking system. A region proposal method inspired by human glance behavior, based on a convolutional neural network, is proposed to select promising regions, so that more processing is reserved only for these regions. The speed of object segmentation is significantly improved by the region proposal method. By combining the CNN-based region proposal method with a superpixel method, category and location information can be used to segment objects, and image redundancy is significantly reduced. This considerably reduces processing time and achieves real-time operation. Experiments show that the proposed method can segment the target object of interest in real time on an ordinary laptop.
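One way to combine a region proposal with superpixels is to run the superpixel algorithm only inside the proposed box, so the expensive per-pixel work is confined to the promising region. The sketch below uses SLIC from scikit-image as an assumed stand-in, since the abstract does not name the superpixel method; the CNN proposal step itself is omitted.

```python
import numpy as np
from skimage.segmentation import slic

def segment_proposal(image, box, n_segments=100):
    """Run SLIC superpixels only inside a proposed bounding box of an RGB image.

    box = (row0, col0, row1, col1), e.g. produced by the region proposal stage.
    """
    r0, c0, r1, c1 = box
    labels = slic(image[r0:r1, c0:c1], n_segments=n_segments,
                  compactness=10, start_label=1)
    full = np.zeros(image.shape[:2], dtype=int)          # 0 = outside the proposal
    full[r0:r1, c0:c1] = labels
    return full
```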
An image segmentation algorithm based on the restrained fuzzy Kohonen clustering network (RFKCN) and high-dimension fuzzy features is proposed. The algorithm includes two steps. The first step is the fuzzification of pixels, in which two redundant images are built from the fuzzy mean value and the fuzzy median value. The second step is to construct a three-dimensional (3-D) feature vector from the redundant images and their original images and to cluster the feature vectors through the RFKCN, realizing image segmentation. The proposed algorithm takes into account not only the gray-level distribution information of pixels, but also the relevant information and fuzzy information among neighboring pixels when constructing the 3-D feature space. Based on the combination of the competitiveness, redundancy, and complementarity of the information, the proposed algorithm improves the accuracy of clustering. Theoretical analyses and experimental results demonstrate that the proposed algorithm has good segmentation performance.
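Building the 3-D feature vector described in the first step — the original gray value plus mean-based and median-based redundant images — can be sketched as follows; plain local mean and median filters stand in for the fuzzy operators, and the RFKCN clustering itself is not shown.

```python
import numpy as np
from scipy import ndimage as ndi

def three_d_features(gray, size=3):
    """Stack (gray value, local mean, local median) per pixel: shape (H*W, 3)."""
    gray = gray.astype(float)
    mean_img = ndi.uniform_filter(gray, size=size)       # "fuzzy mean" redundant image
    median_img = ndi.median_filter(gray, size=size)      # "fuzzy median" redundant image
    return np.stack([gray.ravel(), mean_img.ravel(), median_img.ravel()], axis=1)
```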