Lower back pain is one of the most common medical problems in the world and is experienced by a large proportion of people everywhere. Because of its ability to produce a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging (MRI) is considered the most effective method for imaging the spine. The semantic segmentation of vertebrae plays a major role in the diagnosis of lumbar diseases. It is difficult to semantically separate the vertebrae in MR images from the surrounding variety of tissues, including muscles, ligaments, and intervertebral discs. U-Net is a powerful deep-learning architecture for medical image analysis tasks and achieves high segmentation accuracy. This work proposes a modified U-Net architecture, named MU-Net, containing a Meijering convolutional layer that incorporates the Meijering filter, to perform semantic segmentation of lumbar vertebrae L1 to L5 and the sacral vertebra S1. Pseudo-colour mask images were generated and used as ground truth for training the model. The work was carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset, publicly available from Mendeley Data. For the semantic segmentation of the lumbar vertebrae, the proposed MU-Net model achieves 98.79% pixel accuracy (PA), 98.66% dice similarity coefficient (DSC), 97.36% Jaccard coefficient, and 92.55% mean Intersection over Union (mean IoU) on this dataset.
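As an illustration of the filter at the heart of this architecture, the following minimal sketch applies scikit-image's Meijering ridge filter as a fixed preprocessing channel for a segmentation network. It is a stand-in only, not the paper's Meijering convolutional layer; the function name, sigma values, and two-channel input are assumptions.

```python
# Hypothetical sketch: use the Meijering neuriteness filter as an extra input
# channel for a segmentation network. The paper embeds the filter in a
# "Meijering convolutional layer"; here it is approximated as fixed preprocessing.
import numpy as np
from skimage.filters import meijering

def add_meijering_channel(image: np.ndarray) -> np.ndarray:
    """Stack a grayscale MRI slice with its Meijering filter response.

    image: 2D float array in [0, 1] (e.g., a T1-weighted mid-sagittal slice).
    Returns an array of shape (2, H, W) to feed a 2-channel U-Net.
    """
    response = meijering(image, sigmas=[1, 2, 3], black_ridges=False)
    return np.stack([image, response], axis=0)

# Example with a random stand-in for an MRI slice.
slice_2d = np.random.rand(256, 256).astype(np.float32)
print(add_meijering_channel(slice_2d).shape)  # (2, 256, 256)
```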
Neuronal soma segmentation plays a crucial role in neuroscience applications. However, fine structures such as boundaries, small-volume neuronal somata, and fibers are commonly present in cell images, which poses a challenge for accurate segmentation. In this paper, we propose a 3D semantic segmentation network for neuronal soma segmentation to address this issue. Using an encoding-decoding structure, we introduce a Multi-Scale feature extraction and Adaptive Weighting fusion module (MSAW) after each encoding block. The MSAW module not only emphasizes fine structures via an upsampling strategy, but also provides pixel-wise weights to measure the importance of the multi-scale features. Additionally, dynamic convolution is employed instead of normal convolution to better adapt the network to input data with different distributions. The proposed MSAW-based semantic segmentation network (MSAW-Net) was evaluated on three neuronal soma images from mouse brain and one neuronal soma image from macaque brain, demonstrating the efficiency of the proposed method. It achieved F1 scores of 91.8% on the Fezf2-2A-CreER dataset, 97.1% on the LSL-H2B-GFP dataset, 82.8% on the Thy1-EGFP-Mline dataset, and 86.9% on the macaque dataset, improving over the 3D U-Net model by 3.1%, 3.3%, 3.9%, and 2.3%, respectively.
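The abstract does not spell out the MSAW design, so the sketch below shows only one plausible reading of adaptively weighted multi-scale fusion for 3D features: build multi-scale views of a feature map, predict per-voxel softmax weights with a 1×1×1 convolution, and fuse by weighted sum. The class name, scales, and layer choices are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of multi-scale, adaptively weighted fusion for 3D features,
# loosely following the MSAW description. The paper's actual module may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveWeightFusion3D(nn.Module):
    """Fuse multi-scale views of a 3D feature map with per-voxel softmax weights."""
    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # A 1x1x1 convolution predicts one weight map per scale.
        self.weight_head = nn.Conv3d(channels * len(scales), len(scales), kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = []
        for s in self.scales:
            if s == 1:
                feats.append(x)
            else:
                down = F.avg_pool3d(x, kernel_size=s)
                feats.append(F.interpolate(down, size=x.shape[2:],
                                           mode="trilinear", align_corners=False))
        weights = torch.softmax(self.weight_head(torch.cat(feats, dim=1)), dim=1)
        # Weighted sum over scales, keeping the original channel count.
        return sum(weights[:, i:i + 1] * f for i, f in enumerate(feats))

x = torch.randn(1, 16, 32, 64, 64)
print(AdaptiveWeightFusion3D(16)(x).shape)  # torch.Size([1, 16, 32, 64, 64])
```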
Due to the necessity for lightweight and efficient network models, deploying semantic segmentation models on mobile robots (MRs) is a formidable task. The fundamental limitations of the problem lie in the training performance, the ability to effectively exploit the dataset, and the ability to adapt to complex environments when deploying the model. By utilizing knowledge distillation techniques, this article strives to overcome the above challenges by inheriting the advantages of both the teacher model and the student model. More precisely, the characteristics of the ResNet152-PSP-Net model are used to train the ResNet18-PSP-Net model. Pyramid pooling blocks are utilized to decode multi-scale feature maps, creating a complete semantic map inference. The student model not only preserves the strong segmentation performance of the teacher model but also improves the inference speed of the prediction results. The proposed method exhibits a clear advantage over conventional convolutional neural network (CNN) models, as evident from the conducted experiments. Furthermore, the proposed model also shows a remarkable improvement in processing speed when compared with lightweight models such as MobileNetV2 and EfficientNet in terms of latency and throughput. The proposed KD-SegNet model obtains an accuracy of 96.3% and a mIoU (mean Intersection over Union) of 77%, outperforming existing models by more than 15% on the same training dataset. The proposed method requires an average training time of only about 0.51 times that of models in the same field, while still achieving comparable segmentation performance. The semantic segmentation frames are then collected, forming the motion trajectory for the system in the environment. Overall, this architecture shows great promise for the development of knowledge-based systems for MR navigation.
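The sketch below illustrates the generic response-based distillation idea behind a teacher-student setup like this one: the student matches the teacher's softened per-pixel class distribution in addition to the ground-truth labels. The temperature, loss weighting, and function name are assumptions; the paper's exact distillation losses may differ.

```python
# Minimal sketch of response-based knowledge distillation for semantic segmentation.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """student_logits, teacher_logits: (B, C, H, W); labels: (B, H, W) int64."""
    ce = F.cross_entropy(student_logits, labels, ignore_index=255)
    t = temperature
    kd = F.kl_div(
        F.log_softmax(student_logits / t, dim=1),   # student's softened prediction
        F.softmax(teacher_logits / t, dim=1),       # teacher's softened target
        reduction="batchmean",
    ) * (t * t)
    return alpha * ce + (1.0 - alpha) * kd

s = torch.randn(2, 19, 64, 64, requires_grad=True)   # student output
tch = torch.randn(2, 19, 64, 64)                      # frozen teacher output
y = torch.randint(0, 19, (2, 64, 64))
print(distillation_loss(s, tch, y).item())
```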
In recent years, semantic segmentation on 3D point cloud data has attracted much attention. Unlike 2D images, where pixels are distributed regularly in the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. It is therefore very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods focus either on local feature aggregation or on long-range context dependency, but fail to directly establish a global-local feature extractor for point cloud semantic segmentation tasks. In this paper, we propose a Transformer-based stratified graph convolutional network (SGT-Net), which enlarges the effective receptive field and builds direct long-range dependency. Specifically, we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for the subsequent graph convolutional network (GCN). Secondly, we propose a multi-key self-attention mechanism based on the Transformer to further augment the weights of crucial neighboring relationships and enlarge the effective receptive field. In addition, to further improve the efficiency of the network, we propose a similarity measurement module to determine whether the neighborhood near the center point is effective. We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets. Through ablation experiments and segmentation visualization, we verify that the SGT model can improve the performance of point cloud semantic segmentation.
Image semantic segmentation is an essential technique for studying human behavior through image data. This paper proposes an image semantic segmentation method for human behavior research. Firstly, an end-to-end convolutional neural network architecture is proposed, which consists of a depth-separable jump-connected fully convolutional network and a conditional random field network; jump-connected convolution is then used to classify each pixel in the image, yielding an image semantic segmentation method based on a convolutional neural network. A conditional random field network is then used to improve the segmentation of human-behavior images, and linear and nonlinear modeling methods based on conditional-random-field image semantic segmentation are proposed. Finally, using the proposed segmentation network, the input entrepreneurial image data are semantically segmented to obtain the contour features of the person, and the method is also applied to segmenting images in the medical field. The experimental results show that the image semantic segmentation method is effective. It offers a new way to use image data to study human behavior and can be extended to other research areas.
Aiming at the problem that existing models have a poor segmentation effect on imbalanced data sets with small-scale samples, a bilateral U-Net network model with a spatial attention mechanism is designed. The model uses the lightweight MobileNetV2 as the backbone network for hierarchical feature extraction and proposes an Attentive Pyramid Spatial Attention (APSA) module, in contrast to the Attenuated Spatial Pyramid module, which can increase the receptive field and enhance the information. Finally, a context fusion prediction branch is added that fuses high-semantic and low-semantic prediction results, and the model effectively improves the segmentation accuracy on small data sets. The experimental results on the CamVid data set show that, compared with some existing semantic segmentation networks, the algorithm has a better segmentation effect and segmentation accuracy, with an mIoU of 75.85%. Moreover, to verify the generality of the model and the effectiveness of the APSA module, experiments were conducted on the VOC 2012 data set, where the APSA module improved mIoU by about 12.2%.
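For orientation, the sketch below shows a generic spatial-attention gate (channel pooling, a small convolution, and a sigmoid mask), not the exact APSA module described above; the kernel size and class name are assumptions.

```python
# Generic spatial-attention sketch: pool features across channels, predict a
# spatial mask, and reweight the feature map. Illustrative only, not APSA itself.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_pool = x.mean(dim=1, keepdim=True)          # (B, 1, H, W)
        max_pool = x.max(dim=1, keepdim=True).values    # (B, 1, H, W)
        mask = torch.sigmoid(self.conv(torch.cat([avg_pool, max_pool], dim=1)))
        return x * mask                                  # spatially reweighted features

feat = torch.randn(1, 320, 32, 32)   # e.g., MobileNetV2 high-level features
print(SpatialAttention()(feat).shape)  # torch.Size([1, 320, 32, 32])
```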
Few-shot semantic segmentation aims at training a model that can segment novel classes in a query image with only a few densely annotated support exemplars. It remains a challenge because of large intra-class variations between the support and query images. Existing approaches utilize 4D convolutions to mine semantic correspondence between the support and query images. However, they still suffer from heavy computation, sparse correspondence, and large memory. We propose the axial assembled correspondence network (AACNet) to alleviate these issues. The key point of AACNet is the proposed axial assembled 4D kernel, which constructs the basic block of the semantic correspondence encoder (SCE). Furthermore, we propose deblurring equations to provide more robust correspondence for the aforementioned SCE and design a novel fusion module to mix correspondences in a learnable manner. Experiments on PASCAL-5^i reveal that our AACNet achieves a mean intersection-over-union score of 65.9% for 1-shot segmentation and 70.6% for 5-shot segmentation, surpassing the state-of-the-art method by 5.8% and 5.0%, respectively.
The application of unmanned driving in the Internet of Things is one of the concrete manifestations of the application of artificial intelligence technology. Image semantic segmentation can help the unmanned driving system by achieving road accessibility analysis. Semantic segmentation is also a challenging technology for image understanding and scene parsing. In this paper, we focus on the challenging task of real-time semantic segmentation and propose a novel fast architecture named DuFNet. Starting from the existing work of the Bilateral Segmentation Network (BiSeNet), DuFNet proposes a novel Semantic Information Flow (SIF) structure for context information and a novel Fringe Information Flow (FIF) structure for spatial information. We also propose two kinds of SIF, with cascaded and paralleled structures, respectively. The SIF encodes the input stage by stage in the ResNet18 backbone and provides context information for the feature fusion module. Features from earlier stages usually contain rich low-level details, while later stages carry high-level semantics. The multiple convolutions embedded in the Parallel SIF aggregate the corresponding features among different stages and generate a powerful global context representation with less computational cost. The FIF consists of a pooling layer and an upsampling operator followed by a projection convolution layer; this concise component provides more spatial details for the network. Compared with BiSeNet, our work achieves faster speed and comparable performance, with 72.34% mIoU accuracy and 78 FPS on the Cityscapes dataset based on the ResNet18 backbone.
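Following the sentence describing the FIF (a pooling layer, an upsampling operator, and a projection convolution), the sketch below assembles exactly those three pieces. Kernel sizes, channel widths, and the class name are assumptions; the authors' exact configuration is not given in the abstract.

```python
# Sketch of the Fringe Information Flow idea as described in the abstract:
# pooling, then upsampling, then a projection convolution.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FringeInformationFlow(nn.Module):
    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=3, stride=2, padding=1)
        self.project = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.pool(x)                                       # pooling layer
        y = F.interpolate(y, size=x.shape[2:], mode="bilinear",
                          align_corners=False)                 # upsampling operator
        return self.project(y)                                 # projection convolution

x = torch.randn(1, 64, 128, 256)
print(FringeInformationFlow(64, 128)(x).shape)  # torch.Size([1, 128, 128, 256])
```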
Early detection of the Covid-19 disease is essential because of its high rate of infection, affecting tens of millions of people, and its mortality rate of about 7%. For that purpose, a model of several stages is proposed. The first stage optimizes the images using dynamic adaptive histogram equalization, performs semantic segmentation using DeepLabv3Plus, and then augments the data by flipping it horizontally, rotating it, and then flipping it vertically. The second stage builds a custom convolutional neural network model using several ImageNet pre-trained networks. Finally, the model compares the pre-trained data to the new output, while repeatedly trimming the best-performing models to reduce complexity and improve memory efficiency. Several experiments were done using different techniques and parameters. Accordingly, the proposed model achieved an average accuracy of 99.6% and an area under the curve of 0.996 in Covid-19 detection. This paper discusses how to train a customized intelligent convolutional neural network with various parameters on a set of chest X-rays with an accuracy of 99.6%.
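The sketch below reproduces the first-stage pipeline at a high level: adaptive histogram equalization followed by the three augmentations. OpenCV's CLAHE is used here as a stand-in for the paper's dynamic adaptive histogram equalization, and the clip limit, tile size, and rotation angle are assumptions.

```python
# Sketch of the described preprocessing/augmentation: adaptive histogram
# equalization (CLAHE as a stand-in), then horizontal flip, rotation, vertical flip.
import cv2
import numpy as np

def preprocess_and_augment(gray: np.ndarray, angle_deg: float = 15.0):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    eq = clahe.apply(gray)                       # contrast-limited equalization
    h, w = eq.shape
    rot = cv2.getRotationMatrix2D((w / 2, h / 2), angle_deg, 1.0)
    return [
        eq,
        cv2.flip(eq, 1),                         # horizontal flip
        cv2.warpAffine(eq, rot, (w, h)),         # rotation
        cv2.flip(eq, 0),                         # vertical flip
    ]

xray = (np.random.rand(512, 512) * 255).astype(np.uint8)  # stand-in chest X-ray
print(len(preprocess_and_augment(xray)))  # 4 image variants
```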
In recent years, the Internet of Things (IoT) has gradually developed applications such as collecting sensory data and building intelligent services, which has led to an explosion in mobile data traffic. Meanwhile, with the rapid development of artificial intelligence, semantic communication has attracted great attention as a new communication paradigm. For IoT devices, however, processing image information efficiently in real time is an essential task for the rapid transmission of semantic information. As the number of model parameters in deep learning methods increases, the model inference time on sensor devices continues to grow. In contrast, the Pulse Coupled Neural Network (PCNN) has fewer parameters, making it more suitable for real-time scene tasks such as image segmentation, which lays the foundation for real-time, effective, and accurate image transmission. However, the parameters of the PCNN are determined by trial and error, which limits its application. To overcome this limitation, an Improved Pulse Coupled Neural Network (IPCNN) model is proposed in this work. The IPCNN constructs a connection between the static properties of the input image and the dynamic properties of the neurons, and all its parameters are set adaptively, which avoids the inconvenience of manual setting in traditional methods and improves the adaptability of the parameters to different types of images. Experimental segmentation results demonstrate the validity and efficiency of the proposed self-adaptive parameter setting method of the IPCNN on gray images and natural images from the Matlab and Berkeley Segmentation Datasets. The IPCNN method achieves a better segmentation result without training, providing a new solution for the real-time transmission of image semantic information.
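For readers unfamiliar with the PCNN, the sketch below implements the classical simplified iteration (linking input, internal activity, dynamic threshold, and pulse output). The constants are manual placeholders chosen only to show the update rules; the IPCNN in the abstract derives its parameters adaptively from the image, which is not reproduced here.

```python
# Sketch of the classical (simplified) PCNN iteration; not the adaptive IPCNN.
import numpy as np
from scipy.ndimage import convolve

def pcnn_segment(S: np.ndarray, iters: int = 10,
                 alpha_L: float = 1.0, alpha_E: float = 0.2,
                 beta: float = 0.2, V_L: float = 1.0, V_E: float = 20.0):
    """S: normalized grayscale image in [0, 1]. Returns the final binary pulse map."""
    W = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])              # linking weights
    L = np.zeros_like(S)                         # linking input
    E = np.ones_like(S)                          # dynamic threshold
    Y = np.zeros_like(S)                         # pulse output
    for _ in range(iters):
        L = np.exp(-alpha_L) * L + V_L * convolve(Y, W, mode="constant")
        U = S * (1.0 + beta * L)                 # internal activity (feeding = S)
        Y = (U > E).astype(float)
        E = np.exp(-alpha_E) * E + V_E * Y       # fired neurons raise their threshold
    return Y

img = np.random.rand(128, 128)
print(pcnn_segment(img).sum())
```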
Deep learning technology has shown impressive performance in various vision tasks such as image classification, object detection, and semantic segmentation. In particular, recent advances in deep learning techniques bring encouraging performance to fine-grained image classification, which aims to distinguish subordinate-level categories such as bird species or dog breeds. This task is extremely challenging due to high intra-class and low inter-class variance. In this paper, we review four types of deep learning based fine-grained image classification approaches: general convolutional neural networks (CNNs), part detection based, ensemble of networks based, and visual attention based approaches. Besides, deep learning based semantic segmentation approaches are also covered in this paper; the region proposal based and the fully convolutional networks based approaches for semantic segmentation are introduced respectively.
Unmanned surface vehicles (USVs) are currently a hot research topic in the maritime communication network (MCN), where denoising and semantic segmentation of maritime images taken by USVs have rarely been studied. For the former, autoencoder models have recently been investigated for image denoising, but existing models are too complicated to be suitable for real-time detection on a USV. In this paper, we propose a lightweight autoencoder combined with inception modules for maritime image denoising in different noisy environments and explore the effect of different inception modules on the denoising performance. Furthermore, we complete the semantic segmentation task for maritime images taken by USVs using a pretrained U-Net model with fine-tuning, and compare it with the original U-Net model based on different backbones. Subsequently, we compare the semantic segmentation of noised and denoised maritime images to explore the effect of image noise on segmentation performance. Case studies are provided to prove the feasibility of the proposed denoising and segmentation method. Finally, a simple integrated communication system combining image denoising and segmentation for USVs is shown.
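The sketch below shows the general shape of a lightweight denoising autoencoder with one inception-style block (parallel 1×1, 3×3, and 5×5 branches). The widths, depths, and class names are assumptions and do not reproduce the authors' exact architecture.

```python
# Sketch of a lightweight denoising autoencoder with an inception-style block.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, 1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, 3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, 5, padding=2)

    def forward(self, x):
        return torch.relu(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

class DenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            InceptionBlock(16, 16),                        # 48 channels out
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(48, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

noisy = torch.rand(1, 3, 128, 128)
print(DenoisingAE()(noisy).shape)  # torch.Size([1, 3, 128, 128])
```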
Deep learning based methods have been successfully applied to semantic segmentation of optical remote sensing images. However, as more and more remote sensing data become available, comprehensively utilizing multi-modal remote sensing data to break through the performance bottleneck of single-modal interpretation is a new challenge. In addition, semantic segmentation and height estimation in remote sensing data are two strongly correlated tasks, but existing methods usually study the individual tasks separately, which leads to high computational resource overhead. To this end, we propose a Multi-Task learning framework for Multi-Modal remote sensing images (MM_MT). Specifically, we design a Cross-Modal Feature Fusion (CMFF) method, which aggregates complementary information from different modalities to improve the accuracy of semantic segmentation and height estimation. Besides, a dual-stream multi-task learning method is introduced for Joint Semantic Segmentation and Height Estimation (JSSHE), extracting common features in a shared network to save time and resources, and then learning task-specific features in two task branches. Experimental results on the public multi-modal remote sensing image dataset Potsdam show that, compared to training the two tasks independently, multi-task learning saves 20% of training time and achieves competitive performance, with an mIoU of 83.02% for semantic segmentation and an accuracy of 95.26% for height estimation.
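The dual-stream idea of a shared network followed by two task branches is illustrated by the minimal sketch below: one shared encoder, one segmentation head, and one height-regression head. The backbone, channel counts, and class name are placeholders, not the MM_MT network itself.

```python
# Sketch of shared-encoder multi-task learning: segmentation + height estimation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskNet(nn.Module):
    def __init__(self, in_ch: int = 4, num_classes: int = 6):
        super().__init__()
        self.shared = nn.Sequential(                    # shared feature extractor
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(64, num_classes, 1)   # task branch 1: classes
        self.height_head = nn.Conv2d(64, 1, 1)          # task branch 2: height map

    def forward(self, x):
        f = self.shared(x)
        size = x.shape[2:]
        seg = F.interpolate(self.seg_head(f), size=size, mode="bilinear",
                            align_corners=False)
        height = F.interpolate(self.height_head(f), size=size, mode="bilinear",
                               align_corners=False)
        return seg, height

x = torch.randn(1, 4, 256, 256)   # e.g., RGB plus one extra modality channel
seg, height = MultiTaskNet()(x)
print(seg.shape, height.shape)
```

In training, the two branches would typically be supervised jointly, e.g., a cross-entropy loss on the segmentation output plus an L1 loss on the height map.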
Medical image segmentation plays an important role in clinical diagnosis, quantitative analysis, and the treatment process. Since 2015, U-Net-based approaches have been widely used for medical image segmentation. The purpose of the U-Net expansive path is to map low-resolution encoder feature maps to full-input-resolution feature maps. However, the consecutive deconvolution and convolutional operations in the expansive path lead to the loss of some high-level information, and more high-level information can make the segmentation more accurate. In this paper, we propose MU-Net, a novel multi-path upsampling convolution network that retains more high-level information. The MU-Net mainly consists of three parts: a contracting path, skip connections, and multi-expansive paths. The proposed MU-Net architecture is evaluated on three different medical imaging datasets. Our experiments show that MU-Net improves the segmentation performance of U-Net-based methods on different datasets. At the same time, the computational efficiency is significantly improved by reducing the number of parameters by more than half.
In the study of composite material performance, X-ray computed tomography (XCT) scanning has always been one of the important measures for detecting internal structures. CT image segmentation technology effectively improves the accuracy of the subsequent material feature extraction process, which is of great significance to the study of material performance. This study focuses on the low segmentation accuracy caused by fiber cross-section adhesion in composite CT images. In the core layer area, area validity is evaluated by a morphological indicator, and an iterative segmentation strategy based on the watershed algorithm is proposed. In the transition layer area, a U-Net neural network model trained using artificial labels is applied to predict the segmentation result. On this basis, a CT image segmentation method for fiber composite materials based on the improved watershed algorithm and the U-Net model is proposed. Experiments verify that the method has good adaptability and effectiveness for the CT image segmentation problem of composite materials, and the segmentation accuracy is significantly improved in comparison with the original method, which ensures the accuracy and robustness of the subsequent fiber feature extraction process.
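As background for the watershed-based step, the sketch below shows the standard marker-based watershed recipe for splitting touching round cross-sections: distance transform, local maxima as markers, then watershed. The paper's iterative strategy and morphological validity checks are not reproduced; the parameters below are assumptions.

```python
# Sketch of marker-based watershed splitting of touching fiber cross-sections.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_fibers(binary_mask: np.ndarray) -> np.ndarray:
    """binary_mask: boolean image of fiber cross-sections. Returns a label image."""
    distance = ndi.distance_transform_edt(binary_mask)
    peaks = peak_local_max(distance, min_distance=5, labels=binary_mask.astype(int))
    markers = np.zeros_like(distance, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary_mask)

# Toy example: two overlapping discs that the watershed should separate.
yy, xx = np.mgrid[0:100, 0:100]
mask = ((yy - 50) ** 2 + (xx - 40) ** 2 < 400) | ((yy - 50) ** 2 + (xx - 65) ** 2 < 400)
labels = split_touching_fibers(mask)
print(labels.max())  # number of separated regions (typically 2 here)
```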
In view of the problems of multi-scale changes of segmentation targets, noise interference, rough segmentation results, and slow training faced by medical image semantic segmentation, a multi-scale residual aggregation U-shaped attention network, MAAUNet (MultiRes aggregation attention UNet), is proposed based on MultiResUNet. Firstly, starting from the original same-level feature aggregation, aggregate connections are introduced: the skip connection is redesigned to aggregate features of different semantic scales at the decoder subnet, which further alleviates the semantic gaps that may exist between skip connections. Secondly, after the multi-scale convolution module, a convolutional block attention module is added to focus on and integrate features in the two attention directions of channel and space, adaptively optimizing the intermediate feature map. Finally, the original convolution block is improved: the convolution channels are expanded with a series convolution structure to complement each other and extract richer spatial features, residual connections are retained, and the convolution block is turned into a multi-channel convolution block, enabling the model to extract multi-scale spatial features. The experimental results show that MAAUNet is highly competitive on challenging datasets and shows good segmentation performance and stability in dealing with multi-scale input and noise interference.
Currently, deep convolutional neural networks have made great progress in the field of semantic segmentation. Because of the fixed convolution kernel geometry, standard convolutional neural networks have a limited ability to model geometric transformations. Therefore, a deformable convolution is introduced to enhance the adaptability of convolutional networks to spatial transformation. In addition, deep convolutional neural networks cannot adequately segment local objects at the output layer because of the pooling layers in the network architecture. To overcome this shortcoming, the rough segmentation predictions of the network output layer are processed by fully connected conditional random fields to improve the image segmentation. The proposed method can easily be trained end-to-end using standard backpropagation algorithms. Finally, the proposed method is tested on the ISPRS dataset. The results show that the proposed method can effectively overcome the influence of the complex structure of the segmentation objects and obtains state-of-the-art accuracy on the ISPRS Vaihingen 2D semantic labeling dataset.
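The deformable-convolution mechanism referred to above can be sketched with torchvision's `DeformConv2d`: a plain convolution predicts per-location sampling offsets, which the deformable convolution then consumes. This illustrates only the mechanism, under assumed channel counts, and not the full network of the abstract.

```python
# Sketch of a deformable convolution block using torchvision.ops.DeformConv2d.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformableBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # 2 offsets (dx, dy) for each of the k*k kernel sampling points.
        self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_conv(x)        # learned sampling-grid deformation
        return self.deform_conv(x, offsets)

x = torch.randn(1, 64, 64, 64)
print(DeformableBlock(64, 128)(x).shape)  # torch.Size([1, 128, 64, 64])
```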
Recently, semantic segmentation has been widely applied to image processing, scene understanding, and many other areas. Especially in deep learning-based semantic segmentation, U-Net, with its convolutional encoder-decoder architecture, is a representative model proposed for image segmentation in the biomedical field. It uses a max pooling operation to reduce the image size and to be robust to noise. However, while reducing the complexity of the model, max pooling has the disadvantage of omitting some information about the image. So, this paper uses two diagonal elements of the down-sampling operation instead. We think that the down-sampling feature maps intrinsically carry more information than max pooling feature maps, because they respect the Nyquist theorem and latent information can be extracted from them. In addition, this paper uses the two other diagonal elements for the skip connection. In decoding, we use subpixel convolution rather than transposed convolution to efficiently decode the encoded feature maps. Combining all these ideas, this paper proposes a new encoder-decoder model called the Down-Sampling and Subpixel Convolution U-Net (DSSC-UNet). To prove the better performance of the proposed model, this paper measures the performance of U-Net and DSSC-UNet on the Cityscapes dataset. As a result, DSSC-UNet achieves 89.6% Mean Intersection over Union (Mean-IoU) and U-Net achieves 85.6% Mean-IoU, confirming that DSSC-UNet achieves the better performance.
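The subpixel-convolution decoder operation mentioned above corresponds to PyTorch's `nn.PixelShuffle`: a convolution produces `scale**2` times the target channels, which are then rearranged into space. The channel counts below are illustrative only.

```python
# Sketch of subpixel-convolution upsampling, the alternative to transposed
# convolution that the abstract adopts in the decoder.
import torch
import torch.nn as nn

class SubpixelUp(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, scale: int = 2):
        super().__init__()
        # Produce scale**2 * out_ch channels, then rearrange them into space.
        self.conv = nn.Conv2d(in_ch, out_ch * scale * scale, kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)

    def forward(self, x):
        return self.shuffle(self.conv(x))

x = torch.randn(1, 128, 32, 64)
print(SubpixelUp(128, 64)(x).shape)  # torch.Size([1, 64, 64, 128])
```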
The semantic segmentation of a bird's-eye view (BEV) is crucial for environment perception in autonomous driving, which includes the static elements of the scene, such as drivable areas, and dynamic elements such as cars. This paper proposes an end-to-end deep learning architecture based on 3D convolution to predict the semantic segmentation of a BEV, as well as voxel semantic segmentation, from monocular images. The voxelization of scenes and the feature transformation from perspective space to camera space are the key approaches of this model to boost prediction accuracy. The effectiveness of the proposed method was demonstrated by training and evaluating the model on the NuScenes dataset. A comparison with other state-of-the-art methods showed that the proposed approach outperformed them in the semantic segmentation of a BEV. It also implements voxel semantic segmentation, which cannot be achieved by the state-of-the-art methods.
Semantic segmentation is a pixel-level classification task, and contextual information has an important impact on segmentation performance. In order to capture richer contextual information, we adopt ResNet as the backbone network and design an encoder-decoder architecture based on a multidimensional attention (MDA) module and a multiscale upsampling (MSU) module. The MDA module calculates attention matrices along three dimensions to capture the dependency of each position and adaptively captures the image features. The MSU module adopts parallel branches to capture the multiscale features of the images, and multiscale feature aggregation enhances contextual information. A series of experiments demonstrates the validity of the model on the Cityscapes and CamVid datasets.