Journal Articles
4,120 articles found
1. A 3D semantic segmentation network for accurate neuronal soma segmentation
Authors: Li Ma, Qi Zhong, Yezi Wang, Xiaoquan Yang, Qian Du. Journal of Innovative Optical Health Sciences, 2025, No. 1, pp. 67-83.
Neuronal soma segmentation plays a crucial role in neuroscience applications. However, fine structures such as boundaries, small-volume neuronal somata, and fibers are commonly present in cell images, which poses a challenge for accurate segmentation. In this paper, we propose a 3D semantic segmentation network for neuronal soma segmentation to address this issue. Using an encoding-decoding structure, we introduce a Multi-Scale feature extraction and Adaptive Weighting fusion module (MSAW) after each encoding block. The MSAW module not only emphasizes fine structures via an upsampling strategy, but also provides pixel-wise weights to measure the importance of the multi-scale features. Additionally, dynamic convolution is employed instead of normal convolution to better adapt the network to input data with different distributions. The proposed MSAW-based semantic segmentation network (MSAW-Net) was evaluated on three neuronal soma images from mouse brain and one from macaque brain, demonstrating the efficiency of the proposed method. It achieved an F1 score of 91.8% on the Fezf2-2A-CreER dataset, 97.1% on the LSL-H2B-GFP dataset, 82.8% on the Thy1-EGFP-Mline dataset, and 86.9% on the macaque dataset, improving over the 3D U-Net model by 3.1%, 3.3%, 3.9%, and 2.3%, respectively.
Keywords: Neuronal soma segmentation; semantic segmentation network; multi-scale feature extraction; adaptive weighting fusion
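For readers who want a concrete picture of pixel-wise adaptive weighting over multi-scale features, the following PyTorch-style sketch fuses several pooled-and-upsampled views of a 3D feature map with per-voxel softmax weights. It is a generic illustration under assumed shapes and scale factors, not the authors' MSAW module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveMultiScaleFusion(nn.Module):
    """Fuse multi-scale views of a 3D feature map with per-voxel softmax weights."""
    def __init__(self, channels: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # A 1x1x1 conv predicts one weight map per scale branch.
        self.weight_head = nn.Conv3d(channels * len(scales), len(scales), kernel_size=1)

    def forward(self, x):
        branches = []
        for s in self.scales:
            if s == 1:
                branches.append(x)
            else:
                pooled = F.avg_pool3d(x, kernel_size=s)
                branches.append(F.interpolate(pooled, size=x.shape[2:],
                                              mode="trilinear", align_corners=False))
        weights = torch.softmax(self.weight_head(torch.cat(branches, dim=1)), dim=1)
        fused = torch.zeros_like(x)
        for i, branch in enumerate(branches):
            fused = fused + weights[:, i:i + 1] * branch  # voxel-wise weighted sum
        return fused
```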
2. M2ANet: Multi-branch and multi-scale attention network for medical image segmentation
Authors: Wei Xue, Chuanghui Chen, Xuan Qi, Jian Qin, Zhen Tang, Yongsheng He. Chinese Physics B, 2025, No. 8, pp. 547-559.
Convolutional neural network (CNN)-based technologies have been widely used in medical image segmentation because of their strong representation and generalization abilities. However, due to their limited ability to capture global information from images, CNNs can easily lose contours and textures in segmentation results. The transformer model, by contrast, effectively captures long-range dependencies in an image, and combining the CNN and the transformer can extract both local details and global contextual features. Motivated by this, we propose a multi-branch and multi-scale attention network (M2ANet) for medical image segmentation, whose architecture consists of three components. In the first component, we construct an adaptive multi-branch patch module for parallel extraction of image features to reduce the information loss caused by downsampling. In the second component, we apply a residual block to the well-known convolutional block attention module to enhance the network's ability to recognize important image features and alleviate gradient vanishing. In the third component, we design a multi-scale feature fusion module, in which adaptive average pooling and position encoding enhance contextual features, and multi-head attention is then introduced to further enrich the feature representation. Finally, we validate the effectiveness and feasibility of the proposed M2ANet through comparative experiments on four benchmark medical image segmentation datasets, particularly with respect to preserving contours and textures.
Keywords: medical image segmentation; convolutional neural network; multi-branch attention; multi-scale feature fusion
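The second component described above, a residual wrapper around a convolutional block attention module, can be illustrated with a compact, generic PyTorch-style sketch; the layer sizes, reduction ratio, and residual placement are assumptions, not the M2ANet definition.

```python
import torch
import torch.nn as nn

class ResidualCBAM(nn.Module):
    """Channel + spatial attention in a residual wrapper (generic illustration)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        # Channel attention from global average and max descriptors.
        avg = self.channel_mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.channel_mlp(x.amax(dim=(2, 3), keepdim=True))
        x_c = x * torch.sigmoid(avg + mx)
        # Spatial attention from channel-wise average and max maps.
        spatial = torch.cat([x_c.mean(dim=1, keepdim=True),
                             x_c.amax(dim=1, keepdim=True)], dim=1)
        x_s = x_c * torch.sigmoid(self.spatial_conv(spatial))
        return x + x_s  # residual connection to ease gradient flow
```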
3. KD-SegNet: Efficient Semantic Segmentation Network with Knowledge Distillation Based on Monocular Camera
Authors: Thai-Viet Dang, Nhu-Nghia Bui, Phan Xuan Tan. Computers, Materials & Continua, 2025, No. 2, pp. 2001-2026.
Due to the need for lightweight and efficient network models, deploying semantic segmentation models on mobile robots (MRs) is a formidable task. The fundamental limitations lie in training performance, the ability to effectively exploit the dataset, and the ability to adapt to complex environments when the model is deployed. By utilizing knowledge distillation techniques, this article strives to overcome these challenges by inheriting the advantages of both the teacher model and the student model. More precisely, the characteristics of a ResNet152-PSP-Net model are used to train a ResNet18-PSP-Net model. Pyramid pooling blocks are utilized to decode multi-scale feature maps, producing a complete semantic map inference. The student model not only preserves the strong segmentation performance of the teacher model but also improves the inference speed of the predictions. The proposed method exhibits a clear advantage over conventional convolutional neural network (CNN) models, as evident from the conducted experiments. Furthermore, the proposed model also shows remarkable improvement in processing speed compared with lightweight models such as MobileNetV2 and EfficientNet in terms of latency and throughput. The proposed KD-SegNet model obtains an accuracy of 96.3% and a mIoU (mean Intersection over Union) of 77%, outperforming existing models by more than 15% on the same training dataset. The suggested method has an average training time of only about 0.51 times that of models in the same field, while still achieving comparable segmentation performance. The resulting semantic segmentation frames are collected to form the motion trajectory for the system in its environment. Overall, this architecture shows great promise for the development of knowledge-based systems for MR navigation.
Keywords: Mobile robot navigation; semantic segmentation; knowledge distillation; pyramid scene parsing; fully convolutional networks
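The teacher-student setup described above can be illustrated with a standard response-based distillation loss on per-pixel logits; the temperature, weighting, and exact loss used by KD-SegNet are assumptions here, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """student/teacher_logits: (B, C, H, W); labels: (B, H, W) int64 class indices."""
    # Hard-label segmentation loss against the ground truth.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label loss: KL divergence between temperature-softened class distributions.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    return alpha * kd + (1.0 - alpha) * ce
```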
4. 3D medical image segmentation using the serial-parallel convolutional neural network and transformer based on cross-window self-attention
Authors: Bin Yu, Quan Zhou, Li Yuan, Huageng Liang, Pavel Shcherbakov, Xuming Zhang. CAAI Transactions on Intelligence Technology, 2025, No. 2, pp. 337-348.
Convolutional neural networks (CNNs) with the encoder-decoder structure are popular in medical image segmentation due to their excellent local feature extraction ability, but they face limitations in capturing global features. The transformer can extract global information well, but adapting it to small medical datasets is challenging and its computational complexity can be high. In this work, a serial and parallel network is proposed for accurate 3D medical image segmentation by combining CNN and transformer and promoting feature interactions across various semantic levels. The core components of the proposed method are the cross-window self-attention based transformer (CWST) and multi-scale local enhanced (MLE) modules. The CWST module enhances global context understanding by partitioning 3D images into non-overlapping windows and calculating sparse global attention between windows. The MLE module selectively fuses features by computing voxel attention between different branch features, and uses convolution to strengthen dense local information. Experiments on prostate, atrium, and pancreas MR/CT image datasets consistently demonstrate the advantage of the proposed method over six popular segmentation models in both qualitative evaluation and quantitative indexes such as Dice similarity coefficient, Intersection over Union, 95% Hausdorff distance, and average symmetric surface distance.
Keywords: convolutional neural network; cross-window self-attention; medical image segmentation; transformer
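A small utility sketch of the non-overlapping 3D window partition that window-based attention relies on is shown below; the window size and tensor layout are illustrative, and this is not the CWST module itself.

```python
import torch

def window_partition_3d(x: torch.Tensor, ws: int) -> torch.Tensor:
    """x: (B, D, H, W, C) with D, H, W divisible by ws -> (num_windows*B, ws**3, C)."""
    B, D, H, W, C = x.shape
    # Split each spatial axis into (num_windows, window_size) and regroup.
    x = x.view(B, D // ws, ws, H // ws, ws, W // ws, ws, C)
    x = x.permute(0, 1, 3, 5, 2, 4, 6, 7).contiguous()
    return x.view(-1, ws * ws * ws, C)  # each row group is one attention window
```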
5. MultiJSQ: Direct joint segmentation and quantification of left ventricle with deep multitask-derived regression network
Authors: Xiuquan Du, Zheng Pei, Ying Liu, Xinzhi Cao, Lei Li, Shuo Li. CAAI Transactions on Intelligence Technology, 2025, No. 1, pp. 175-192.
Quantitative analysis of clinical function parameters from MRI images is crucial for diagnosing and assessing cardiovascular disease. However, the manual calculation of these parameters is challenging due to the high variability among patients and the time-consuming nature of the process. In this study, the authors introduce a framework named MultiJSQ, comprising a feature presentation network (FRN) and an indicator prediction network (IEN), designed for simultaneous joint segmentation and quantification. The FRN is tailored for representing global image features, facilitating the direct acquisition of left ventricle (LV) contour images through pixel classification. Additionally, the IEN incorporates specifically designed modules to extract relevant clinical indices. The authors' method considers the interdependence of different tasks, demonstrating the validity of these relationships and yielding favourable results. Through extensive experiments on cardiac MR images from 145 patients, MultiJSQ achieves impressive outcomes, with low mean absolute errors of 124 mm², 1.72 mm, and 1.21 mm for areas, dimensions, and regional wall thicknesses, respectively, along with a Dice score of 0.908. The experimental findings underscore the excellent performance of the framework in LV segmentation and quantification, highlighting its promising prospects for clinical application.
Keywords: global image features; joint segmentation and quantification; left ventricle (LV); multitask-derived regression network
6. A U-Shaped Network-Based Grid Tagging Model for Chinese Named Entity Recognition
Authors: Yan Xiang, Xuedong Zhao, Junjun Guo, Zhiliang Shi, Enbang Chen, Xiaobo Zhang. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4149-4167.
Chinese named entity recognition (CNER) has received widespread attention as an important task in Chinese information extraction. Most previous research has focused on flat CNER, overlapped CNER, or discontinuous CNER individually. However, a unified CNER is often needed in real-world scenarios. Recent studies have shown that grid tagging methods based on character-pair relationship classification hold great potential for achieving unified NER. Nevertheless, how to enrich Chinese character-pair grid representations and capture deeper dependencies between character pairs to improve entity recognition performance remains an unresolved challenge. In this study, we enhance the character-pair grid representation by incorporating both local and global information. Significantly, we introduce a new approach that treats the character-pair grid representation matrix as a specialized image, converting the classification of character-pair relationships into a pixel-level semantic segmentation task. We devise a U-shaped network to extract multi-scale and deeper semantic information from the grid image, allowing a more comprehensive understanding of associative features between character pairs. This leads to improved accuracy in predicting their relationships and ultimately enhances entity recognition performance. We conducted experiments on two public CNER datasets in the biomedical domain, CMeEE-V2 and Diakg. The results demonstrate the effectiveness of our approach, which achieves F1-score improvements of 7.29 and 1.64 percentage points over the current state-of-the-art (SOTA) models, respectively.
Keywords: Chinese named entity recognition; character-pair relation classification; grid tagging; U-shaped segmentation network
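One common way to realize the "character-pair grid as an image" idea is to tile per-character embeddings into a pair grid that a segmentation-style network can then classify pixel-wise; the concatenation scheme in this sketch is an assumption for illustration, not the paper's representation.

```python
import torch

def build_pair_grid(char_emb: torch.Tensor) -> torch.Tensor:
    """char_emb: (B, L, D) -> grid (B, 2*D, L, L) of concatenated character-pair features."""
    B, L, D = char_emb.shape
    rows = char_emb.unsqueeze(2).expand(B, L, L, D)  # character i placed at grid cell (i, j)
    cols = char_emb.unsqueeze(1).expand(B, L, L, D)  # character j placed at grid cell (i, j)
    grid = torch.cat([rows, cols], dim=-1)           # (B, L, L, 2D) pair features
    return grid.permute(0, 3, 1, 2)                  # channels-first "image" for a U-shaped net
```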
7. Improved Convolutional Neural Network for Traffic Scene Segmentation (cited: 1)
Authors: Fuliang Xu, Yong Luo, Chuanlong Sun, Hong Zhao. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 3, pp. 2691-2708.
In actual traffic scenarios, precise recognition of traffic participants such as vehicles and pedestrians is crucial for intelligent transportation. This study proposes an improved algorithm built on Mask-RCNN to enhance the ability of autonomous driving systems to recognize traffic participants. The algorithm incorporates long and short-term memory networks and the fused attention module (GSAM, GCT, and Spatial Attention Module) to enhance its capability to process both global and local information. Additionally, to increase the network's initial operation stability, the original activation function was replaced with the Gaussian error linear unit. Experiments were conducted using the publicly available Cityscapes dataset. Comparing the test results, the revised algorithm outperformed the original algorithm in terms of AP_50, AP_75, and other metrics by 8.7% and 9.6% for target detection and by 12.5% and 13.3% for segmentation.
Keywords: Instance segmentation; deep learning; convolutional neural network; attention mechanism
8. Visual Perception and Adaptive Scene Analysis with Autonomous Panoptic Segmentation
Authors: Darthy Rabecka V, Britto Pari J, Man-Fai Leung. Computers, Materials & Continua, 2025, No. 10, pp. 827-853.
Techniques in deep learning have significantly boosted the accuracy and productivity of computer vision segmentation tasks. This article offers an architecture for semantic, instance, and panoptic segmentation using EfficientNet-B7 and Bidirectional Feature Pyramid Networks (Bi-FPN). When implemented in place of the EfficientNet-B5 backbone, EfficientNet-B7 strengthens the model's feature extraction capabilities and is far more appropriate for real-world applications. By ensuring superior multi-scale feature fusion, Bi-FPN integration enhances the segmentation of complex objects across various urban environments. The proposed design is examined on rigorous datasets, encompassing Cityscapes, Common Objects in Context (COCO), KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute), and the Indian Driving Dataset (IDD), which replicate numerous real-world driving conditions. During extensive training, validation, and testing, the model shows major gains in segmentation accuracy and surpasses state-of-the-art performance in semantic, instance, and panoptic segmentation tasks. Outperforming present methods, the recommended approach yields noteworthy gains in Panoptic Quality: +0.4% on Cityscapes, +0.2% on COCO, +1.7% on KITTI, and +0.4% on IDD. These results show its efficiency across various driving circumstances and datasets. This study emphasizes the potential of EfficientNet-B7 and Bi-FPN to provide dependable, high-precision segmentation in computer vision applications, primarily autonomous driving. The results suggest that this framework efficiently tackles the constraints of practical situations while delivering a robust solution for high-performance segmentation tasks.
Keywords: Panoptic segmentation; multi-scale features; EfficientNet-B7; feature pyramid network
9. Segmenting identified fracture families from 3D fracture networks in Montney rock using a deep learning-based method
Authors: Mei Li, Giovanni Grasselli. Journal of Rock Mechanics and Geotechnical Engineering, 2025, No. 10, pp. 6120-6129.
Fractures are critical to subsurface activities such as oil and gas extraction, geothermal energy production, and carbon storage. Hydraulic fracturing, a technique that enhances fluid production, creates complex fracture networks within rock formations containing natural discontinuities. Accurately distinguishing between hydraulically induced fractures and pre-existing discontinuities is essential for understanding hydraulic fracture mechanisms. However, this remains challenging due to the interconnected nature of fractures in three-dimensional (3D) space. Manual segmentation, while adaptive, is both labor-intensive and subjective, making it impractical for large-scale 3D datasets. This study introduces a deep learning-based progressive cross-sectional segmentation method to automate the classification of 3D fracture volumes. The proposed method was applied to a 3D hydraulic fracture network in a Montney cube sample, successfully segmenting natural fractures, parted bedding planes, and hydraulic fractures with minimal user intervention. The automated approach achieves a 99.6% reduction in manual image processing workload while maintaining high segmentation accuracy, with test accuracy exceeding 98% and an F1-score over 84%. The approach generalizes well to Brazilian disc samples with different fracture patterns, achieving consistently high accuracy in distinguishing between bedding and non-bedding fractures. This automated fracture segmentation method offers an effective tool for enhanced quantitative characterization of fracture networks, contributing to a deeper understanding of hydraulic fracturing processes.
Keywords: True-triaxial hydraulic fracturing; shale fracture network; serial section image; machine learning; image segmentation
10. Lymph node disease in 2-deoxy-2-fluorodeoxyglucose positron emission tomography/computed tomography imaging: Advances in artificial intelligence-driven automatic segmentation and precise diagnosis
Authors: Shao-Chun Li, Xin Fan, Jian He. World Journal of Clinical Oncology, 2025, No. 11, pp. 90-102.
Imaging evaluation of lymph node metastasis and infiltration faces problems such as the low efficiency and insufficient consistency of manual delineation. Deep learning based on convolutional neural networks has greatly improved the effectiveness of radiomics in analyzing lymph node pathological characteristics and monitoring treatment efficacy through automatic lymph node detection, precise segmentation, and three-dimensional reconstruction algorithms. This review focuses on automatic lymph node segmentation models, treatment response prediction algorithms, and benign-malignant differential diagnosis systems for multimodal imaging, in order to provide a basis for further research on artificial intelligence-assisted management of lymph node disease and clinical decision-making, and to serve as a reference for building a system for accurate diagnosis, personalized treatment, and prognostic evaluation of lymph node-related diseases.
Keywords: Lymph node metastasis; lymphoma; deep learning; convolutional neural network; medical imaging analysis; automatic segmentation; radiomics
11. Multi-Level Parallel Network for Brain Tumor Segmentation
Authors: Juhong Tie, Hui Peng. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 4, pp. 741-757.
Accurate automatic segmentation of gliomas into their sub-regions, including peritumoral edema, necrotic core, and enhancing and non-enhancing tumor core, from 3D multimodal MRI images is challenging because of their highly heterogeneous appearance and shape. Deep convolutional neural networks (CNNs) have recently improved glioma segmentation performance. However, extensive down-sampling such as pooling or strided convolution in CNNs significantly decreases the initial image resolution, resulting in the loss of accurate spatial and object-part information, especially for small sub-region tumors, which affects segmentation performance. Hence, this paper proposes a novel multi-level parallel network comprising three parallel subnetworks at different levels to fully use low-level, mid-level, and high-level information and improve brain tumor segmentation. We also introduce the Combo loss function to address input class imbalance and the imbalance between false positives and false negatives in deep learning. The proposed method is trained and validated on the BraTS 2020 training and validation dataset. On the validation dataset, our method achieved mean Dice scores of 0.907, 0.830, and 0.787 for the whole tumor, tumor core, and enhancing tumor core, respectively. Compared with state-of-the-art methods, the multi-level parallel network achieves competitive results on the validation dataset.
Keywords: Convolutional neural network; brain tumor segmentation; parallel network
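A Dice-plus-cross-entropy "combo"-style loss for imbalanced segmentation can be sketched as follows; the exact weighting and formulation of the Combo loss used in the paper may differ, so treat this as a generic illustration.

```python
import torch
import torch.nn.functional as F

def combo_loss(logits, target, num_classes, dice_weight=0.5, eps=1e-6):
    """logits: (B, C, D, H, W); target: (B, D, H, W) with integer class indices."""
    ce = F.cross_entropy(logits, target)
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(target, num_classes).permute(0, 4, 1, 2, 3).float()
    dims = (0, 2, 3, 4)
    intersection = (probs * one_hot).sum(dims)
    cardinality = probs.sum(dims) + one_hot.sum(dims)
    dice = (2.0 * intersection + eps) / (cardinality + eps)   # soft Dice per class
    dice_loss = 1.0 - dice.mean()
    return dice_weight * dice_loss + (1.0 - dice_weight) * ce
```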
12. DAUNet: Detail-Aware U-Shaped Network for 2D Human Pose Estimation
Authors: Xi Li, Yuxin Li, Zhenhua Xiao, Zhenghua Huang, Lianying Zou. Computers, Materials & Continua (SCIE, EI), 2024, No. 11, pp. 3325-3349.
Human pose estimation is a critical research area in the field of computer vision, playing a significant role in applications such as human-computer interaction, behavior analysis, and action recognition. In this paper, we propose a U-shaped keypoint detection network (DAUNet) based on an improved ResNet subsampling structure and a spatial grouping mechanism. The network addresses key challenges in traditional methods, such as information loss, large network redundancy, and insufficient sensitivity to low-resolution features. DAUNet is composed of three main components. First, we introduce an improved BottleNeck block that employs partial convolution and strip pooling to reduce computational load and mitigate feature loss. Second, after upsampling, the network eliminates redundant features, improving overall efficiency. Finally, a lightweight spatial grouping attention mechanism is applied to enhance low-resolution semantic features within the feature map, allowing better restoration of the original image size and higher accuracy. Experimental results demonstrate that DAUNet achieves superior accuracy compared to most existing keypoint detection models, with a mean PCKh@0.5 score of 91.6% on the MPII dataset and an AP of 76.1% on the COCO dataset. Moreover, real-world experiments further validate the robustness and generalizability of DAUNet for detecting human bodies in unknown environments, highlighting its potential for broader applications.
Keywords: Human pose estimation; keypoint detection; U-shaped network architecture; spatial grouping mechanism
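Strip pooling, one ingredient of the improved BottleNeck block mentioned above, pools along full rows and columns and broadcasts the result back as a spatial gate; the channel handling below is simplified and is not the DAUNet definition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StripPooling(nn.Module):
    """Row/column strip pooling used as a lightweight spatial gate (generic sketch)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv_h = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1))
        self.conv_w = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0))
        self.fuse = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        B, C, H, W = x.shape
        row = F.adaptive_avg_pool2d(x, (H, 1))          # (B, C, H, 1): pool across width
        col = F.adaptive_avg_pool2d(x, (1, W))          # (B, C, 1, W): pool across height
        row = self.conv_w(row).expand(-1, -1, H, W)     # refine and broadcast back
        col = self.conv_h(col).expand(-1, -1, H, W)
        return x * torch.sigmoid(self.fuse(row + col))  # gated output
```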
13. UNet Based on Multi-Object Segmentation and Convolution Neural Network for Object Recognition
Authors: Nouf Abdullah Almujally, Bisma Riaz Chughtai, Naif Al Mudawi, Abdulwahab Alazeb, Asaad Algarni, Hamdan A. Alzahrani, Jeongmin Park. Computers, Materials & Continua (SCIE, EI), 2024, No. 7, pp. 1563-1580.
Recent advancements in vision technology have had a significant impact on our ability to identify multiple objects and understand complex scenes. Various technologies, such as augmented reality-driven scene integration, robotic navigation, autonomous driving, and guided tour systems, rely heavily on this type of scene comprehension. This paper presents a novel segmentation approach based on the UNet network model, aimed at recognizing multiple objects within an image. The methodology begins with image acquisition and preprocessing, followed by segmentation using the fine-tuned UNet architecture. Afterward, an annotation tool is used to accurately label the segmented regions. Upon labeling, significant features are extracted from these segmented objects, encompassing KAZE (Accelerated Segmentation and Extraction) features, energy-based edge detection, frequency-based features, and blob characteristics. For the classification stage, a convolution neural network (CNN) is employed. This comprehensive methodology provides a robust framework for accurate and efficient recognition of multiple objects in images. Experimental results on complex object datasets, including MSRC-v2 and PASCAL-VOC12, are reported: the PASCAL-VOC12 dataset achieved an accuracy rate of 95%, while the MSRC-v2 dataset achieved an accuracy of 89%. The evaluation on these diverse datasets highlights a notably impressive level of performance.
Keywords: UNet segmentation; BLOB; Fourier transform; convolution neural network
14. Efficient Object Segmentation and Recognition Using Multi-Layer Perceptron Networks
Authors: Aysha Naseer, Nouf Abdullah Almujally, Saud S. Alotaibi, Abdulwahab Alazeb, Jeongmin Park. Computers, Materials & Continua (SCIE, EI), 2024, No. 1, pp. 1381-1398.
Object segmentation and recognition is an imperative area of computer vision and machine learning that identifies and separates individual objects within an image or video and determines classes or categories based on their features. The proposed system presents a distinctive approach to object segmentation and recognition using Artificial Neural Networks (ANNs). The system takes RGB images as input and uses a k-means clustering-based segmentation technique to fragment the intended parts of the images into different regions and label them based on their characteristics. Then, two distinct kinds of features are obtained from the segmented images to help identify the objects of interest. An Artificial Neural Network (ANN) is then used to recognize the objects based on their features. Experiments were carried out with three standard datasets, MSRC, MS COCO, and Caltech 101, which are extensively used in object recognition research, to measure the productivity of the suggested approach. The findings support the validity of the suggested system, which achieved class recognition accuracies of 89%, 83%, and 90.30% on the MSRC, MS COCO, and Caltech 101 datasets, respectively.
Keywords: K-region fusion; segmentation; recognition; feature extraction; artificial neural network; computer vision
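The k-means clustering step described above can be sketched as a coarse color-based region labeling; the number of clusters and the RGB-only feature space are simplifying assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image_rgb: np.ndarray, k: int = 4) -> np.ndarray:
    """image_rgb: (H, W, 3) uint8 array -> (H, W) integer label map of k regions."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(np.float32)      # one row per pixel
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(h, w)                                # coarse region map
```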
15. DCFNet: An Effective Dual-Branch Cross-Attention Fusion Network for Medical Image Segmentation
Authors: Chengzhang Zhu, Renmao Zhang, Yalong Xiao, Beiji Zou, Xian Chai, Zhangzheng Yang, Rong Hu, Xuanchu Duan. Computer Modeling in Engineering & Sciences (SCIE, EI), 2024, No. 7, pp. 1103-1128.
Automatic segmentation of medical images provides a reliable scientific basis for disease diagnosis and analysis. Notably, most existing methods that combine the strengths of convolutional neural networks (CNNs) and Transformers have made significant progress. However, the current integration of CNN and Transformer technology has limitations in two key aspects. First, most methods either overlook or fail to fully exploit the complementary nature of local and global features. Second, methods that combine CNN and Transformer often disregard the value of integrating multi-scale encoder features from the dual-branch network to enhance the decoding features. To address this, we present a dual-branch cross-attention fusion network (DCFNet), which efficiently combines the power of the Swin Transformer and CNN to generate complementary global and local features. We then design the Feature Cross-Fusion (FCF) module to efficiently fuse local and global features. In the FCF, the Channel-wise Cross-fusion Transformer (CCT) aggregates multi-scale features, and the Feature Fusion Module (FFM) effectively aggregates dual-branch salient feature regions from the spatial perspective. Furthermore, within the decoding phase of the dual-branch network, the proposed Channel Attention Block (CAB) emphasizes the significance of channel features between the up-sampled features and the features generated by the FCF module to enhance decoding details. Experimental results demonstrate that DCFNet achieves improved segmentation accuracy. Compared to other state-of-the-art (SOTA) methods, our segmentation framework is highly competitive. DCFNet's accurate segmentation of medical images can greatly assist medical professionals in making crucial early diagnoses of lesion areas.
Keywords: Convolutional neural networks; Swin Transformer; dual branch; medical image segmentation; feature cross fusion
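The dual-branch cross-fusion idea can be illustrated with a single-head cross-attention sketch in which one branch's tokens query the other's; the dimensions and single-head form are simplifications for illustration, not the FCF or CCT modules.

```python
import torch
import torch.nn as nn

class CrossBranchAttention(nn.Module):
    """One branch attends to the other; a generic single-head cross-attention sketch."""
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, local_tokens, global_tokens):
        """local_tokens, global_tokens: (B, N, dim) flattened feature maps."""
        q = self.q(local_tokens)
        k = self.k(global_tokens)
        v = self.v(global_tokens)
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return local_tokens + attn @ v  # local branch enriched with global context
```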
16. SGT-Net: A Transformer-Based Stratified Graph Convolutional Network for 3D Point Cloud Semantic Segmentation
Authors: Suyi Liu, Jianning Chi, Chengdong Wu, Fang Xu, Xiaosheng Yu. Computers, Materials & Continua (SCIE, EI), 2024, No. 6, pp. 4471-4489.
In recent years, semantic segmentation of 3D point cloud data has attracted much attention. Unlike 2D images, where pixels are distributed regularly in the image domain, 3D point clouds in non-Euclidean space are irregular and inherently sparse. It is therefore very difficult to extract long-range contexts and effectively aggregate local features for semantic segmentation in 3D point cloud space. Most current methods focus either on local feature aggregation or on long-range context dependency, but fail to directly build a global-local feature extractor for point cloud semantic segmentation. In this paper, we propose a Transformer-based stratified graph convolutional network (SGT-Net), which enlarges the effective receptive field and builds direct long-range dependency. Specifically, we first propose a novel dense-sparse sampling strategy that provides dense local vertices and sparse long-distance vertices for the subsequent graph convolutional network (GCN). Secondly, we propose a Transformer-based multi-key self-attention mechanism to further strengthen the weighting of crucial neighboring relationships and enlarge the effective receptive field. In addition, to further improve the efficiency of the network, we propose a similarity measurement module to determine whether the neighborhood near the center point is effective. We demonstrate the validity and superiority of our method on the S3DIS and ShapeNet datasets. Through ablation experiments and segmentation visualization, we verify that the SGT model improves point cloud semantic segmentation performance.
Keywords: 3D point cloud semantic segmentation; long-range contexts; global-local feature; graph convolutional network; dense-sparse sampling strategy
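The dense-sparse sampling strategy (dense local vertices plus sparse long-distance vertices) can be sketched as a simple neighbor-selection rule on pairwise distances; the near/far split, stride, and Euclidean metric are assumptions for illustration, not the paper's exact strategy.

```python
import torch

def dense_sparse_neighbors(points, k_near=16, k_far=8, stride=8):
    """points: (N, 3). Returns indices (N, k_near + k_far) of graph neighbours per point."""
    dist = torch.cdist(points, points)                    # (N, N) pairwise distances
    near_idx = dist.topk(k_near, largest=False).indices   # dense local vertices (incl. self)
    # Sparse long-range vertices: take every `stride`-th point by distance rank.
    order = dist.argsort(dim=1)
    far_idx = order[:, k_near::stride][:, :k_far]
    return torch.cat([near_idx, far_idx], dim=1)
```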
17. MAAUNet: Exploration of U-shaped encoding and decoding structure for semantic segmentation of medical image (cited: 1)
Authors: SHAO Shuo, GE Hongwei. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2022, No. 4, pp. 418-429.
In view of the multi-scale variation of segmentation targets, noise interference, coarse segmentation results, and slow training faced by medical image semantic segmentation, a multi-scale residual aggregation U-shaped attention network, MAAUNet (MultiRes aggregation attention UNet), is proposed based on MultiResUNet. First, aggregate connections are introduced on top of the original same-level feature aggregation, and the skip connections are redesigned to aggregate features of different semantic scales in the decoder subnet, further addressing the semantic gaps that may exist between skip connections. Second, after the multi-scale convolution module, a convolutional block attention module is added to focus on and integrate features along the channel and spatial attention directions, adaptively optimizing the intermediate feature map. Finally, the original convolution block is improved: the convolution channels are expanded with a serial convolution structure so that they complement each other and extract richer spatial features, residual connections are retained, and the convolution block becomes a multi-channel convolution block, enabling the model to extract multi-scale spatial features. Experimental results show that MAAUNet is highly competitive on challenging datasets and shows good segmentation performance and stability in dealing with multi-scale input and noise interference.
Keywords: U-shaped attention network structure of MAAUNet; convolutional neural network; encoding-decoding structure; attention mechanism; medical image semantic segmentation
18. The segmentation of debris-flow fans based on local features and spatial attention mechanism (cited: 2)
Authors: SONG Xin, WANG Baoyun. Journal of Geographical Sciences (SCIE, CSCD), 2024, No. 12, pp. 2534-2550.
In response to issues such as incomplete segmentation and breakpoints encountered when extracting debris-flow fans with semantic segmentation models, this paper proposes a local feature and spatial attention mechanism to achieve precise segmentation of debris-flow fans. Firstly, building on the spatial inhibition mechanism from neuroscience theory, an energy function for the local feature and spatial attention mechanism is formulated. Subsequently, a closed-form solution of the energy function is derived using optimization theory, which ensures the lightweight nature of the proposed attention mechanism. Finally, the performance of this mechanism is compared with other mainstream attention mechanisms embedded in semantic segmentation models through comparative experiments. Experimental results demonstrate that the proposed method outperforms both the original models and mainstream attention mechanisms across various classic models, effectively enhancing the performance of network models in debris-flow fan segmentation tasks.
Keywords: loess geological hazards; semantic segmentation; convolutional neural network; debris-flow fans; attention mechanism
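Energy-function attention with a closed-form solution is reminiscent of parameter-free formulations such as SimAM; the sketch below follows that style, and its lambda value and energy definition are illustrative assumptions rather than the paper's derivation.

```python
import torch
import torch.nn as nn

class EnergyAttention(nn.Module):
    """Parameter-free, energy-based spatial attention (SimAM-style sketch)."""
    def __init__(self, lam: float = 1e-4):
        super().__init__()
        self.lam = lam

    def forward(self, x):
        B, C, H, W = x.shape
        n = H * W - 1
        mu = x.mean(dim=(2, 3), keepdim=True)
        d = (x - mu).pow(2)
        var = d.sum(dim=(2, 3), keepdim=True) / n
        # Closed-form inverse energy: neurons that stand out from their
        # neighbourhood receive higher weights, with no learned parameters.
        e_inv = d / (4 * (var + self.lam)) + 0.5
        return x * torch.sigmoid(e_inv)
```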
19. Image Semantic Segmentation Approach for Studying Human Behavior on Image Data (cited: 1)
Authors: ZHENG Zhan, CHEN Da, HUANG Yanrong. Wuhan University Journal of Natural Sciences (CAS, CSCD), 2024, No. 2, pp. 145-153.
Image semantic segmentation is an essential technique for studying human behavior through image data. This paper proposes an image semantic segmentation method for human behavior research. Firstly, an end-to-end convolutional neural network architecture is proposed, which consists of a depth-separable jump-connected fully convolutional network and a conditional random field network. Jump-connected convolution is then used to classify each pixel in the image, yielding an image semantic segmentation method based on a convolutional neural network; a conditional random field network is then used to improve the segmentation of human behavior images, and linear and nonlinear modeling methods based on conditional random field image semantic segmentation are proposed. Finally, using the proposed segmentation network, input entrepreneurial image data are semantically segmented to obtain the contour features of the person, and the method is also applied to segmenting images in the medical field. The experimental results show that the image semantic segmentation method is effective. It offers a new way to use image data to study human behavior and can be extended to other research areas.
Keywords: human behavior research; image semantic segmentation; hop-connected full convolution network; conditional random field network; deep learning
20. Rethinking the Encoder-decoder Structure in Medical Image Segmentation from Releasing Decoder Structure (cited: 1)
Authors: Jiajia Ni, Wei Mu, An Pan, Zhengming Chen. Journal of Bionic Engineering (SCIE, EI, CSCD), 2024, No. 3, pp. 1511-1521.
Medical image segmentation has witnessed rapid advancements with the emergence of encoder-decoder based methods. In the encoder-decoder structure, the primary goal of the decoding phase is not only to restore feature map resolution but also to mitigate the loss of feature information incurred during the encoding phase. However, this approach gives rise to a challenge: multiple up-sampling operations in the decoder result in the loss of feature information. To address this challenge, we propose a novel network that removes the decoding structure to reduce feature information loss (CBL-Net). In particular, we introduce a Parallel Pooling Module (PPM) to counteract the feature information loss stemming from convolution and pooling operations during the encoding stage. Furthermore, we incorporate a Multiplexed Dilation Convolution (MDC) module to expand the network's receptive field. Although the decoding stage is removed, the feature map resolution still needs to be recovered, so we introduce the Global Feature Recovery (GFR) module, which uses an attention mechanism to recover the image feature map resolution and effectively reduces the loss of feature information. We conduct extensive experimental evaluations on three publicly available medical image segmentation datasets: the DRIVE, CHASEDB, and MoNuSeg datasets. Experimental results show that the proposed network outperforms state-of-the-art methods in medical image segmentation. In addition, it achieves higher efficiency than current encoder-decoder networks by eliminating the decoding component.
Keywords: Medical image segmentation; encoder-decoder architecture; attention mechanisms; releasing decoder architecture; neural network
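A multi-branch dilated-convolution block of the kind the MDC module suggests can be sketched as parallel convolutions with different dilation rates followed by a 1x1 projection; the rates and channel sizes here are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ParallelDilatedConv(nn.Module):
    """Parallel dilated convolutions widen the receptive field without down-sampling."""
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)  # fuse branch outputs

    def forward(self, x):
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))
```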