Journal Articles
Found 3,980 articles
1. Global context-aware multi-scale feature iterative refinement for aviation-road traffic semantic segmentation
Authors: Mengyue ZHANG, Shichun YANG, Xinjie FENG, Yaoguang CAO. Chinese Journal of Aeronautics, 2026, No. 2, pp. 429-441 (13 pages)
Semantic segmentation for mixed scenes of aerial remote sensing and road traffic is one of the key technologies for visual perception of flying cars. State-of-the-Art (SOTA) semantic segmentation methods have made remarkable achievements in both fine-grained segmentation and real-time performance. However, when faced with the huge differences in scale and semantic categories brought about by mixed scenes of aerial remote sensing and road traffic, they still face great challenges, and there is little related research. To address this issue, this paper proposes a semantic segmentation model specifically for mixed datasets of aerial remote sensing and road traffic scenes. First, a novel decoding-recoding multi-scale feature iterative refinement structure is proposed, which uses the re-integration and continuous enhancement of multi-scale information to effectively deal with the huge scale differences between cross-domain scenes, while a fully convolutional structure keeps the model lightweight and real-time. Second, a well-designed cross-window attention mechanism combined with a global information integration decoding block forms enhanced global context perception, which effectively captures long-range dependencies and multi-scale global context information across scenes, thereby achieving fine-grained semantic segmentation. The proposed method is tested on a large-scale mixed dataset of aerial remote sensing and road traffic scenes. The results confirm that it effectively handles the large scale differences of cross-domain scenes; its segmentation accuracy surpasses that of SOTA methods while meeting real-time requirements.
Keywords: Aviation-road traffic; Flying cars; Global context-aware; Multi-scale feature iterative refinement; Semantic segmentation
2. DMHFR: Decoder with Multi-Head Feature Receptors for Tract Image Segmentation
Authors: Jianuo Huang, Bohan Lai, Weiye Qiu, Caixu Xu, Jie He. Computers, Materials & Continua, 2025, No. 3, pp. 4841-4862 (22 pages)
The self-attention mechanism of Transformers, which captures long-range contextual information, has demonstrated significant potential in image segmentation. However, their ability to learn local contextual relationships between pixels requires further improvement. Previous methods face challenges in efficiently managing multi-scale features of different granularities from the encoder backbone, leaving room for improvement in their global representation and feature extraction capabilities. To address these challenges, we propose a novel Decoder with Multi-Head Feature Receptors (DMHFR), which receives multi-scale features from the encoder backbone and organizes them into three feature groups with different granularities: coarse, fine-grained, and full set. These groups are subsequently processed by Multi-Head Feature Receptors (MHFRs) after feature capture and modeling operations. The MHFRs comprise two Three-Head Feature Receptors (THFRs) and one Four-Head Feature Receptor (FHFR). Each group of features is passed through these MHFRs and then fed into axial transformers, which help the model capture long-range dependencies within the features. The three MHFRs produce three distinct feature outputs. The output from the FHFR serves as auxiliary features in the prediction head, and the prediction outputs and their losses are eventually aggregated. Experimental results show that the Transformer using DMHFR outperforms 15 state-of-the-art (SOTA) methods on five public datasets. Specifically, it achieves significant improvements in mean Dice scores over the classic Parallel Reverse Attention Network (PraNet), with gains of 4.1%, 2.2%, 1.4%, 8.9%, and 16.3% on the CVC-ClinicDB, Kvasir-SEG, CVC-T, CVC-ColonDB, and ETIS-LaribPolypDB datasets, respectively.
Keywords: Medical image segmentation; feature exploration; feature aggregation; deep learning; multi-head feature receptor
3. CG-FCLNet: Category-Guided Feature Collaborative Learning Network for Semantic Segmentation of Remote Sensing Images
Authors: Min Yao, Guangjie Hu, Yaozu Zhang. Computers, Materials & Continua, 2025, No. 5, pp. 2751-2771 (21 pages)
Semantic segmentation of remote sensing images is a critical research area in the field of remote sensing. Despite the success of Convolutional Neural Networks (CNNs), they often fail to capture inter-layer feature relationships and fully leverage contextual information, leading to the loss of important details. Additionally, due to significant intra-class variation and small inter-class differences in remote sensing images, CNNs may experience class confusion. To address these issues, we propose a novel Category-Guided Feature Collaborative Learning Network (CG-FCLNet), which enables fine-grained feature extraction and adaptive fusion. Specifically, we design a Feature Collaborative Learning Module (FCLM) to facilitate the tight interaction of multi-scale features. We also introduce a Scale-Aware Fusion Module (SAFM), which iteratively fuses features from different layers using a spatial attention mechanism, enabling deeper feature fusion. Furthermore, we design a Category-Guided Module (CGM) to extract category-aware information that guides feature fusion, ensuring that the fused features more accurately reflect the semantic information of each category, thereby improving detailed segmentation. The experimental results show that CG-FCLNet achieves a Mean Intersection over Union (mIoU) of 83.46%, an mF1 of 90.87%, and an Overall Accuracy (OA) of 91.34% on the Vaihingen dataset. On the Potsdam dataset, it achieves an mIoU of 86.54%, an mF1 of 92.65%, and an OA of 91.29%. These results highlight the superior performance of CG-FCLNet compared to existing state-of-the-art methods.
Keywords: Semantic segmentation; remote sensing; feature context interaction; attention module; category-guided module
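Several entries in this list report mean Intersection over Union (mIoU) and Overall Accuracy (OA), as in the CG-FCLNet abstract above. For reference, a minimal NumPy sketch of how these metrics are conventionally computed from predicted and ground-truth label maps (a generic illustration, not code from any of the papers listed):

```python
import numpy as np

def confusion_matrix(pred, target, n_classes):
    """Per-pixel confusion matrix: rows = ground truth, cols = prediction."""
    idx = target.ravel() * n_classes + pred.ravel()
    return np.bincount(idx, minlength=n_classes * n_classes).reshape(n_classes, n_classes)

def miou_and_oa(pred, target, n_classes):
    """Mean Intersection over Union and Overall Accuracy from label maps."""
    cm = confusion_matrix(pred, target, n_classes)
    inter = np.diag(cm).astype(float)
    union = cm.sum(axis=0) + cm.sum(axis=1) - inter
    iou = inter / np.maximum(union, 1)  # guard against empty classes
    return iou.mean(), inter.sum() / cm.sum()
```

The mF1 reported alongside these metrics is computed analogously, from per-class precision and recall derived from the same confusion matrix.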
4. RFLE-Net: Refined Feature Extraction and Low-Loss Feature Fusion Method in Semantic Segmentation of Medical Images
Authors: Fan Zhang, Zihao Zhang, Huifang Hou, Yale Yang, Kangzhan Xie, Chao Fan, Xiaozhen Ren, Quan Pan. Journal of Bionic Engineering, 2025, No. 3, pp. 1557-1572 (16 pages)
The application of transformer networks and feature fusion models in medical image segmentation has attracted considerable attention within the academic community. Nevertheless, two main obstacles persist: (1) the restrictions of the Transformer network in dealing with locally detailed features, and (2) the considerable loss of feature information in current feature fusion modules. To solve these issues, this study first presents a refined feature extraction approach, employing a double-branch feature extraction network to capture complex multi-scale local and global information from images. Subsequently, we propose a low-loss feature fusion method, the Multi-branch Feature Fusion Enhancement Module (MFFEM), which realizes effective feature fusion with minimal loss. Simultaneously, a Cross-Layer Cross-Attention fusion module (CLCA) is adopted to further achieve adequate feature fusion by enhancing the interaction between encoders and decoders of various scales. Finally, the feasibility of our method is verified on the Synapse and ACDC datasets, demonstrating its competitiveness. The average DSC (%) is 83.62 and 91.99, respectively, and the average HD95 (mm) is reduced to 19.55 and 1.15, respectively.
Keywords: Multi-organ medical image segmentation; fine-grained dual-branch feature extractor; low-loss feature fusion module
5. Effective Feature Analysis for Color Image Segmentation (cited by 2)
Authors: 黎宁, 毛四新, 李有福. Transactions of Nanjing University of Aeronautics and Astronautics (EI), 2001, No. 2, pp. 206-212 (7 pages)
An approach for color image segmentation is proposed based on the contributions of color features to segmentation rather than the choice of a particular color space. The determination of effective color features depends on the analysis of various color features from each tested color image via the designed feature encoding. Different from previous methods, a self-organizing feature map (SOFM) is used for constructing the feature encoding, so that the encoding can self-organize the effective features for different color images. Fuzzy clustering is applied for the final segmentation once the well-suited color features and the initial parameters are available. The proposed method has been applied to segmenting different types of color images, and the experimental results show that it outperforms the classical clustering method. The study shows that the feature encoding approach offers great promise in automating and optimizing the segmentation of color images.
Keywords: image segmentation; color image; neural networks; fuzzy clustering; feature encoding
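The entry above pairs a SOFM-based feature encoding with fuzzy clustering for the final segmentation. As a generic illustration of the fuzzy clustering step, here is a minimal fuzzy c-means implementation in NumPy, where each row of `X` would be a pixel's color feature vector (a standard textbook formulation, not the paper's code; the SOFM stage is omitted):

```python
import numpy as np

def fuzzy_cmeans(X, c, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns (cluster centers, membership matrix U)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # each row of U sums to 1
    p = 2.0 / (m - 1.0)
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # distances from every point to every center (small epsilon avoids /0)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ij = d_ij^(-p) / sum_k d_ik^(-p)
        U = (d ** -p) / np.sum(d ** -p, axis=1, keepdims=True)
    return centers, U
```

A hard segmentation is then obtained by assigning each pixel to the cluster with the highest membership (`U.argmax(axis=1)`).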
6. Research on Camouflage Target Detection Method Based on Edge Guidance and Multi-Scale Feature Fusion
Authors: Tianze Yu, Jianxun Zhang, Hongji Chen. Computers, Materials & Continua, 2026, No. 4, pp. 1676-1697 (22 pages)
Camouflaged Object Detection (COD) aims to identify objects that share highly similar patterns, such as texture, intensity, and color, with their surrounding environment. Due to their intrinsic resemblance to the background, camouflaged objects often exhibit vague boundaries and varying scales, making it challenging to accurately locate targets and delineate their indistinct edges. To address this, we propose a novel camouflaged object detection network called the Edge-Guided and Multi-scale Fusion Network (EGMFNet), which leverages edge-guided multi-scale integration for enhanced performance. The model incorporates two innovative components: a Multi-scale Fusion Module (MSFM) and an Edge-Guided Attention Module (EGA). These designs exploit multi-scale features to uncover subtle cues between candidate objects and the background while emphasizing camouflaged object boundaries. Moreover, recognizing the rich contextual information in fused features, we introduce a Dual-Branch Global Context Module (DGCM) to refine features using extensive global context, thereby generating more informative representations. Experimental results on four benchmark datasets demonstrate that EGMFNet outperforms state-of-the-art methods across five evaluation metrics. Specifically, on COD10K, our EGMFNet-P improves F_β by 4.8 points and reduces mean absolute error (MAE) by 0.006 compared with ZoomNeXt; on NC4K, it achieves a 3.6-point increase in F_β; on CAMO and CHAMELEON, it obtains 4.5-point increases in F_β. These consistent gains substantiate the superiority and robustness of EGMFNet.
Keywords: Camouflaged object detection; multi-scale feature fusion; edge-guided; image segmentation
7. Self-supervised pre-training based hybrid network for deep gray matter nuclei segmentation
Authors: Yang Deng, Jiaxiu Xi, Zhong Chen, Lijun Bao. Magnetic Resonance Letters, 2026, No. 1, pp. 53-65 (13 pages)
The accurate segmentation of deep gray matter nuclei is critical for neuropathological research, disease diagnosis and treatment. Existing methods employ a supervised training approach, which requires large labeled datasets that are challenging and time-consuming to obtain for medical image analysis. In addition, methods based on convolutional neural networks (CNNs) achieve only suboptimal performance due to the locality of convolutional operations. Vision Transformers (ViTs) efficiently model long-range dependencies and thus have the potential to outperform these methods in segmentation tasks. To address these issues, we propose a novel hybrid network based on self-supervised pre-training for deep gray matter nuclei segmentation. Specifically, we present a CNN-Transformer hybrid network (CTNet), whose encoder consists of a 3D CNN and a ViT to learn local spatial-detailed features and global semantic information. A self-supervised learning (SSL) approach that integrates rotation prediction and masked feature reconstruction is proposed to pre-train the CTNet, enabling the model to learn valuable visual representations from unlabeled data. We evaluate the effectiveness of our method on 3T and 7T human brain MRI datasets. The results demonstrate that our CTNet achieves better performance than other comparison models and that our pre-training strategy outperforms other advanced self-supervised methods. When the training set has only one sample, our pre-trained CTNet enhances segmentation performance, showing an 8.4% improvement in Dice similarity coefficient (DSC) compared to the randomly initialized CTNet.
Keywords: Deep gray matter nuclei segmentation; self-supervised learning; rotation prediction; masked feature reconstruction; Transformer
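Like several other entries here, the abstract above reports the Dice similarity coefficient (DSC). For reference, a minimal sketch of DSC for binary masks (the generic definition, not the authors' implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks.
    eps keeps the ratio defined when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

DSC is 1.0 for identical masks and 0.0 for disjoint ones; it weights the intersection twice, which makes it more forgiving of small masks than IoU.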
8. MSC-DeepLabV3+: A Segmentation Model for Slender Fabric Roll Seam Detection
Authors: Weimin Shi, Kuntao Lv, Chang Xuan, Ji Wu. Computers, Materials & Continua, 2026, No. 5, pp. 480-498 (19 pages)
The application of deep learning in fabric defect detection has become increasingly widespread. To address false positives and false negatives in fabric roll seam detection, and to improve automation efficiency and product quality, we propose Multi-scale Context DeepLabV3+ (MSC-DeepLabV3+), a semantic segmentation network designed for fabric roll seam detection based on DeepLabV3+. The model improvements include enhancing the backbone performance through optimization of the UIB-MobileNetV2 network; designing the Dynamic Atrous and Sliding-window Fusion (DASF) module to improve adaptability to multi-scale seam structures with dynamic dilation rates and a sliding-window mechanism; and utilizing the Progressive Low-level Feature Fusion (PLFF) module to progressively restore seam boundary details via shallow feature fusion. Additionally, an enhanced 3-SE attention mechanism is employed, replacing the direct concatenation operation. Experimental results show that MSC-DeepLabV3+ outperforms classical and recent segmentation models. Compared to DeepLabV3+ with an Xception backbone, MSC-DeepLabV3+ achieves a mean intersection over union (mIoU) of 92.30% and a boundary F-score (BF) of 92.54%, representing improvements of 3.04% and 3.14%, respectively. Moreover, the model complexity is significantly reduced, with the model parameters (params) decreasing to 3.44 M and frames per second (FPS) increasing from 101 to 273, demonstrating its potential for deployment in resource-constrained industrial scenarios.
Keywords: Fabric roll seam detection; semantic segmentation; deep learning; lightweight network; multi-scale feature extraction; improved attention mechanism
9. Augmented Deep Multi-Granularity Pose-Aware Feature Fusion Network for Visible-Infrared Person Re-Identification (cited by 3)
Authors: Zheng Shi, Wanru Song, Junhao Shan, Feng Liu. Computers, Materials & Continua (SCIE, EI), 2023, No. 12, pp. 3467-3488 (22 pages)
Visible-infrared Cross-modality Person Re-identification (VI-ReID) is a critical technology in smart public facilities such as cities, campuses and libraries. It aims to match pedestrians in visible light and infrared images for video surveillance, which poses a challenge in exploring cross-modal shared information accurately and efficiently. Therefore, multi-granularity feature learning methods have been applied in VI-ReID to extract potential multi-granularity semantic information related to pedestrian body structure attributes. However, existing research mainly uses traditional dual-stream fusion networks and overlooks the core of cross-modal learning networks: the fusion module. This paper introduces a novel network called the Augmented Deep Multi-Granularity Pose-Aware Feature Fusion Network (ADMPFF-Net), incorporating the Multi-Granularity Pose-Aware Feature Fusion (MPFF) module to generate discriminative representations. MPFF efficiently explores and learns global and local features with multi-level semantic information by inserting disentangling and duplicating blocks into the fusion module of the backbone network. ADMPFF-Net also provides a new perspective for designing multi-granularity learning networks. By incorporating the multi-granularity feature disentanglement (mGFD) and posture information segmentation (pIS) strategies, it extracts more representative features concerning body structure information. The Local Information Enhancement (LIE) module augments high-performance features in VI-ReID, and the multi-granularity joint loss supervises model training for objective feature learning. Experimental results on two public datasets show that ADMPFF-Net efficiently constructs pedestrian feature representations and enhances the accuracy of VI-ReID.
Keywords: Visible-infrared person re-identification; multi-granularity feature learning; modality
10. CFM-UNet: A Joint CNN and Transformer Network via Cross Feature Modulation for Remote Sensing Images Segmentation (cited by 8)
Authors: Min WANG, Peidong WANG. Journal of Geodesy and Geoinformation Science (CSCD), 2023, No. 4, pp. 40-47 (8 pages)
Semantic segmentation methods based on CNNs have made great progress, but there are still shortcomings in their application to remote sensing image segmentation; for example, small receptive fields cannot effectively capture global context. To solve this problem, this paper proposes a hybrid model based on ResNet50 and the Swin Transformer to directly capture long-range dependencies, fusing features through a Cross Feature Modulation Module (CFMM). Experimental results on two publicly available datasets, Vaihingen and Potsdam, reach mIoU of 70.27% and 76.63%, respectively. Thus, CFM-UNet maintains high segmentation performance compared with other competitive networks.
Keywords: remote sensing images; semantic segmentation; Swin Transformer; feature modulation module
11. Boosting Whale Optimizer with Quasi-Oppositional Learning and Gaussian Barebone for Feature Selection and COVID-19 Image Segmentation (cited by 4)
Authors: Jie Xing, Hanli Zhao, Huiling Chen, Ruoxi Deng, Lei Xiao. Journal of Bionic Engineering (SCIE, EI, CSCD), 2023, No. 2, pp. 797-818 (22 pages)
The whale optimization algorithm (WOA) tends to fall into local optima and fails to converge quickly when solving complex problems. To address these shortcomings, an improved WOA (QGBWOA) is proposed in this work. First, quasi-opposition-based learning is introduced to enhance the ability of WOA to search for optimal solutions. Second, a Gaussian barebone mechanism is embedded to promote diversity and expand the scope of the solution space. To verify the advantages of QGBWOA, comparison experiments against its peers were carried out on CEC 2014 with dimensions 10, 30, 50, and 100 and on the CEC 2020 test suite with dimension 30. Furthermore, the performance results were analyzed using the Wilcoxon signed-rank (WS) test, the Friedman test, and post hoc statistical tests. Convergence accuracy and speed are remarkably improved, as shown by the experimental results. Finally, feature selection and multi-threshold image segmentation applications are demonstrated to validate the ability of QGBWOA to solve complex real-world problems. QGBWOA proves its superiority over the compared algorithms in feature selection and multi-threshold image segmentation across several evaluation metrics.
Keywords: Whale optimization algorithm; quasi-opposition-based learning; Gaussian barebone; image segmentation; feature selection; bionic algorithm
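The QGBWOA entry above introduces quasi-opposition-based learning (QOBL) into WOA. A common QOBL formulation samples each candidate coordinate uniformly between the search-space centre and the opposite point; the sketch below illustrates that general idea (an assumption about the standard technique, not necessarily the paper's exact variant):

```python
import numpy as np

def quasi_opposite(x, lb, ub, rng):
    """Quasi-opposite candidate of solution x within bounds [lb, ub]:
    each coordinate is drawn uniformly between the search-space centre
    and the opposite point lb + ub - x (a common QOBL formulation)."""
    centre = (lb + ub) / 2.0
    opposite = lb + ub - x
    lo = np.minimum(centre, opposite)
    hi = np.maximum(centre, opposite)
    return lo + rng.random(np.shape(x)) * (hi - lo)
```

In an improved optimizer, the quasi-opposite candidate of each solution is typically evaluated and retained only if it improves fitness, enlarging the explored region at little extra cost.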
12. AF-Net: A Medical Image Segmentation Network Based on Attention Mechanism and Feature Fusion (cited by 4)
Authors: Guimin Hou, Jiaohua Qin, Xuyu Xiang, Yun Tan, Neal N. Xiong. Computers, Materials & Continua (SCIE, EI), 2021, No. 11, pp. 1877-1891 (15 pages)
Medical image segmentation is an important application field of computer vision in medical image processing. Due to the close location and high similarity of different organs in medical images, current segmentation algorithms suffer from mis-segmentation and poor edge segmentation. To address these challenges, we propose a medical image segmentation network (AF-Net) based on attention mechanisms and feature fusion, which can effectively capture global information while focusing the network on the object area. In this approach, we add dual attention blocks (DA-block) to the backbone network, comprising parallel channel and spatial attention branches, to adaptively calibrate and weigh features. Secondly, the multi-scale feature fusion block (MFF-block) is proposed to obtain feature maps of different receptive domains and get multi-scale information with less computational consumption. Finally, to restore the locations and shapes of organs, we adopt the global feature fusion blocks (GFF-block) to fuse high-level and low-level information, which can obtain accurate pixel positioning. We evaluate our method on multiple datasets (the aorta and lungs datasets), and the experimental results achieve 94.0% mIoU and 96.3% Dice, showing that our approach performs better than U-Net and other state-of-the-art methods.
Keywords: Deep learning; medical image segmentation; feature fusion; attention mechanism
13. A Feature Selection Strategy to Optimize Retinal Vasculature Segmentation (cited by 3)
Authors: Jose Escorcia-Gutierrez, Jordina Torrents-Barrena, Margarita Gamarra, Natasha Madera, Pedro Romero-Aroca, Aida Valls, Domenec Puig. Computers, Materials & Continua (SCIE, EI), 2022, No. 2, pp. 2971-2989 (19 pages)
Diabetic retinopathy (DR) is a complication of diabetes mellitus that appears in the retina. Clinicians use retina images to detect DR pathological signs related to the occlusion of tiny blood vessels. Such occlusion brings a degenerative cycle between the breaking off and the new generation of thinner and weaker blood vessels. This research aims to develop a suitable retinal vasculature segmentation method for improving retinal screening procedures by means of computer-aided diagnosis systems. The blood vessel segmentation methodology relies on an effective feature selection based on Sequential Forward Selection, using the error rate of a decision tree classifier in the evaluation function. Subsequently, the classification process is performed by three alternative approaches: artificial neural networks, decision trees and support vector machines. The proposed methodology is validated on three publicly accessible datasets and a private one provided by Hospital Sant Joan of Reus. In all cases we obtain an average accuracy above 96% with a sensitivity of 72% in the blood vessel segmentation process. Compared with the state-of-the-art, our approach achieves the same performance as other methods that need more computational power. Our method significantly reduces the number of features used in the segmentation process from 20 to 5 dimensions. The implementation of the three classifiers confirmed that the five selected features have good effectiveness, independently of the classification algorithm.
Keywords: Diabetic retinopathy; artificial neural networks; decision trees; support vector machines; feature selection; retinal vasculature segmentation
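The entry above builds its vessel segmentation on Sequential Forward Selection (SFS), scoring candidate feature subsets by a classifier's error rate. The sketch below shows the greedy SFS loop, with a nearest-centroid training error as a stand-in criterion (the classifier choice here is an assumption for brevity; the paper uses a decision tree's error rate):

```python
import numpy as np

def nearest_centroid_error(X, y):
    """Training error of a nearest-centroid classifier on feature matrix X.
    Stand-in criterion; any classifier's error rate can be plugged in."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return np.mean(classes[d.argmin(axis=1)] != y)

def sequential_forward_selection(X, y, k):
    """Greedily add the feature column that most reduces the criterion,
    until k features are selected."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k:
        errs = [(nearest_centroid_error(X[:, selected + [j]], y), j) for j in remaining]
        _, best = min(errs)
        selected.append(best)
        remaining.remove(best)
    return selected
```

Because each step re-scores every remaining feature in the context of those already chosen, SFS can capture simple feature interactions that univariate ranking misses, at the cost of O(k·n_features) classifier evaluations.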
14. Integrating Audio-Visual Features and Text Information for Story Segmentation of News Video (cited by 1)
Authors: Liu Hua-yong, Zhou Dong-ru (School of Computer, Wuhan University, Wuhan 430072, Hubei, China). Wuhan University Journal of Natural Sciences (CAS), 2003, No. 04A, pp. 1070-1074 (5 pages)
Video data are composed of multimodal information streams including visual, auditory and textual streams, so an approach to story segmentation for news video using multimodal analysis is described in this paper. The proposed approach detects topic-caption frames and integrates them with silence clip detection results, as well as shot segmentation results, to locate news story boundaries. The integration of audio-visual features and text information overcomes the weakness of approaches using only image analysis techniques. On test data with 135,400 frames, an accuracy of 85.8% and a recall of 97.5% are obtained when detecting the boundaries between news stories. The experimental results show the approach is valid and robust.
Keywords: news video; story segmentation; audio-visual feature analysis; text detection
15. Guided-YNet: Saliency Feature-Guided Interactive Feature Enhancement Lung Tumor Segmentation Network (cited by 1)
Authors: Tao Zhou, Yunfeng Pan, Huiling Lu, Pei Dang, Yujie Guo, Yaxing Wang. Computers, Materials & Continua (SCIE, EI), 2024, No. 9, pp. 4813-4832 (20 pages)
Multimodal lung tumor medical images, such as Positron Emission Computed Tomography (PET), Computed Tomography (CT), and PET-CT, can provide anatomical and functional information for the same lesion. How to utilize this anatomical and functional information effectively and improve segmentation performance are key questions. To solve this problem, the Saliency Feature-Guided Interactive Feature Enhancement Lung Tumor Segmentation Network (Guided-YNet) is proposed in this paper. Firstly, a double-encoder single-decoder U-Net is used as the backbone of the model; a single-encoder single-decoder U-Net generates the saliency-guided feature from the PET image and transmits it into the skip connections of the backbone, using the high sensitivity of PET images to tumors to guide the network to accurately locate lesions. Secondly, a Cross-Scale Feature Enhancement Module (CSFEM) is designed to extract multi-scale fusion features after downsampling. Thirdly, a Cross-Layer Interactive Feature Enhancement Module (CIFEM) is designed in the encoder to enhance spatial position information and semantic information. Finally, a Cross-Dimension Cross-Layer Feature Enhancement Module (CCFEM) is proposed in the decoder, which effectively extracts multimodal image features through global attention and multi-dimensional local attention. The proposed method is verified on lung multimodal medical image datasets, and the reported Mean Intersection over Union (MIoU), Accuracy (Acc), Dice Similarity Coefficient (Dice), Volumetric Overlap Error (VOE), and Relative Volume Difference (RVD) on lung lesion segmentation are 87.27%, 93.08%, 97.77%, 95.92%, 89.28%, and 88.68%, respectively. It is of great significance for computer-aided diagnosis.
Keywords: Medical image segmentation; U-Net; saliency feature guidance; cross-modal feature enhancement; cross-dimension feature enhancement
16. Improvement of Liver Segmentation by Combining High Order Statistical Texture Features with Anatomical Structural Features (cited by 2)
Authors: Suhuai Luo, Xuechen Li, Jiaming Li. Engineering, 2013, No. 5, pp. 67-72 (6 pages)
Automatic segmentation of the liver in medical images is challenging in terms of accuracy, automation and robustness. A crucial stage of liver segmentation is the selection of image features for the segmentation. This paper presents an accurate liver segmentation algorithm. The approach starts with a texture analysis that results in an optimal set of texture features, including high-order statistical texture features and anatomical structural features. It then creates a liver distribution image by classifying the original image pixel-wise using support vector machines. Lastly, it uses a group of morphological operations to locate the liver organ accurately in the image. The novelty of the approach resides in the fact that the features are selected so that both local and global texture distributions are considered, which is important in liver segmentation, where neighbouring tissues and organs have similar greyscale distributions. Experimental results of liver segmentation on CT images using the proposed method are presented with performance validation and discussion.
Keywords: liver segmentation; texture feature; support vector machine; morphological operation
17. Novel Facial Features Segmentation Algorithm
Authors: 姜微, 沈庭芝, 王晓华, 张健. Journal of Beijing Institute of Technology (EI, CAS), 2008, No. 4, pp. 478-483 (6 pages)
An efficient algorithm for facial feature extraction is proposed. The facial features segmented are the two eyes, nose and mouth. The algorithm is based on an improved Gabor wavelet edge detector, a morphological approach to detect the face region and facial feature regions, and an improved T-shape face mask to locate the exact location of facial features. The experimental results show that the proposed method is robust against facial expression and illumination, and remains effective when the person is wearing glasses.
Keywords: facial feature segmentation; Gabor wavelets; morphological approach; T-shape mask
18. Building Facade Point Clouds Segmentation Based on Optimal Dual-Scale Feature Descriptors (cited by 1)
Authors: Zijian Zhang, Jicang Wu. Journal of Computer and Communications, 2024, No. 6, pp. 226-245 (20 pages)
To address the current issues of inaccurate segmentation and the limited applicability of segmentation methods for building facades in point clouds, we propose a facade segmentation algorithm based on optimal dual-scale feature descriptors. First, we select the optimal dual-scale descriptors from a range of feature descriptors. Next, we segment the facade according to the threshold value of the chosen optimal dual-scale descriptors. Finally, we use RANSAC (Random Sample Consensus) to fit the segmented surface and optimize the fitting result. Experimental results show that, compared to commonly used facade segmentation algorithms, the proposed method yields more accurate segmentation results, providing a robust data foundation for subsequent 3D model reconstruction of buildings.
Keywords: 3D laser scanning; point clouds; building facade segmentation; point cloud processing; feature descriptors
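The facade pipeline above finishes by fitting the segmented surface with RANSAC. A minimal RANSAC plane fit in NumPy (a textbook formulation; the distance threshold and iteration count are illustrative, not the paper's settings):

```python
import numpy as np

def ransac_plane(pts, n_iter=200, tol=0.05, seed=0):
    """Fit a plane n·p + d = 0 to 3-D points with RANSAC.
    Returns (unit normal, offset d, boolean inlier mask)."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_mask, best_count = None, None, None, -1
    for _ in range(n_iter):
        i, j, k = rng.choice(len(pts), size=3, replace=False)
        n = np.cross(pts[j] - pts[i], pts[k] - pts[i])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        n = n / norm
        d = -float(n @ pts[i])
        mask = np.abs(pts @ n + d) < tol      # point-to-plane distances
        if mask.sum() > best_count:
            best_n, best_d, best_mask, best_count = n, d, mask, int(mask.sum())
    return best_n, best_d, best_mask
```

Once the largest consensus set is found, the plane is commonly re-estimated from all inliers by least squares (e.g. via SVD) to refine the fit, which matches the "optimize the fitting result" step described in the abstract.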
19. A Method for Head-shoulder Segmentation and Human Facial Feature Positioning (cited by 1)
Authors: Hu Tianjian, Cai Dejun. 通信学报 (Journal on Communications) (EI, CSCD, PKU Core), 1998, No. 5, pp. 28-33
A Method for Head-shoulder Segmentation and Human Facial Feature Positioning. Hu Tianjian, Cai Dejun, Department of Electrical and Information Engi...
Keywords: model adaptation; edge detection; image coding; head-shoulder segmentation; human facial feature positioning
20. Hybrid Segmentation Scheme for Skin Features Extraction Using Dermoscopy Images
Authors: Jehyeok Rew, Hyungjoon Kim, Eenjun Hwang. Computers, Materials & Continua (SCIE, EI), 2021, No. 10, pp. 801-817 (17 pages)
Objective and quantitative assessment of skin conditions is essential for cosmeceutical studies and research on skin aging and skin regeneration. Various handcraft-based image processing methods have been proposed to evaluate skin conditions objectively, but they have unavoidable disadvantages when used to analyze skin features accurately. This study proposes a hybrid segmentation scheme consisting of DeepLab v3+ with an Inception-ResNet-v2 backbone, LightGBM, and morphological processing (MP) to overcome the shortcomings of handcraft-based approaches. First, we apply DeepLab v3+ with an Inception-ResNet-v2 backbone for pixel segmentation of skin wrinkles and cells. Then, LightGBM and MP are used to enhance the pixel segmentation quality. Finally, we determine several skin features based on the results of wrinkle and cell segmentation. Our proposed segmentation scheme achieves a mean accuracy of 0.854, a mean intersection over union of 0.749, and a mean boundary F1 score of 0.852, improvements of 1.1%, 6.7%, and 14.8%, respectively, over the panoptic-based semantic segmentation method.
Keywords: Image segmentation; skin texture; feature extraction; dermoscopy image