Semantic segmentation of remote sensing images is a critical research area in the field of remote sensing. Despite the success of Convolutional Neural Networks (CNNs), they often fail to capture inter-layer feature relationships and fully leverage contextual information, leading to the loss of important details. Additionally, due to significant intra-class variation and small inter-class differences in remote sensing images, CNNs may suffer from class confusion. To address these issues, we propose a novel Category-Guided Feature Collaborative Learning Network (CG-FCLNet), which enables fine-grained feature extraction and adaptive fusion. Specifically, we design a Feature Collaborative Learning Module (FCLM) to facilitate tight interaction among multi-scale features. We also introduce a Scale-Aware Fusion Module (SAFM), which iteratively fuses features from different layers using a spatial attention mechanism, enabling deeper feature fusion. Furthermore, we design a Category-Guided Module (CGM) to extract category-aware information that guides feature fusion, ensuring that the fused features more accurately reflect the semantic information of each category and thereby improving detailed segmentation. Experimental results show that CG-FCLNet achieves a Mean Intersection over Union (mIoU) of 83.46%, an mF1 of 90.87%, and an Overall Accuracy (OA) of 91.34% on the Vaihingen dataset. On the Potsdam dataset, it achieves an mIoU of 86.54%, an mF1 of 92.65%, and an OA of 91.29%. These results highlight the superior performance of CG-FCLNet compared to existing state-of-the-art methods.
Funding: This work was funded by the National Natural Science Foundation of China (61603245).
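The abstract does not include code, but the SAFM description (iterative fusion of features from different layers via a spatial attention mechanism) suggests a familiar pattern. Below is a minimal PyTorch sketch of that pattern only; the class name SpatialAttentionFusion, the channel sizes, and the exact fusion rule are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of spatial-attention-guided multi-scale fusion in the
# spirit of the described SAFM. Module names, channel sizes, and the fusion
# rule are assumptions; the paper's actual design may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttentionFusion(nn.Module):
    """Fuses a deeper (coarser) feature map into a shallower (finer) one,
    weighted by a spatial attention map computed from both inputs."""
    def __init__(self, channels: int):
        super().__init__()
        # 7x7 conv over channel-pooled statistics: a common spatial-attention
        # recipe (as in CBAM); the actual SAFM may compute attention differently.
        self.attn = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3, bias=False),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        # Upsample the coarser map to the finer spatial resolution.
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        fused = low + high
        # Spatial attention from channel-wise average and max statistics.
        avg = fused.mean(dim=1, keepdim=True)
        mx, _ = fused.max(dim=1, keepdim=True)
        weight = self.attn(torch.cat([avg, mx], dim=1))  # (N, 1, H, W) in [0, 1]
        # Attention-weighted blend of the two scales.
        return self.proj(weight * low + (1.0 - weight) * high)

# Iterative fusion across a toy feature pyramid, deepest to shallowest.
feats = [torch.randn(1, 64, s, s) for s in (64, 32, 16, 8)]
fuse = SpatialAttentionFusion(64)
out = feats[-1]
for f in reversed(feats[:-1]):
    out = fuse(f, out)
print(out.shape)  # torch.Size([1, 64, 64, 64])
```

The loop mirrors the "iteratively fuses features from different layers" phrasing: each step injects the accumulated coarse context into the next finer layer under a learned spatial weight, rather than summing all scales at once.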