
Buildings Segmentation of Remote Sensing Images Based on VGG16 Pre-encoding

Cited by: 29
Abstract: Deep convolutional neural networks have opened a new field in the semantic segmentation of remote sensing images. An improved U-net model is proposed to extract building areas at the pixel level, so that building contour and size information can be obtained. The VGG16 network, chosen for its strong transferability, is used as the encoder of the U-net; multi-scale, high-level semantic information is extracted by a cascaded parallel module based on atrous convolution, and segmentation details are gradually restored through upsampling with transpose convolutions. A weighted combination of Jaccard loss and binary cross-entropy loss is used as the total loss function. Experimental results show that the improved U-net model achieves higher accuracy for building segmentation and extraction in remote sensing images: the mean pixel accuracy (MPA), mean intersection over union (MIoU), and F1 score are 92.16%, 78.55%, and 84.81%, respectively. The F1 score of the improved model is 4.8% higher than that of the DeepLabv3+ model and 8.3% higher than that of the standard U-net model.
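The weighted Jaccard + binary cross-entropy total loss described in the abstract can be sketched in plain Python. The abstract does not specify the weighting, so `alpha` below is a hypothetical parameter; the "soft" Jaccard formulation over per-pixel probabilities is likewise one common choice, not necessarily the paper's exact variant.

```python
import math

def jaccard_loss(y_true, y_pred, eps=1e-7):
    # Soft Jaccard (IoU) loss over flattened per-pixel probabilities:
    # 1 - |intersection| / |union|, with eps to avoid division by zero.
    inter = sum(t * p for t, p in zip(y_true, y_pred))
    union = sum(t + p - t * p for t, p in zip(y_true, y_pred))
    return 1.0 - (inter + eps) / (union + eps)

def bce_loss(y_true, y_pred, eps=1e-7):
    # Mean binary cross-entropy over per-pixel probabilities.
    n = len(y_true)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / n

def total_loss(y_true, y_pred, alpha=0.5):
    # Weighted combination; alpha is a hypothetical weight
    # (the abstract does not report the value used).
    return alpha * jaccard_loss(y_true, y_pred) + (1 - alpha) * bce_loss(y_true, y_pred)
```

Combining a region-overlap loss (Jaccard) with a per-pixel loss (BCE) is a common way to balance boundary accuracy against class imbalance in building-footprint masks.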
Authors: XU Zhao-hong, LIU Yu, QUAN Ji-cheng, WU Chen (Laboratory of Digital Earth Science, Aviation University of Air Force, Changchun 130022, China)
Source: Science Technology and Engineering (PKU Core Journal), 2019, No. 17, pp. 250-255
Keywords: remote sensing images; semantic segmentation; U-net; building segmentation; Jaccard index
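The cascaded parallel atrous-convolution module mentioned in the abstract can be illustrated with a minimal 1-D sketch. The dilation rates `(1, 2, 4)` and the summation-based fusion below are illustrative assumptions; the paper's actual module operates on 2-D feature maps with its own rates and fusion scheme.

```python
def dilated_conv1d(signal, kernel, dilation):
    # 'Valid' 1-D convolution with kernel taps spaced `dilation` apart,
    # enlarging the receptive field without adding parameters.
    k = len(kernel)
    span = (k - 1) * dilation + 1
    return [sum(kernel[j] * signal[i + j * dilation] for j in range(k))
            for i in range(len(signal) - span + 1)]

def cascaded_parallel_atrous(signal, kernel, rates=(1, 2, 4)):
    # Parallel branches at different dilation rates capture context at
    # multiple scales; outputs are cropped to a common length and summed
    # (summation fusion is an assumption for this sketch).
    branches = [dilated_conv1d(signal, kernel, r) for r in rates]
    n = min(len(b) for b in branches)
    return [sum(b[i] for b in branches) for i in range(n)]
```

Because each branch sees the input at a different effective receptive field, the fused output mixes fine local detail with wider spatial context, which is the motivation for using such a module between the encoder and decoder.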