Journal Articles
2 articles found
1. Fish Density Estimation with Multi-Scale Context Enhanced Convolutional Neural Network (Cited by: 3)
Authors: Yizhi Zhou, Hong Yu, Junfeng Wu, Zhen Cui, Hongshuai Pang, Fangyan Zhang. Journal of Communications and Information Networks, CSCD, 2019, No. 3, pp. 80-88 (9 pages)
With the development of the fishery industry, accurate estimation of the number of fish in aquaculture waters is of great importance to fish behavior analysis, bait feeding, and fishery resource investigation. In this paper, we propose a method for fish density estimation based on a multi-scale context enhanced convolutional network, which maps a fish school image taken at any angle to a density map and then calculates the number of fish in the image. To eliminate the influence of camera perspective and image resolution on density estimation, multi-scale filters are used in the convolutional neural network to process the fish image in parallel. A context enhancement module is then merged into the network structure to help the network understand the global context of the image. Finally, the different feature maps are merged to construct the density map of the fish school image, from which the number of fish is obtained. To validate the effectiveness of our method, we test it on DlouDataset. The results show that the proposed method achieves lower mean square error and mean absolute error, which helps improve the accuracy of fish counting in dense fish school images.
Keywords: fish counting; density estimation; neural network; context enhancement module
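The abstract above describes the architecture only in prose. The following is a minimal PyTorch sketch of the general idea it names: parallel multi-scale filters, a global-context reweighting step, and a single-channel density map whose sum gives the count. The layer sizes, kernel choices, and module names here are assumptions for illustration, not the paper's actual configuration.

# Hypothetical sketch of a multi-scale, context-enhanced density-estimation CNN.
# Only the overall idea follows the abstract; all hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleBlock(nn.Module):
    """Process the same feature map with filters of several kernel sizes in parallel."""
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, k, padding=k // 2) for k in kernel_sizes]
        )

    def forward(self, x):
        # Concatenate the per-scale responses along the channel axis.
        return torch.cat([F.relu(b(x)) for b in self.branches], dim=1)

class ContextEnhancement(nn.Module):
    """Inject global context by reweighting features with a pooled descriptor."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        ctx = F.adaptive_avg_pool2d(x, 1)         # global descriptor, shape (N, C, 1, 1)
        return x * torch.sigmoid(self.fc(ctx))    # broadcast the context back onto the map

class FishDensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.multi_scale = MultiScaleBlock(32, 32)   # three branches -> 96 channels
        self.context = ContextEnhancement(96)
        self.head = nn.Conv2d(96, 1, 1)              # single-channel density map

    def forward(self, image):
        feats = self.context(self.multi_scale(self.stem(image)))
        density = F.relu(self.head(feats))
        count = density.sum(dim=(1, 2, 3))           # estimated number of fish per image
        return density, count

if __name__ == "__main__":
    net = FishDensityNet()
    dmap, n_fish = net(torch.randn(1, 3, 256, 256))
    print(dmap.shape, n_fish)

Because the head predicts a non-negative one-channel map, integrating (summing) the density map directly yields the count estimate, which is the standard counting-by-density-estimation setup the abstract refers to.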
2. Global video object segmentation with spatial constraint module
Authors: Yadang Chen, Duolin Wang, Zhiguo Chen, Zhi-Xin Yang, Enhua Wu. Computational Visual Media, SCIE EI CSCD, 2023, No. 2, pp. 385-400 (16 pages)
We present a lightweight and efficient semi-supervised video object segmentation network based on the space-time memory framework. To some extent, our method addresses the two difficulties encountered in traditional video object segmentation: the per-frame computation time is too long, and the segmentation of the current frame should make better use of information from past frames. The algorithm uses a global context (GC) module to achieve high-performance, real-time segmentation. The GC module can effectively integrate multi-frame image information without increasing memory and can process each frame in real time. Moreover, the prediction mask of the previous frame is helpful for segmenting the current frame, so we feed it into a spatial constraint module (SCM), which constrains the regions of the segments in the current frame. The SCM effectively alleviates mismatching of similar targets while consuming few additional resources. We also add a refinement module to the decoder to improve boundary segmentation. Our model achieves state-of-the-art results on various datasets, scoring 80.1% on YouTube-VOS 2018 and a J&F score of 78.0% on DAVIS 2017, while taking 0.05 s per frame on the DAVIS 2016 validation dataset.
Keywords: video object segmentation; semantic segmentation; global context (GC) module; spatial constraint
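As with the first entry, the abstract names its two key components, the GC module and the SCM, only at a high level. Below is a hypothetical PyTorch sketch of those two ideas: a fixed-size global-context memory that summarizes past frames without growing, and a spatial constraint that restricts the current frame's logits to a dilated neighborhood of the previous mask. Feature shapes, the accumulation rule, and the dilation width are assumptions, not the authors' exact design.

# Hypothetical sketch of a global-context memory read/write and a spatial constraint step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalContextMemory(nn.Module):
    """Keep a fixed-size (C x C) summary of past frames, so memory does not grow per frame."""
    def __init__(self, channels):
        super().__init__()
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.register_buffer("context", torch.zeros(0))

    def write(self, feat):
        # Summarize one past frame into a (N, C, C) context matrix and accumulate it.
        k = self.key(feat).flatten(2)                          # (N, C, HW)
        v = self.value(feat).flatten(2)                        # (N, C, HW)
        ctx = torch.bmm(k, v.transpose(1, 2)) / k.shape[-1]    # (N, C, C)
        self.context = ctx if self.context.numel() == 0 else self.context + ctx

    def read(self, feat):
        # Re-project the accumulated context onto the current frame's features.
        q = self.key(feat).flatten(2)                          # (N, C, HW)
        out = torch.bmm(self.context.transpose(1, 2), q)       # (N, C, HW)
        return feat + out.view_as(feat)

def spatial_constraint(logits, prev_mask, dilation=15):
    """Suppress responses far from the previous frame's mask to reduce similar-object mismatch."""
    kernel = torch.ones(1, 1, dilation, dilation, device=logits.device)
    region = (F.conv2d(prev_mask, kernel, padding=dilation // 2) > 0)   # dilated previous mask
    return logits.masked_fill(~region, -1e4)                            # push far-away logits down

if __name__ == "__main__":
    mem = GlobalContextMemory(64)
    f_past, f_cur = torch.randn(1, 64, 30, 30), torch.randn(1, 64, 30, 30)
    mem.write(f_past)                                  # store a past frame's summary
    fused = mem.read(f_cur)                            # enrich the current frame's features
    logits = torch.randn(1, 1, 30, 30)
    prev_mask = (torch.rand(1, 1, 30, 30) > 0.5).float()
    print(fused.shape, spatial_constraint(logits, prev_mask).shape)

The point of the fixed-size summary is the abstract's claim of integrating multi-frame information "without increasing memory": however many frames are written, the stored state stays (N, C, C) rather than growing with the number of memory frames.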