Funding: supported in part by the National Key R&D Program of China (2017YFB0502904) and the National Science Foundation of China (61876140).
Abstract: Recently, video object segmentation has received great attention in the computer vision community. Most existing methods rely heavily on pixel-wise human annotations, which are expensive and time-consuming to obtain. To tackle this problem, we make an early attempt to achieve video object segmentation with scribble-level supervision, which can save large amounts of human labor in collecting manual annotations. However, conventional network architectures and learning objective functions do not work well under this scenario, as the supervision information is highly sparse and incomplete. To address this issue, this paper introduces two novel elements for learning the video object segmentation model. The first is a scribble attention module, which captures more accurate context information and learns an effective attention map to enhance the contrast between foreground and background. The second is a scribble-supervised loss, which can optimize the unlabeled pixels and dynamically correct inaccurately segmented areas during training. To evaluate the proposed method, we conduct experiments on two video object segmentation benchmark datasets, YouTube-VOS (video object segmentation) and DAVIS (densely annotated video segmentation) 2017. We first generate scribble annotations from the original per-pixel annotations, then train our model and compare its test performance with baseline models and other existing works. Extensive experiments demonstrate that the proposed method works effectively and approaches the performance of methods requiring dense per-pixel annotations.
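A common way to train a segmentation network from scribbles (not necessarily the exact loss used in this paper) is a partial cross-entropy that penalizes only the sparsely labeled pixels; a minimal NumPy sketch, with all names hypothetical:

```python
import numpy as np

def partial_cross_entropy(probs, labels, ignore_index=-1):
    """Mean cross-entropy over scribble-annotated pixels only.

    probs:  (H, W, C) softmax probabilities.
    labels: (H, W) int map; unlabeled pixels carry ignore_index.
    """
    mask = labels != ignore_index                 # labeled (scribble) pixels
    if not mask.any():
        return 0.0
    picked = probs[mask, labels[mask]]            # probability of the true class
    return float(-np.mean(np.log(picked + 1e-12)))

# Toy 2x2 frame: only the two left pixels carry scribble labels.
probs = np.array([[[0.9, 0.1], [0.5, 0.5]],
                  [[0.2, 0.8], [0.5, 0.5]]])
labels = np.array([[0, -1],
                   [1, -1]])
loss = partial_cross_entropy(probs, labels)   # -(log 0.9 + log 0.8) / 2
```

The unlabeled pixels are simply masked out of the loss; the paper's scribble-supervised loss additionally optimizes those pixels, which this sketch does not attempt.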
Abstract: While the development of particular video segmentation algorithms has attracted considerable research interest, relatively little effort has been devoted to providing a methodology for evaluating their performance. In this paper, we propose a methodology to objectively evaluate video segmentation algorithms against ground truth, based on computing the deviation of segmentation results from the reference segmentation. Four different metrics, based on pixel classification, edges, relative foreground area, and relative position respectively, are combined to assess spatial accuracy. Temporal coherency is evaluated using the difference in spatial accuracy between successive frames. The experimental results show the feasibility of our approach. Moreover, it is computationally more efficient than previous methods. It can be applied to provide an offline ranking of different segmentation algorithms and to optimally set the parameters of a given algorithm.
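The pixel-classification component of spatial accuracy, and the temporal-coherency idea (change of spatial accuracy between successive frames), can be sketched as follows; the paper's exact metric definitions and weighting may differ:

```python
import numpy as np

def spatial_accuracy(seg, ref):
    """Fraction of pixels agreeing with the reference mask."""
    return float(np.mean(seg == ref))

def temporal_coherency(accs):
    """Mean absolute change of spatial accuracy across successive frames."""
    accs = np.asarray(accs, dtype=float)
    return float(np.mean(np.abs(np.diff(accs))))

ref = np.array([[0, 1], [1, 1]])
seg = np.array([[0, 1], [0, 1]])          # one pixel wrong -> accuracy 0.75
acc = spatial_accuracy(seg, ref)
tc = temporal_coherency([1.0, 0.9, 0.95])  # |-0.1| and |0.05| -> mean 0.075
```

Lower temporal-coherency values indicate steadier per-frame quality, which is why the metric uses differences rather than the accuracies themselves.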
Abstract: With the development of the modern information society, more and more multimedia information is available, so multimedia processing has become an important task for scientists in the relevant areas. Among multimedia data, visual information is especially attractive due to its direct, vivid character, but at the same time the huge amount of video data poses many challenges for video storage, processing, and transmission.
Abstract: Segmentation of semantic Video Object Planes (VOPs) from a video sequence is key to the MPEG-4 standard with content-based video coding. In this paper, an approach for automatic Segmentation of VOPs Based on Spatio-Temporal Information (SBSTI) is proposed. The results demonstrate the good performance of the algorithm.
Funding: supported by the National Natural Science Foundation of China (No. 61872189).
Abstract: Current mainstream unsupervised video object segmentation (UVOS) approaches typically incorporate optical flow as motion information to locate the primary objects in coherent video frames. However, they fuse appearance and motion information without evaluating the quality of the optical flow. When poor-quality optical flow is used in the interaction with appearance information, it introduces significant noise and leads to a decline in overall performance. To alleviate this issue, we first employ a quality evaluation module (QEM) to evaluate the optical flow. Then, we select high-quality optical flow as motion cues to fuse with the appearance information, which prevents poor-quality optical flow from diverting the network's attention. Moreover, we design an appearance-guided fusion module (AGFM) to better integrate appearance and motion information. Extensive experiments on several widely used datasets, including DAVIS-16, FBMS-59, and YouTube-Objects, demonstrate that the proposed method outperforms existing methods.
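The QEM here is learned; a classical, non-learned proxy for optical-flow quality is forward-backward consistency, sketched below for intuition only (all names hypothetical, nearest-neighbour warping for simplicity):

```python
import numpy as np

def fb_consistency_error(fwd, bwd):
    """Mean forward-backward endpoint error of a flow pair.

    fwd, bwd: (H, W, 2) flows for frame t->t+1 and t+1->t, in (dx, dy) order.
    A pixel warped forward and then backward should land on itself;
    the residual magnitude measures flow quality (lower is better).
    """
    h, w = fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Destination of each pixel under the forward flow (nearest neighbour).
    xd = np.clip(np.round(xs + fwd[..., 0]).astype(int), 0, w - 1)
    yd = np.clip(np.round(ys + fwd[..., 1]).astype(int), 0, h - 1)
    round_trip = fwd + bwd[yd, xd]        # ~0 when the two flows agree
    return float(np.mean(np.linalg.norm(round_trip, axis=-1)))

# A perfectly consistent constant flow pair: forward (+1, 0), backward (-1, 0).
fwd = np.zeros((4, 4, 2)); fwd[..., 0] = 1.0
bwd = np.zeros((4, 4, 2)); bwd[..., 0] = -1.0
err = fb_consistency_error(fwd, bwd)      # 0.0 for this consistent pair
```

A large error at a pixel typically flags occlusion or estimation failure, which is exactly the kind of flow a quality-aware method would down-weight.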
Funding: supported by the Ministerial Level Advanced Research Foundation (10405033).
Abstract: To detect objects in video efficiently, an automatic, real-time video segmentation algorithm based on a background model and color clustering is proposed. The algorithm consists of four phases: background restoration, moving object extraction, moving object region clustering, and post-processing. The threshold for background restoration is not given in advance; it is obtained automatically. A new object region clustering algorithm based on the background model and color clustering is proposed to remove significant noise, and an efficient method for eliminating shadows is also used. The approach was compared with other methods on pixel error ratio, and the experimental results indicate that the algorithm is correct and efficient.
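The abstract does not detail how the threshold is derived; one common sketch of the background-restoration and extraction phases uses a temporal median background and a threshold computed from the difference-image statistics (the mean-plus-two-sigma rule below is a hypothetical choice, not the paper's):

```python
import numpy as np

def restore_background(frames):
    """Per-pixel temporal median over a frame stack (T, H, W)."""
    return np.median(frames, axis=0)

def extract_moving(frame, background):
    """Foreground mask with a threshold derived from the data itself."""
    diff = np.abs(frame.astype(float) - background)
    thr = diff.mean() + 2.0 * diff.std()   # automatic, no hand-tuned constant
    return diff > thr

# A static scene of value 10 with one bright moving pixel in the last frame.
frames = np.full((5, 4, 4), 10.0)
frames[-1, 1, 1] = 200.0
bg = restore_background(frames)            # the median ignores the outlier
mask = extract_moving(frames[-1], bg)      # only the moving pixel survives
```

The median makes background restoration robust to transient foreground, which is why no threshold has to be supplied in advance for that phase either.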
Funding: supported by the National Natural Science Foundation of China (Nos. 60772134, 60902081, 60902052), the 111 Project (No. B08038), and the Fundamental Research Funds for the Central Universities (No. 72105457).
Abstract: A novel moving object segmentation method is proposed in this paper. A modified three-dimensional recursive search (3DRS) algorithm is used to obtain motion information accurately. A motion feature descriptor (MFD) is designed to describe the motion feature of each block in a picture based on motion intensity, motion in occlusion areas, and motion correlation among neighbouring blocks. Then, a fuzzy C-means (FCM) clustering algorithm is applied to those MFDs to segment moving objects. Moreover, a new parameter named gathering degree is used to distinguish foreground moving objects from background motion. Experimental results demonstrate the effectiveness of the proposed method.
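The FCM step can be sketched as below; the feature rows stand in for the per-block MFDs (the descriptor contents here are hypothetical), and the standard alternating updates of centers and memberships are used:

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    """Minimal fuzzy C-means on feature rows X (n, d).

    Returns the membership matrix U (n, c) and the cluster centers (c, d).
    """
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)           # rows are soft memberships
    for _ in range(iters):
        W = U ** m                               # fuzzified memberships
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=-1) + 1e-9
        # Standard membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
    return U, centers

# Two well-separated groups of block descriptors (stand-ins for MFDs).
X = np.array([[0.0, 0.1], [0.1, 0.0], [5.0, 5.1], [5.1, 5.0]])
U, centers = fuzzy_c_means(X)
labels = U.argmax(axis=1)   # first two blocks cluster together, last two together
```

Soft memberships are what make FCM attractive for block-level motion: a block straddling an object boundary keeps partial membership in both clusters instead of being forced to one side.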
Abstract: This paper proposes a motion-based region-growing segmentation scheme for object-based video coding, which segments an image into homogeneous regions characterized by coherent motion. It adopts a block-matching algorithm to estimate motion vectors and uses morphological tools, such as open-close by reconstruction and the region-growing version of the watershed algorithm, for spatial segmentation to improve the temporal segmentation. To determine reliable motion vectors, this paper also proposes a change detection algorithm and a multi-candidate pre-screening motion estimation method. Preliminary simulation results demonstrate that the proposed scheme is feasible; its main advantage is its low computational load.
Funding: this work was supported by the National Natural Science Foundation of China (62176169, 61703077, and 62102207).
Abstract: Previous video object segmentation approaches mainly focus on simplex solutions linking appearance and motion, limiting effective feature collaboration between these two cues. In this work, we study a novel and efficient full-duplex strategy network (FSNet) to address this issue, considering a better mutual-restraint scheme linking motion and appearance that allows exploitation of cross-modal features from the fusion and decoding stages. Specifically, we introduce a relational cross-attention module (RCAM) to achieve bidirectional message propagation across embedding sub-spaces. To improve the model's robustness and update inconsistent features from the spatiotemporal embeddings, we adopt a bidirectional purification module after the RCAM. Extensive experiments on five popular benchmarks show that our FSNet is robust to various challenging scenarios (e.g., motion blur and occlusion) and compares well to leading methods for both video object segmentation and video salient object detection. The project is publicly available at https://github.com/GewelsJI/FSNet.
Funding: partially supported by the National Natural Science Foundation of China (Grant Nos. 61802197, 62072449, and 61632003), the Science and Technology Development Fund, Macao SAR (Grant Nos. 0018/2019/AKP and SKL-IOTSC(UM)-2021-2023), the Guangdong Science and Technology Department (Grant No. 2020B1515130001), and the University of Macao (Grant Nos. MYRG2020-00253-FST and MYRG2022-00059-FST).
Abstract: We present a lightweight and efficient semi-supervised video object segmentation network based on the space-time memory framework. To some extent, our method solves two difficulties encountered in traditional video object segmentation: the per-frame computation time is too long, and the current frame's segmentation should use more information from past frames. The algorithm uses a global context (GC) module to achieve high-performance, real-time segmentation. The GC module can effectively integrate multi-frame image information without increased memory and can process each frame in real time. Moreover, since the prediction mask of the previous frame is helpful for segmenting the current frame, we feed it into a spatial constraint module (SCM), which constrains the areas of segments in the current frame. The SCM effectively alleviates mismatching of similar targets yet consumes few additional resources. We also add a refinement module to the decoder to improve boundary segmentation. Our model achieves state-of-the-art results on various datasets, scoring 80.1% on YouTube-VOS 2018 and a J&F score of 78.0% on DAVIS 2017, while taking 0.05 s per frame on the DAVIS 2016 validation set.
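The readout underlying space-time memory models can be sketched as scaled dot-product attention from current-frame queries to memory keys and values; this is a generic sketch of that framework, not this paper's GC module, and all shapes and names are illustrative:

```python
import numpy as np

def memory_read(query, mem_keys, mem_vals):
    """Attention readout: each query pixel gathers value features
    from all memory pixels, weighted by key similarity.

    query:    (Nq, Ck) current-frame key features
    mem_keys: (Nm, Ck) memory key features
    mem_vals: (Nm, Cv) memory value features
    """
    logits = query @ mem_keys.T / np.sqrt(query.shape[1])
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)               # softmax over memory pixels
    return w @ mem_vals                             # (Nq, Cv)

# A query matching the first of two memory keys pulls mostly that key's value.
q = np.array([[10.0, 0.0]])
k = np.array([[10.0, 0.0], [0.0, 10.0]])
v = np.array([[1.0, 0.0], [0.0, 1.0]])
out = memory_read(q, k, v)   # heavily weighted toward v[0]
```

The per-frame cost of this readout grows with the number of memory pixels, which is exactly the bottleneck that lightweight variants such as the GC module aim to remove.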
Abstract: Existing unsupervised video object segmentation (UVOS) methods mostly adopt pixel-level dense matching strategies, improving model performance by aligning and fusing information across multiple frames or between a single frame and optical flow. However, in challenging scenarios such as occlusion, camera shake, and motion blur, optical-flow estimation errors easily produce large numbers of false matches, so the fused spatiotemporal representation tends to overfit motion noise. To address this, this paper proposes a motion-prompt-guided adaptive learning UVOS framework. An unsupervised optical-flow prompt generation algorithm converts the dense motion information encoded in optical flow into sparse point and box prompts, and prompt learning guides the Segment Anything Model (SAM) to learn adaptively through two lightweight adapters designed in this paper, yielding more robust spatiotemporal representations and stronger noise resistance. To obtain effective prompts, an unsupervised motion prompt generation algorithm is designed: it computes a series of statistics from optical-flow features to select salient regions, uses motion edge information to remove interference from pseudo-salient regions, and applies adaptive thresholding to generate point and box coordinates indicating the regions of salient moving objects. To improve SAM's generalization to the downstream UVOS task, an adaptive representation-learning SAM model is proposed: two lightweight feature adapters adaptively learn UVOS-relevant knowledge from SAM's general knowledge base to coarsely localize targets. To compensate for the weakness of SAM's pure-Transformer architecture in handling fine details, an appearance-focused refinement module is designed based on convolutional neural networks (CNNs). The localization attention map produced by SAM progressively guides the refinement, shifting the model's attention from global coarse localization to local refinement and finally producing more accurate segmentation masks. The method is fully validated on three mainstream datasets: DAVIS16 (DAVIS 2016), FBMS (Freiburg-Berkeley Motion Segmentation), and YTOBJ (YouTube-Objects). The results show that the method improves the region-similarity metric over current state-of-the-art methods by 1.8%, 1.6%, and 2.6% respectively, demonstrating its effectiveness.
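Turning a flow field into sparse point and box prompts can be sketched as follows; the statistic used here (flow magnitude thresholded at mean plus one standard deviation) is a hypothetical stand-in for the paper's statistics, and the motion-edge filtering step is omitted:

```python
import numpy as np

def motion_prompts(flow):
    """Derive one point and one box prompt from a flow field (H, W, 2).

    Thresholds the flow magnitude adaptively, then returns the centroid
    of the salient region and its bounding box as (x0, y0, x1, y1).
    Assumes at least one pixel exceeds the threshold.
    """
    mag = np.linalg.norm(flow, axis=-1)
    salient = mag > mag.mean() + mag.std()      # adaptive threshold
    ys, xs = np.nonzero(salient)
    point = (float(xs.mean()), float(ys.mean()))
    box = (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
    return point, box

# A static 8x8 scene with a moving 2x2 patch at rows/cols 3-4.
flow = np.zeros((8, 8, 2))
flow[3:5, 3:5, 0] = 4.0
point, box = motion_prompts(flow)   # point (3.5, 3.5), box (3, 3, 4, 4)
```

Point and box coordinates in this form are exactly what promptable segmenters such as SAM accept, which is what lets dense, noisy flow be reduced to a few robust cues.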
Funding: supported by the National Natural Science Foundation of China (Nos. 61702323 and 62172268), the Shanghai Municipal Natural Science Foundation, China (No. 20ZR1423100), the Open Fund of the Science and Technology on Thermal Energy and Power Laboratory (No. TPL2020C02), Wuhan 2nd Ship Design and Research Institute, Wuhan, China, the National Key Research and Development Program of China (No. 2018YFB1306303), and the Major Basic Research Projects of the Natural Science Foundation of Shandong Province, China (No. ZR2019ZD07).
Abstract: Moving object segmentation (MOS), which aims to segment moving objects from video frames, is an important and challenging task in computer vision with various applications. With the development of deep learning (DL), MOS has also entered the era of deep models for spatiotemporal feature learning. This paper provides an up-to-date review of DL-based MOS methods proposed during the past three years. Specifically, we present a categorization based on model characteristics, then compare and discuss each category from the perspectives of feature learning (FL) and of model training and evaluation. For FL, the reviewed methods are divided into three types: spatial FL, temporal FL, and spatiotemporal FL; these are then analyzed in terms of input and model architecture, and three input types and four typical preprocessing subnetworks are summarized. In terms of training, we discuss ideas for enhancing model transferability. In terms of evaluation, building on a previous categorization into scene-dependent and scene-independent evaluation, and considering whether the videos are recorded with static or moving cameras, we further define four subdivided evaluation setups and analyze the reviewed methods accordingly. We also present performance comparisons of some reviewed MOS methods and analyze their technical advantages and disadvantages. Finally, based on these comparisons and discussions, we present research prospects and future directions.
Abstract: To address problems of memory-based methods in semi-supervised video object segmentation (VOS), such as object occlusion caused by target interaction and interference from similar objects or noise in the background, a semi-supervised VOS method based on spatiotemporal decoupling and region-robustness enhancement is proposed. First, a structured Transformer architecture is constructed to remove the feature information shared by all pixels, highlight the differences between individual pixels, and mine the key features of targets in video frames. Second, the similarity between the current frame and long-term memory frames is decoupled into two key dimensions, spatiotemporal correlation and target importance, making the analysis of pixel-level spatiotemporal features and target features more precise and thereby resolving the object occlusion caused by target interaction. Finally, a region strip attention (RSA) module is designed, which uses target location information from long-term memory to enhance attention to foreground regions and suppress background noise. Experimental results show that, compared with a retrained AOT (Associating Objects with Transformers) model, the proposed method achieves a J&F score 1.7 percentage points higher on the DAVIS 2017 validation set and an overall score 1.6 percentage points higher on the YouTube-VOS 2019 validation set, demonstrating that it effectively addresses these problems in semi-supervised VOS.
Abstract: This paper proposes a motion-based region-growing segmentation scheme that incorporates luminance and motion information simultaneously and uses morphological tools such as open-close by reconstruction and the region-growing version of the watershed algorithm. The main advantage of this scheme is that the resultant objects are characterized by coherent motion and the moving object boundaries are precisely located. Simulation results demonstrate the efficiency of the proposed scheme.