Funding: This work was supported by the Zhejiang Provincial Department of Transport Science and Technology Plan Project "Research on Evaluation Technology of Traffic Flow Thunder Vision Fusion Perception System" (Grant No. 202209) and the Zhejiang Provincial Department of Science and Technology Public Welfare Project "Research on Vehicle Trajectory Data Quality Evaluation Technology Based on Radar-Vision Integrated Equipment" (Grant No. LGC22E080003).
Abstract: Referring image segmentation aims to segment the referent described by a natural-language expression. Owing to the distinct modality properties of image and language, it is challenging to effectively align token embeddings with visual regions. Unlike existing methods that coordinate linguistic cues with a specific visual region, we propose a novel referring image segmentation paradigm, language interprets vision (LIV), which aligns the visual and linguistic modalities at a dense, fine-grained level and fuses the multi-modal biases effectively. LIV re-encodes visual features along the compositional dimensions <Height, Width, Channel>, which interprets vision through the linguistic expression and makes cross-modality alignment denser. More specifically, we consider the adjacency of visual regions at the channel level to promote channel-wise semantic consistency and to propagate fine-grained semantics throughout the segmentation procedure. In addition, we analyze theoretically that LIV enriches the representation space and makes the comprehensive modality-fused biases more generalizable, which boosts the precision of mask prediction. Extensive experiments on three benchmarks validate that our proposed framework outperforms other methods by a remarkable margin.
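The abstract does not disclose implementation details, but the core idea of re-encoding visual features along <Height, Width, Channel> conditioned on a language embedding can be sketched as follows. This is a minimal, hypothetical sketch assuming a PyTorch setting; the module name AxisReEncoder and every design choice below are assumptions, not the paper's actual LIV architecture.

```python
# Hypothetical sketch of language-conditioned re-encoding along the
# <Height, Width, Channel> axes. Names and design are assumptions;
# the paper's actual LIV architecture may differ.
import torch
import torch.nn as nn


class AxisReEncoder(nn.Module):
    """Gates visual features along H, W, and C using a language embedding."""

    def __init__(self, channels: int, lang_dim: int):
        super().__init__()
        # Language embedding -> per-channel gate logits.
        self.to_channel = nn.Linear(lang_dim, channels)
        # Language embedding -> a vector to correlate with visual features.
        self.lang_proj = nn.Linear(lang_dim, channels)

    def forward(self, vis: torch.Tensor, lang: torch.Tensor) -> torch.Tensor:
        # vis: (B, C, H, W) visual feature map; lang: (B, lang_dim).
        b, c, h, w = vis.shape
        # Channel gate: which channels the expression activates.
        chan_gate = torch.sigmoid(self.to_channel(lang)).view(b, c, 1, 1)
        # Spatial similarity between each position and the projected language.
        lang_vec = self.lang_proj(lang).view(b, c, 1, 1)
        sim = (vis * lang_vec).sum(dim=1, keepdim=True) / c ** 0.5  # (B,1,H,W)
        # Axis-wise gates along height and width.
        h_gate = torch.sigmoid(sim.mean(dim=3, keepdim=True))  # (B,1,H,1)
        w_gate = torch.sigmoid(sim.mean(dim=2, keepdim=True))  # (B,1,1,W)
        # Compose the three axis-wise gates into a densely re-encoded feature.
        return vis * chan_gate * h_gate * w_gate


# Example usage with arbitrary shapes:
# out = AxisReEncoder(256, 768)(torch.randn(2, 256, 32, 32), torch.randn(2, 768))
```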
Funding: Partially supported by a research grant of the NTU Presidential Postdoctoral Fellowship.
Abstract: In referring segmentation, modeling the complicated constraints in multimodal information is one of the most challenging problems. As the information in a given language expression and image becomes increasingly abundant, most current one-stage methods that directly output the segmentation mask encounter difficulties in understanding the complicated relationships between the image and the expression. In this work, we propose PrimitiveNet to decompose the difficult global constraints into a set of simple primitives. Each primitive produces a primitive mask that represents only a simple semantic meaning, e.g., all instances from the same category. The output segmentation mask is then computed by selectively combining these primitives according to the language expression. Furthermore, we propose a cross-primitive attention (CPA) module and a language-primitive attention (LPA) module to exchange information among the primitives and between the primitives and the language expression, respectively. The proposed CPA and LPA help the network find appropriate weights for the primitive masks so as to recover the target object. Extensive experiments have proven the effectiveness of our design and verified that the proposed network outperforms current state-of-the-art referring segmentation methods on the three RefCOCO datasets.
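As a rough illustration only (not the paper's implementation), CPA can be read as self-attention over the set of primitive embeddings and LPA as cross-attention from the primitives to the expression tokens, with the resulting language-aware weights combining the primitive masks. The sketch below assumes PyTorch; PrimitiveCombiner and all tensor shapes are hypothetical.

```python
# Hypothetical sketch of combining primitive masks via cross-primitive
# attention (CPA) and language-primitive attention (LPA). All module and
# tensor names are assumptions based on the abstract's description.
import torch
import torch.nn as nn


class PrimitiveCombiner(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # CPA: self-attention over the set of primitive embeddings.
        self.cpa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # LPA: cross-attention from primitives to the language tokens.
        self.lpa = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Scores the language-aware primitive embeddings.
        self.score = nn.Linear(dim, 1)

    def forward(self, prim_emb, prim_masks, lang_tokens):
        # prim_emb:    (B, P, D) primitive embeddings
        # prim_masks:  (B, P, H, W) per-primitive masks
        # lang_tokens: (B, T, D) encoded language expression
        prim_emb, _ = self.cpa(prim_emb, prim_emb, prim_emb)        # CPA
        prim_emb, _ = self.lpa(prim_emb, lang_tokens, lang_tokens)  # LPA
        # Per-primitive weights derived from the expression.
        weights = torch.softmax(self.score(prim_emb), dim=1)        # (B, P, 1)
        # Selectively combine primitive masks into the final prediction.
        return (weights.unsqueeze(-1) * prim_masks).sum(dim=1)      # (B, H, W)
```

The design choice sketched here, weighting simple masks rather than predicting one monolithic mask, matches the abstract's claim that decomposing global constraints into primitives eases multimodal reasoning.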