Funding: supported by the Zhejiang Provincial Department of Transport Science and Technology Plan Project - Research on Evaluation Technology of Traffic Flow Radar-Vision Fusion Perception System (Grant No. 202209), and the Zhejiang Provincial Department of Science and Technology Public Welfare Project - Research on Vehicle Trajectory Data Quality Evaluation Technology Based on Radar-Vision Integrated Equipment (Grant No. LGC22E080003).
Abstract: Referring image segmentation aims to segment the referent described by a natural-language expression. Due to the distinct modality properties of images and language, it is challenging to effectively align token embeddings with visual regions. Different from existing methods that coordinate linguistic cues with specific visual regions, we propose a novel referring image segmentation paradigm, language interprets vision (LIV), which densely aligns the visual and linguistic modalities at a fine-grained level and fuses the multi-modal biases effectively. LIV re-encodes visual features along the compositional dimensions of <Height, Width, Channel>, which interprets vision through the linguistic expression and makes cross-modality alignment denser. More specifically, we consider the adjacency of visual regions at the channel level to promote channel semantic consistency and to propagate fine-grained semantics throughout the segmentation procedure. In addition, we theoretically analyze that LIV effectively enriches the representation space and makes the comprehensive modality-fused biases more generalizable, which boosts the precision of mask prediction. Extensive experimental results on three benchmarks validate that our proposed framework outperforms other methods by a remarkable margin.
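The abstract describes, but does not implement, the <Height, Width, Channel> re-encoding. As an illustration only, below is a minimal PyTorch-style sketch of one way language-conditioned re-encoding along the three axes could look; every name here (LIVReencoder, vis_dim, lang_dim, the per-axis projections) is a hypothetical assumption, not the paper's actual design.

```python
import torch
import torch.nn as nn

class LIVReencoder(nn.Module):
    """Hypothetical sketch: re-encode visual features along the
    <Height, Width, Channel> axes, each conditioned on language."""

    def __init__(self, vis_dim: int, lang_dim: int):
        super().__init__()
        # Separate language projections per axis (an assumption,
        # not the paper's exact design).
        self.to_h = nn.Linear(lang_dim, vis_dim)
        self.to_w = nn.Linear(lang_dim, vis_dim)
        self.to_c = nn.Linear(lang_dim, vis_dim)

    def forward(self, vis: torch.Tensor, lang: torch.Tensor) -> torch.Tensor:
        # vis: (B, C, H, W) visual features; lang: (B, D) sentence embedding
        B, C, H, W = vis.shape

        # Height axis: score each row of the feature map against language.
        q_h = self.to_h(lang)                                   # (B, C)
        row_feat = vis.mean(dim=3)                              # (B, C, H)
        h_attn = torch.softmax(
            torch.einsum("bc,bch->bh", q_h, row_feat), dim=-1)  # (B, H)

        # Width axis: same scoring over columns.
        q_w = self.to_w(lang)
        col_feat = vis.mean(dim=2)                              # (B, C, W)
        w_attn = torch.softmax(
            torch.einsum("bc,bcw->bw", q_w, col_feat), dim=-1)  # (B, W)

        # Channel axis: language-conditioned channel gate, which is
        # where channel-level semantic consistency would be encouraged.
        q_c = self.to_c(lang)
        chn_feat = vis.mean(dim=(2, 3))                         # (B, C)
        c_gate = torch.sigmoid(q_c * chn_feat)                  # (B, C)

        # Recompose: broadcast the three axis responses back onto vis.
        out = vis * c_gate[:, :, None, None]
        out = out * h_attn[:, None, :, None] * w_attn[:, None, None, :]
        return out

# Usage: 256-channel features on a 32x32 grid with 512-d language vectors.
vis = torch.randn(2, 256, 32, 32)
lang = torch.randn(2, 512)
print(LIVReencoder(256, 512)(vis, lang).shape)  # torch.Size([2, 256, 32, 32])
```

The design choice sketched here, separate language projections per axis plus a sigmoid gate on channels, merely mirrors the abstract's emphasis on channel-level semantic consistency; the paper's actual recomposition may differ substantially.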
Funding: supported by the National Natural Science Foundation of China (No. 61902027).
Abstract: End-to-end text spotting is a vital computer vision task that aims to integrate scene text detection and recognition into a unified framework. Typical methods rely heavily on region-of-interest (RoI) operations to extract local features and on complex post-processing steps to produce final predictions. To address these limitations, we propose TextFormer, a query-based end-to-end text spotter with a transformer architecture. Specifically, using one query embedding per text instance, TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multi-task modeling. It allows for mutual training and optimization of the classification, segmentation, and recognition branches, resulting in deeper feature sharing without sacrificing flexibility or simplicity. Additionally, we design an adaptive global aggregation (AGG) module to transfer global features into sequential features for reading arbitrarily shaped text, which overcomes the sub-optimization problem of RoI operations. Furthermore, potential corpus information is exploited, from weak annotations to full labels, through mixed supervision, further improving text detection and end-to-end text spotting results. Extensive experiments on various bilingual (i.e., English and Chinese) benchmarks demonstrate the superiority of our method. In particular, on the TDA-ReCTS dataset, TextFormer surpasses the state-of-the-art method by 13.2% in terms of 1-NED.
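Again as illustration only: a minimal sketch of what the adaptive global aggregation step might look like, with learnable per-character queries cross-attending to the flattened global feature map instead of cropping RoIs. The names (AdaptiveGlobalAggregation, max_chars, char_queries) are assumptions, not TextFormer's published interface.

```python
import torch
import torch.nn as nn

class AdaptiveGlobalAggregation(nn.Module):
    """Hypothetical sketch: turn a global 2-D feature map into a
    character-ordered sequence via cross-attention, avoiding RoI crops."""

    def __init__(self, dim: int, max_chars: int = 25, num_heads: int = 8):
        super().__init__()
        # One learnable query per character slot (an assumption).
        self.char_queries = nn.Parameter(torch.randn(max_chars, dim))
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) global features from the image encoder
        B, C, H, W = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)           # (B, H*W, C)
        queries = self.char_queries.unsqueeze(0).expand(B, -1, -1)
        # Each character query aggregates evidence from the whole map,
        # so arbitrarily shaped text needs no rectangular RoI.
        seq, _ = self.cross_attn(queries, tokens, tokens)  # (B, max_chars, C)
        return seq

# Usage: 256-d features on a 24x24 grid, decoded into 25 character slots.
feat = torch.randn(2, 256, 24, 24)
print(AdaptiveGlobalAggregation(256)(feat).shape)  # torch.Size([2, 25, 256])
```

Because the queries attend over the entire feature map rather than a cropped rectangle, this kind of aggregation sidesteps the sub-optimization the abstract attributes to RoI operations; how TextFormer actually parameterizes the queries is not specified here.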