Journal articles
2 articles found
1. Language interprets vision: Adaptive encoding and decoding for referring image segmentation
Authors: Qi A, Sanyuan Zhao, Xingping Dong, Jianbing Shen. Computational Visual Media, 2026, No. 1, pp. 189-202 (14 pages)
Referring image segmentation aims to segment the referent described by a natural-language expression. Due to the distinct modality properties of image and language, it is challenging to effectively align token embeddings with visual regions. Different from existing methods that coordinate linguistics with a specific visual region, we propose a novel referring image segmentation paradigm, language interprets vision (LIV), which performs dense, fine-grained alignment of the visual and linguistic modalities and fuses the multi-modal biases effectively. LIV re-encodes visual features along the compositional dimensions of <Height, Width, Channel>, which interprets vision through the linguistic expression and makes cross-modality alignment denser. More specifically, we innovatively consider the adjacency of visual regions at the channel level to promote channel semantic consistency and propagate fine-grained semantics throughout the segmentation procedure. In addition, we theoretically analyze that LIV effectively enriches the representation space and makes the comprehensive modality-fused biases more generalized, which boosts the precision of mask prediction. Extensive experimental results on three benchmarks validate that our proposed framework significantly outperforms other methods by a remarkable margin.
Keywords: referring image segmentation (RIS), cross-modal, Transformer, attention, segmentation
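The abstract above centers on aligning language token embeddings with visual regions. As an illustrative sketch only (not the paper's LIV code; all names, shapes, and the plain scaled dot-product formulation are assumptions), cross-modal alignment of this kind can be written as attention from language tokens over flattened visual features:

```python
# Minimal cross-modal attention sketch: language tokens attend over
# flattened visual features to produce per-token alignment weights.
# This is an illustration of the general idea, not the LIV method.
import numpy as np

def cross_modal_attention(tokens, visual):
    """tokens: (T, d) language token embeddings;
    visual: (H*W, d) flattened visual feature map.
    Returns (T, H*W) alignment weights and (T, d) attended features."""
    d = tokens.shape[-1]
    scores = tokens @ visual.T / np.sqrt(d)        # (T, HW) similarities
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # softmax over regions
    attended = weights @ visual                    # (T, d) region-weighted
    return weights, attended

rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 16))      # 5 language tokens
visual = rng.standard_normal((8 * 8, 16))  # an 8x8 feature map, flattened
w, a = cross_modal_attention(tokens, visual)
```

Each row of `w` is a distribution over the 64 spatial positions, i.e., a soft assignment of one token to visual regions; denser variants of such alignment are what the abstract's <Height, Width, Channel> re-encoding targets.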
2. TextFormer: A Query-based End-to-end Text Spotter with Mixed Supervision
Authors: Yukun Zhai, Xiaoqiang Zhang, Xiameng Qin, Sanyuan Zhao, Xingping Dong, Jianbing Shen. Machine Intelligence Research (EI, CSCD), 2024, No. 4, pp. 704-717 (14 pages)
End-to-end text spotting is a vital computer vision task that aims to integrate scene text detection and recognition into a unified framework. Typical methods rely heavily on region-of-interest (RoI) operations to extract local features and on complex post-processing steps to produce final predictions. To address these limitations, we propose TextFormer, a query-based end-to-end text spotter with a transformer architecture. Specifically, using one query embedding per text instance, TextFormer builds upon an image encoder and a text decoder to learn a joint semantic understanding for multitask modeling. It allows mutual training and optimization of the classification, segmentation, and recognition branches, resulting in deeper feature sharing without sacrificing flexibility or simplicity. Additionally, we design an adaptive global aggregation (AGG) module to transfer global features into sequential features for reading arbitrarily-shaped texts, which overcomes the sub-optimization problem of RoI operations. Furthermore, potential corpus information is exploited, from weak annotations to full labels, through mixed supervision, further improving text detection and end-to-end text spotting results. Extensive experiments on various bilingual (i.e., English and Chinese) benchmarks demonstrate the superiority of our method. In particular, on the TDA-ReCTS dataset, TextFormer surpasses the state-of-the-art method by 13.2% in terms of 1-NED.
Keywords: end-to-end text spotting, arbitrarily-shaped texts, transformer, mixed supervision, multitask modeling
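The abstract describes a query-based design in which each text instance is represented by one query embedding that is decoded against image features, avoiding RoI cropping. As a hedged sketch of that general pattern (not the TextFormer implementation; the function, shapes, and the dot-product mask head are illustrative assumptions), one decoding step might look like:

```python
# Illustrative query-based decoding sketch: one learned query per text
# instance attends over flattened encoder features, then a dot-product
# head produces per-instance mask logits. Not the TextFormer code.
import numpy as np

def decode_text_queries(queries, memory):
    """queries: (Q, d) one embedding per text instance;
    memory: (H*W, d) flattened image-encoder features.
    Returns instance features (Q, d) and mask logits (Q, H*W)."""
    d = queries.shape[-1]
    scores = queries @ memory.T / np.sqrt(d)     # (Q, HW) attention scores
    scores -= scores.max(axis=-1, keepdims=True) # stable softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    inst = attn @ memory                         # (Q, d) instance features
    mask_logits = inst @ memory.T                # (Q, HW) per-pixel logits
    return inst, mask_logits

rng = np.random.default_rng(1)
queries = rng.standard_normal((3, 32))       # 3 text-instance queries
memory = rng.standard_normal((16 * 16, 32))  # a 16x16 feature map, flattened
inst, masks = decode_text_queries(queries, memory)
```

Because every instance is handled by its own query over the full feature map, global context reaches each prediction directly, which is the motivation the abstract gives for replacing local RoI feature extraction.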