Abstract: Fig. 1. The GenomeSyn tool for visualizing genome synteny and characterizing structural variations. A: The first type of synteny visualization map shows detailed information for two or three genomes and can display structural variations and other annotation information. B: The second type of visualization map is simpler, showing only the synteny relationships between the chromosomes of two or three genomes. C: The multiplatform GenomeSyn submission page, applicable to Windows, Mac, and web platforms; other analysis files can be entered under the "other" option. The publisher would like to apologise for any inconvenience caused.
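GenomeSyn's own implementation is not reproduced here; as a minimal sketch of the simple two-genome, chromosome-level view described in panel B, the following hypothetical matplotlib snippet draws two chromosome tracks and connects syntenic blocks with ribbons. All chromosome lengths and block coordinates are made-up example values.

```python
# Minimal two-genome synteny ribbon sketch (illustrative only; not GenomeSyn code).
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

# Example chromosome lengths for two genomes (made-up values).
genome_a = {"chr1": 30_000_000, "chr2": 25_000_000}
genome_b = {"chr1": 28_000_000, "chr2": 26_000_000}
# Syntenic blocks: (chrom A, start A, end A, chrom B, start B, end B).
blocks = [("chr1", 1e6, 8e6, "chr1", 2e6, 9e6),
          ("chr2", 5e6, 12e6, "chr2", 4e6, 11e6)]

def offsets(genome, gap=2e6):
    """Cumulative x-offset of each chromosome along its track."""
    off, x = {}, 0
    for name, length in genome.items():
        off[name] = x
        x += length + gap
    return off

off_a, off_b = offsets(genome_a), offsets(genome_b)
fig, ax = plt.subplots(figsize=(8, 3))
for name, length in genome_a.items():          # top track
    ax.plot([off_a[name], off_a[name] + length], [1, 1], lw=6)
for name, length in genome_b.items():          # bottom track
    ax.plot([off_b[name], off_b[name] + length], [0, 0], lw=6)
for ca, sa, ea, cb, sb, eb in blocks:          # ribbons linking syntenic blocks
    xs = [off_a[ca] + sa, off_a[ca] + ea, off_b[cb] + eb, off_b[cb] + sb]
    ax.add_patch(Polygon(list(zip(xs, [1, 1, 0, 0])), alpha=0.3))
ax.axis("off")
plt.show()
```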
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62033007) and the Major Fundamental Research Program of Shandong Province (Grant No. ZR2023ZD37).
Abstract: Siamese tracking algorithms usually adopt convolutional neural networks (CNNs) as feature extractors owing to their capability of extracting deep discriminative features. However, the convolution kernels in CNNs have limited receptive fields, making it difficult to capture the global feature dependencies that are important for object detection, especially when the target undergoes large-scale variations or movement. In view of this, we develop a novel network called the effective convolution mixed Transformer Siamese network (SiamCMT) for visual tracking, which integrates CNN-based and Transformer-based architectures to capture both local information and long-range dependencies. Specifically, we design a Transformer-based module named lightweight multi-head attention (LWMHA), which can be flexibly embedded into stage-wise CNNs to improve the network's representation ability. Additionally, we introduce a stage-wise feature aggregation mechanism that integrates features learned from multiple stages; by leveraging both location and semantic information, this mechanism helps SiamCMT better locate the target. Moreover, to distinguish the contributions of different channels, a channel-wise attention mechanism is introduced to enhance the important channels and suppress the others. Extensive experiments on seven challenging benchmarks, i.e., OTB2015, UAV123, GOT10K, LaSOT, DTB70, UAVTrack112_L, and VOT2018, demonstrate the effectiveness of the proposed algorithm. In particular, the proposed method outperforms the baseline by 3.5% and 3.1% in precision and success rates, respectively, at a real-time speed of 59.77 FPS on UAV123.
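The abstract does not include an implementation. As a rough sketch of the general pattern it describes, the hypothetical PyTorch snippet below embeds a multi-head self-attention block after a convolutional stage, so the stage's output carries both local and global dependencies. The block name LWMHABlock and all dimensions are illustrative assumptions, not the paper's actual LWMHA design.

```python
# Hedged sketch: a self-attention block appended to a CNN stage.
# Illustrates the CNN + Transformer mixing pattern, not the exact LWMHA module.
import torch
import torch.nn as nn

class LWMHABlock(nn.Module):  # hypothetical name
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)   # (B, H*W, C) token sequence
        y = self.norm(tokens)
        y, _ = self.attn(y, y, y)               # global self-attention over positions
        tokens = tokens + y                     # residual connection
        return tokens.transpose(1, 2).reshape(b, c, h, w)

stage = nn.Sequential(                          # one CNN stage followed by attention
    nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3),
    nn.ReLU(inplace=True),
    LWMHABlock(64),
)
out = stage(torch.randn(1, 3, 128, 128))        # -> torch.Size([1, 64, 64, 64])
```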
Funding: Funded by the Key Research and Development Program of Hubei Province, China (Grant No. 2023BEB024), the Young and Middle-aged Scientific and Technological Innovation Team Plan in Higher Education Institutions in Hubei Province, China (Grant No. T2023007), and the key projects of the Hubei Provincial Department of Education (No. D20161403).
Abstract: With the rapid development of intelligent video surveillance technology, pedestrian re-identification has become increasingly important in multi-camera surveillance systems, where it plays a critical role in enhancing public safety. However, traditional methods typically process images and text separately, applying upstream models directly to downstream tasks; this approach significantly increases the complexity of model training and the computational cost. Furthermore, the class imbalance common in existing training datasets limits improvements in model performance. To address these challenges, we propose an innovative framework named Person Re-ID Network Based on Visual Prompt Technology and Multi-Instance Negative Pooling (VPM-Net). First, we incorporate the Contrastive Language-Image Pre-training (CLIP) pre-trained model to accurately map visual and textual features into a unified embedding space, effectively mitigating inconsistencies in data distribution and the training process. To enhance model adaptability and generalization, we introduce an efficient, task-specific Visual Prompt Tuning (VPT) technique, which improves the model's relevance to specific tasks. Additionally, we design two key modules: the Knowledge-Aware Network (KAN) and the Multi-Instance Negative Pooling (MINP) module. The KAN module significantly enhances the model's understanding of complex scenarios through deep contextual semantic modeling, while the MINP module pools negative samples across multiple instances, effectively improving the model's ability to distinguish fine-grained features. Experimental outcomes across diverse datasets underscore the strong performance of VPM-Net and demonstrate its advantages and reliability in fine-grained retrieval tasks.
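The abstract does not specify how MINP pools negatives. The snippet below is a minimal sketch of one plausible reading: for each anchor, the hardest negative is pooled from a bag of negative instances before a margin loss is applied. The max-pooling rule, the function name minp_loss, and the margin value are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a multi-instance negative pooling step (not the paper's MINP).
import torch
import torch.nn.functional as F

def minp_loss(anchor: torch.Tensor,      # (B, D) anchor embeddings
              positive: torch.Tensor,    # (B, D) matching embeddings
              negatives: torch.Tensor,   # (B, N, D) bag of negatives per anchor
              margin: float = 0.3) -> torch.Tensor:
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * positive).sum(-1)                    # (B,) cosine similarity
    neg_sim = torch.einsum("bd,bnd->bn", anchor, negatives)  # (B, N)
    hard_neg = neg_sim.max(dim=1).values                     # pool hardest negative
    return F.relu(hard_neg - pos_sim + margin).mean()        # hinge on the gap

loss = minp_loss(torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 32, 256))
```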
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region under Grant No. 2022D01B186.
Abstract: Visual Place Recognition (VPR) technology aims to use visual information to determine an agent's location, and it plays an irreplaceable role in tasks such as loop-closure detection and relocalization. Previous VPR algorithms emphasize the extraction and integration of general image features while ignoring the mining of the salient features that are key to discrimination in VPR tasks. To this end, this paper proposes a Domain-invariant Information Extraction and Optimization Network (DIEONet) for VPR. The core of the algorithm is a newly designed Domain-invariant Information Mining Module (DIMM) and a Multi-sample Joint Triplet Loss (MJT Loss). Specifically, DIMM models the interdependence between different spatial regions of the feature map in a cascaded convolutional unit group, which strengthens the model's attention to domain-invariant static object classes. MJT Loss introduces a "joint processing of multiple samples" mechanism into the original triplet loss and adds a new distance constraint term for positive and negative samples, so that the model avoids falling into local optima during training. We demonstrate the effectiveness of our algorithm through extensive experiments on several authoritative benchmarks. In particular, the proposed method achieves the best performance on the TokyoTM dataset, with a Recall@1 of 92.89%.
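The exact MJT Loss formula is not given in the abstract. As a minimal sketch of one plausible reading, the snippet below compares each anchor against several positives and negatives jointly and adds an extra term that keeps the mean positive and negative distances separated; the function name mjt_loss and both margins are assumptions.

```python
# Hedged sketch of a multi-sample joint triplet loss (not the paper's exact MJT Loss).
import torch
import torch.nn.functional as F

def mjt_loss(anchor, positives, negatives, margin=0.5, sep_margin=0.2):
    """anchor: (D,); positives: (P, D); negatives: (N, D)."""
    d_pos = torch.cdist(anchor.unsqueeze(0), positives).squeeze(0)  # (P,) distances
    d_neg = torch.cdist(anchor.unsqueeze(0), negatives).squeeze(0)  # (N,) distances
    # Joint triplet term over every (positive, negative) pair.
    triplet = F.relu(d_pos.unsqueeze(1) - d_neg.unsqueeze(0) + margin).mean()
    # Extra constraint keeping mean positive/negative distances apart.
    sep = F.relu(d_pos.mean() - d_neg.mean() + sep_margin)
    return triplet + sep

loss = mjt_loss(torch.randn(128), torch.randn(4, 128), torch.randn(10, 128))
```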
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61773091 and 62476045), the LiaoNing Revitalization Talents Program (Grant No. XLYC1807106), and the Program for the Outstanding Innovative Teams of Higher Learning Institutions of Liaoning (Grant No. LR2016070).
Abstract: Complex network modeling characterizes the relationships and structures of a system, while network visualization enables intuitive analysis and interpretation of these patterns. However, existing network visualization tools exhibit significant limitations in representing the attributes of complex networks at various scales; in particular, they fail to provide advanced visual representations of specific nodes and edges, community affiliation, and global scalability. These limitations substantially impede intuitive visual analysis of complex network patterns. To address them, we propose SFFSlib, a multi-scale network visualization framework that incorporates novel methods to highlight attribute representation in diverse network scenarios and to optimize the visualization of structural features, enhancing the rendering of pivotal details at different scales. The visualization algorithms in SFFSlib were applied to real-world datasets and benchmarked against conventional layout algorithms. The experimental results show that SFFSlib significantly enhances the clarity of visualizations across scales, offering a practical path toward better network attribute representation and overall visualization quality.
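SFFSlib's code is not shown in the abstract. As a rough illustration of the kind of multi-scale attribute highlighting described, the hypothetical networkx snippet below sizes nodes by degree (a local attribute) and colors them by detected community (a meso-scale attribute) on top of a conventional force-directed layout; the dataset and styling choices are illustrative only.

```python
# Hedged sketch of attribute-aware network drawing (not SFFSlib code).
import networkx as nx
import matplotlib.pyplot as plt
from networkx.algorithms.community import greedy_modularity_communities

G = nx.karate_club_graph()                      # small example network
communities = greedy_modularity_communities(G)  # meso-scale grouping
color = {n: i for i, c in enumerate(communities) for n in c}

pos = nx.spring_layout(G, seed=42)              # conventional force layout
nx.draw_networkx_edges(G, pos, alpha=0.3)
nx.draw_networkx_nodes(
    G, pos,
    node_size=[50 + 30 * G.degree(n) for n in G],  # emphasize hub nodes
    node_color=[color[n] for n in G],              # community affiliation
    cmap=plt.cm.tab10,
)
plt.axis("off")
plt.show()
```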