Abstract: The Visual Studio .NET development environment itself offers many language-independent features, and it is precisely these excellent IDE features that have made Visual Studio .NET widely recognized as an outstanding development tool. In this article, the authors present their favorite environment features, the ones they believe every developer should know about and appreciate. These include support for debugging stored procedures, project reference management, metadata displayed in diagram-like views, programmatic customization of the development environment through macros, and more. Readers can download the article's sample code from the Programmer channel of the CSDN website.
Abstract: [Objective] Wind power flanges are finely classified and come in many specifications, with large diameters and large numbers of holes, so the coordinate calculations for multi-hole machining are heavy and data entry is inefficient; machining approaches such as polar coordinates, coordinate rotation, macro programs, and secondary development struggle to meet the actual production needs of flange manufacturers. An efficient solution to this problem is proposed. [Methods] On the Visual Studio 2022 development platform, a dedicated, practical CAM system was developed that can flexibly and quickly generate bolt-hole machining programs. The system adopts a modular design, handling part information, machining parameters, and other data in separate modules, which allows it to be adjusted promptly as flange design standards change and to automatically generate bolt-hole machining programs for wind power flanges of different specifications. [Results] The developed CAM system for wind power flange bolt holes achieves rapid, automatic generation of multi-hole machining programs, significantly reducing the workload of NC programmers and improving the production efficiency of flange hole machining. [Conclusions] In the future, secondary development on the AutoCAD and NX platforms, building on their strong 2D/3D design capabilities, could yield a small-to-medium-scale CAD/CAM system for flange parts that integrates design and manufacturing, meeting enterprises' growing production management needs.
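The coordinate calculation this abstract refers to is essentially placing N holes evenly on a bolt circle. As a minimal sketch (not the paper's actual CAM system; the diameter, hole count, start angle, and the G81 drilling block printed at the end are hypothetical illustrations), the polar-to-Cartesian conversion can be written as:

```python
import math

def bolt_hole_coordinates(bolt_circle_diameter: float, hole_count: int,
                          start_angle_deg: float = 0.0):
    """Return (x, y) centers of evenly spaced holes on a bolt circle."""
    radius = bolt_circle_diameter / 2.0
    coords = []
    for i in range(hole_count):
        angle = math.radians(start_angle_deg + i * 360.0 / hole_count)
        coords.append((round(radius * math.cos(angle), 3),
                       round(radius * math.sin(angle), 3)))
    return coords

# Example: a hypothetical flange with a 2000 mm bolt circle and 48 holes.
for x, y in bolt_hole_coordinates(2000.0, 48):
    print(f"G81 X{x} Y{y} Z-50. R2. F120.")  # illustrative drilling block only
```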
Abstract: Fig. 1. The GenomeSyn tool for visualizing genome synteny and characterizing structural variations. A: The first synteny visualization map shows the detailed information of two or three genomes and can display structural variations and other annotation information. B: The second type of visualization map is simple and only shows the synteny relationship between the chromosomes of two or three genomes. C: The multi-platform GenomeSyn submission page, applicable to Windows, macOS, and web platforms; other analysis files can be entered in the "other" option. The publisher would like to apologise for any inconvenience caused.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62033007) and the Major Fundamental Research Program of Shandong Province (Grant No. ZR2023ZD37).
Abstract: Siamese tracking algorithms usually take convolutional neural networks (CNNs) as feature extractors owing to their capability of extracting deep discriminative features. However, the convolution kernels in CNNs have limited receptive fields, making it difficult to capture global feature dependencies, which are important for object detection, especially when the target undergoes large-scale variations or movement. In view of this, we develop a novel network called the effective convolution mixed Transformer Siamese network (SiamCMT) for visual tracking, which integrates CNN-based and Transformer-based architectures to capture both local information and long-range dependencies. Specifically, we design a Transformer-based module named lightweight multi-head attention (LWMHA), which can be flexibly embedded into stage-wise CNNs and improves the network's representation ability. Additionally, we introduce a stage-wise feature aggregation mechanism that integrates features learned from multiple stages. By leveraging both location and semantic information, this mechanism helps SiamCMT better locate and find the target. Moreover, to distinguish the contributions of different channels, a channel-wise attention mechanism is introduced to enhance the important channels and suppress the others. Extensive experiments on seven challenging benchmarks, i.e., OTB2015, UAV123, GOT10K, LaSOT, DTB70, UAVTrack112_L, and VOT2018, demonstrate the effectiveness of the proposed algorithm. In particular, the proposed method outperforms the baseline by 3.5% and 3.1% in precision and success rates, respectively, at a real-time speed of 59.77 FPS on UAV123.
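The channel-wise attention mentioned in this abstract can be illustrated with a generic squeeze-and-excitation style block; this is a sketch under that assumption, not SiamCMT's actual LWMHA or aggregation design, and the channel count and reduction ratio below are placeholders:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Generic channel-wise attention (squeeze-and-excitation style):
    important channels are re-weighted up, the others suppressed."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global spatial context
        self.fc = nn.Sequential(                     # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # rescale each channel

feat = torch.randn(1, 256, 31, 31)                    # e.g. a search-region feature map
print(ChannelAttention(256)(feat).shape)              # torch.Size([1, 256, 31, 31])
```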
Funding: Funded by the Key Research and Development Program of Hubei Province, China (Grant No. 2023BEB024), the Young and Middle-aged Scientific and Technological Innovation Team Plan in Higher Education Institutions in Hubei Province, China (Grant No. T2023007), and the key projects of the Hubei Provincial Department of Education (No. D20161403).
Abstract: With the rapid development of intelligent video surveillance technology, pedestrian re-identification has become increasingly important in multi-camera surveillance systems. This technology plays a critical role in enhancing public safety. However, traditional methods typically process images and text separately, applying upstream models directly to downstream tasks. This approach significantly increases the complexity of model training and the computational cost. Furthermore, the class imbalance common in existing training datasets limits model performance. To address these challenges, we propose an innovative framework named Person Re-ID Network Based on Visual Prompt Technology and Multi-Instance Negative Pooling (VPM-Net). First, we incorporate the Contrastive Language-Image Pre-training (CLIP) pre-trained model to accurately map visual and textual features into a unified embedding space, effectively mitigating inconsistencies in data distribution and the training process. To enhance model adaptability and generalization, we introduce an efficient, task-specific Visual Prompt Tuning (VPT) technique, which improves the model's relevance to specific tasks. Additionally, we design two key modules: the Knowledge-Aware Network (KAN) and the Multi-Instance Negative Pooling (MINP) module. The KAN module significantly enhances the model's understanding of complex scenarios through deep contextual semantic modeling, while the MINP module handles negative samples, effectively improving the model's ability to distinguish fine-grained features. Experimental results across diverse datasets underscore the strong performance of VPM-Net and demonstrate its advantages and reliability in fine-grained retrieval tasks.
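As a minimal sketch of the CLIP step described above (mapping image and text into one embedding space), assuming the Hugging Face transformers checkpoint openai/clip-vit-base-patch32 and a blank placeholder image; the VPT, KAN, and MINP components are not reproduced here:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.new("RGB", (224, 224))                       # placeholder pedestrian crop
texts = ["a person wearing a red jacket", "a person carrying a backpack"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    img_emb = model.get_image_features(pixel_values=inputs["pixel_values"])
    txt_emb = model.get_text_features(input_ids=inputs["input_ids"],
                                      attention_mask=inputs["attention_mask"])

# Cosine similarity in the shared embedding space.
img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
print(img_emb @ txt_emb.T)                                 # 1 x 2 similarity scores
```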
Funding: Supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region under Grant No. 2022D01B186.
Abstract: Visual Place Recognition (VPR) technology aims to use visual information to judge the location of agents, and it plays an irreplaceable role in tasks such as loop closure detection and relocalization. Previous VPR algorithms emphasize the extraction and integration of general image features while ignoring the mining of salient features that play a key role in discrimination for VPR tasks. To this end, this paper proposes a Domain-invariant Information Extraction and Optimization Network (DIEONet) for VPR. The core of the algorithm is a newly designed Domain-invariant Information Mining Module (DIMM) and a Multi-sample Joint Triplet Loss (MJT Loss). Specifically, DIMM incorporates the interdependence between different spatial regions of the feature map in a cascaded convolutional unit group, which enhances the model's attention to domain-invariant static object classes. MJT Loss introduces a "joint processing of multiple samples" mechanism into the original triplet loss and adds a new distance constraint term for positive and negative samples, so that the model can avoid falling into a local optimum during training. We demonstrate the effectiveness of our algorithm through extensive experiments on several authoritative benchmarks. In particular, the proposed method achieves the best performance on the TokyoTM dataset with a Recall@1 of 92.89%.
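The exact MJT Loss formulation is not given in the abstract; as a rough, standard reference point only, a triplet loss evaluated jointly over several negatives per anchor can be sketched as follows (the margin value and mean reduction are assumptions, not the paper's definition):

```python
import torch
import torch.nn.functional as F

def multi_negative_triplet_loss(anchor, positive, negatives, margin: float = 0.1):
    """Triplet loss averaged over several negatives for one anchor/positive pair.
    anchor, positive: (D,) embeddings; negatives: (K, D) embeddings."""
    d_pos = F.pairwise_distance(anchor.unsqueeze(0), positive.unsqueeze(0))            # (1,)
    d_neg = F.pairwise_distance(anchor.unsqueeze(0).expand_as(negatives), negatives)   # (K,)
    return torch.clamp(d_pos - d_neg + margin, min=0.0).mean()

a, p = torch.randn(256), torch.randn(256)
negs = torch.randn(5, 256)                       # five negatives processed jointly
print(multi_negative_triplet_loss(a, p, negs))
```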
Funding: Supported by the National Natural Science Foundation of China (Grant Nos. 61773091 and 62476045), the LiaoNing Revitalization Talents Program (Grant No. XLYC1807106), and the Program for the Outstanding Innovative Teams of Higher Learning Institutions of Liaoning (Grant No. LR2016070).
Abstract: Complex network modeling characterizes system relationships and structures, while network visualization enables intuitive analysis and interpretation of these patterns. However, existing network visualization tools have significant limitations in representing the attributes of complex networks at various scales, particularly in providing advanced visual representations of specific nodes and edges, community affiliation, and global scalability. These limitations substantially impede the intuitive analysis and interpretation of complex network patterns through visual representation. To address them, we propose SFFSlib, a multi-scale network visualization framework incorporating novel methods to highlight attribute representation in diverse network scenarios and optimize structural feature visualization. Notably, we have enhanced the visualization of pivotal details at different scales across diverse network scenarios. The visualization algorithms in SFFSlib were applied to real-world datasets and benchmarked against conventional layout algorithms. The experimental results show that SFFSlib significantly enhances the clarity of visualizations across different scales, offering a practical solution for improving network attribute representation and overall visualization quality.
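SFFSlib itself is not publicly sketched here; as a hedged point of comparison only, the kind of attribute-aware rendering it targets (node size by degree, color by detected community, a conventional force-directed layout as the baseline) looks roughly like this with the generic networkx/matplotlib stack:

```python
import networkx as nx
import matplotlib.pyplot as plt
from networkx.algorithms import community

G = nx.karate_club_graph()                                    # small example network
communities = community.greedy_modularity_communities(G)      # community affiliation
node_color = [next(i for i, c in enumerate(communities) if n in c) for n in G]
node_size = [50 + 30 * G.degree(n) for n in G]                # emphasize hub nodes

pos = nx.spring_layout(G, seed=42)                            # conventional force layout
nx.draw_networkx(G, pos, node_color=node_color, node_size=node_size,
                 cmap=plt.cm.tab10, with_labels=False, edge_color="lightgray")
plt.axis("off")
plt.savefig("network_attributes.png", dpi=150)
```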
Abstract: For reactive power compensation in rural distribution networks, three conventional approaches are first analyzed: fixed capacitor compensation, the Static Var Compensator (SVC), and the Static Var Generator (SVG). Once a suitable compensation approach has been selected, a large amount of calculation remains to be done. To improve calculation efficiency and data accuracy, simulation software built with Visual Studio is integrated with the compensation design: the calculation formulas are implemented in the software, and the power factor and other related data are obtained through simulation.
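The kind of formula such software compiles can be illustrated with the textbook relation between power factor and required compensation capacity, Qc = P(tan φ1 − tan φ2); the load values below are hypothetical and not taken from the paper:

```python
import math

def compensation_capacity(p_kw: float, pf_before: float, pf_after: float) -> float:
    """Reactive power (kvar) needed to raise the power factor of an active load.
    Qc = P * (tan(phi1) - tan(phi2)), with phi = arccos(power factor)."""
    tan1 = math.tan(math.acos(pf_before))
    tan2 = math.tan(math.acos(pf_after))
    return p_kw * (tan1 - tan2)

# Hypothetical rural feeder: 400 kW load, raising power factor from 0.75 to 0.95.
print(f"{compensation_capacity(400.0, 0.75, 0.95):.1f} kvar")   # about 221 kvar
```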