Funding: Funded by the Deanship of Research and Graduate Studies at King Khalid University through small group research under grant number RGP1/278/45.
Abstract: This paper introduces a novel method for medical image retrieval and classification by integrating a multi-scale encoding mechanism with Vision Transformer (ViT) architectures and a dynamic multi-loss function. The multi-scale encoding significantly enhances the model's ability to capture both fine-grained and global features, while the dynamic loss function adapts during training to optimize classification accuracy and retrieval performance. Our approach was evaluated on the ISIC-2018 and ChestX-ray14 datasets, yielding notable improvements. Specifically, on the ISIC-2018 dataset, our method achieves an F1-Score improvement of +4.84% compared to the standard ViT, with a precision increase of +5.46% for melanoma (MEL). On the ChestX-ray14 dataset, the method delivers an F1-Score improvement of +5.3% over the conventional ViT, with precision gains of +5.0% for pneumonia (PNEU) and +5.4% for fibrosis (FIB). Experimental results demonstrate that our approach outperforms traditional CNN-based models and existing ViT variants, particularly in retrieving relevant medical cases and enhancing diagnostic accuracy. These findings highlight the potential of the proposed method for large-scale medical image analysis, offering improved tools for clinical decision-making through superior classification and case comparison.
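The abstract names a dynamic multi-loss but does not give its form. Below is a minimal sketch of one plausible realization, assuming uncertainty-based task weighting (learnable log-variances) over a cross-entropy classification term and a triplet retrieval term; the class name, margin, and weighting scheme are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicMultiLoss(nn.Module):
    """Hypothetical dynamic multi-loss: a classification term and a
    retrieval term, re-balanced during training via learnable log-variances."""
    def __init__(self, margin=0.3):
        super().__init__()
        self.log_var_cls = nn.Parameter(torch.zeros(1))  # classification weight
        self.log_var_ret = nn.Parameter(torch.zeros(1))  # retrieval weight
        self.triplet = nn.TripletMarginLoss(margin=margin)

    def forward(self, logits, labels, anchor, positive, negative):
        cls_loss = F.cross_entropy(logits, labels)
        ret_loss = self.triplet(anchor, positive, negative)
        # exp(-log_var) scales each task; the +log_var term keeps the
        # learned weights from collapsing to zero.
        return (torch.exp(-self.log_var_cls) * cls_loss + self.log_var_cls
                + torch.exp(-self.log_var_ret) * ret_loss + self.log_var_ret)
```

Because the log-variances are optimized jointly with the network, the balance between the classification and retrieval objectives shifts as training progresses, which is one standard way to make a multi-loss "dynamic".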
Funding: The National Natural Science Foundation of China (No. 62266025).
Abstract: Segmentation of the retinal vessels in the fundus is crucial for diagnosing ocular diseases. Retinal vessel images often suffer from category imbalance and large-scale variations. This ultimately results in incomplete vessel segmentation and poor continuity. In this study, we propose CT-MFENet to address the aforementioned issues. First, the use of a context transformer (CT) allows for the integration of contextual feature information, which helps establish the connection between pixels and solve the problem of incomplete vessel continuity. Second, multi-scale dense residual networks are used instead of a traditional CNN to address the issue of inadequate local feature extraction when the model encounters vessels at multiple scales. In the decoding stage, we introduce a local-global fusion module. It enhances the localization of vascular information and reduces the semantic gap between high- and low-level features. To address the class imbalance in retinal images, we propose a hybrid loss function that enhances the segmentation ability of the model for topological structures. We conducted experiments on the publicly available DRIVE, CHASEDB1, STARE, and IOSTAR datasets. The experimental results show that our CT-MFENet performs better than most existing methods, including the baseline U-Net.
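The exact hybrid loss of CT-MFENet is not specified in the abstract. A common choice for imbalanced vessel segmentation, sketched below under that assumption, combines pixel-wise binary cross-entropy with a Dice term that is insensitive to the foreground/background ratio; the weighting and smoothing constants are assumptions.

```python
import torch
import torch.nn as nn

class HybridSegLoss(nn.Module):
    """Sketch of a hybrid segmentation loss: BCE fits individual pixels,
    Dice counteracts the vessel/background class imbalance."""
    def __init__(self, bce_weight=0.5, smooth=1.0):
        super().__init__()
        self.bce = nn.BCEWithLogitsLoss()
        self.bce_weight = bce_weight
        self.smooth = smooth

    def forward(self, logits, target):
        probs = torch.sigmoid(logits)
        inter = (probs * target).sum()
        # Dice = 2|A∩B| / (|A|+|B|); loss is its complement.
        dice = 1 - (2 * inter + self.smooth) / (probs.sum() + target.sum() + self.smooth)
        return self.bce_weight * self.bce(logits, target) + (1 - self.bce_weight) * dice
```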
Funding: Fully supported by the Frontier Exploration Projects of Longmen Laboratory (No. LMQYTSKT034) and the Key Research and Development and Promotion of Special (Science and Technology) Project of Henan Province, China (No. 252102210158).
Abstract: The capacity to diagnose faults in rolling bearings is of significant practical importance to ensure the normal operation of equipment. Frequency-domain features can effectively enhance the identification of fault modes. However, existing methods often suffer from insufficient frequency-domain representation in practical applications, which greatly affects diagnostic performance. Therefore, this paper proposes a rolling bearing fault diagnosis method based on a Multi-Scale Fusion Network (MSFN) using the Time-Division Fourier Transform (TDFT). The method constructs multi-scale channels to extract time-domain and frequency-domain features of the signal in parallel. A multi-level, multi-scale filter-based approach is designed to extract frequency-domain features in a segmented manner. A cross-attention mechanism is introduced to facilitate the fusion of the extracted time-frequency domain features. The performance of the proposed method is validated on the CWRU and Ottawa datasets. The results show that the average accuracy of MSFN under complex noisy signals is 97.75% and 94.41% on the two datasets, respectively. The average accuracy under variable load conditions is 98.68%. This demonstrates its significant application potential compared to existing methods.
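As a rough illustration of the cross-attention fusion step, the sketch below lets time-domain tokens attend to frequency-domain tokens and vice versa before concatenation. The dimensions, head count, and bidirectional design are assumptions, not MSFN's actual configuration.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Hypothetical bidirectional cross-attention between time-domain and
    frequency-domain feature sequences, concatenated after attention."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.t2f = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.f2t = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, time_feat, freq_feat):
        # time_feat, freq_feat: (batch, seq_len, dim)
        t, _ = self.t2f(time_feat, freq_feat, freq_feat)  # time queries frequency
        f, _ = self.f2t(freq_feat, time_feat, time_feat)  # frequency queries time
        return torch.cat([t, f], dim=-1)                  # (batch, seq_len, 2*dim)

# Example call with random features standing in for the two branches.
fused = CrossAttentionFusion()(torch.randn(8, 32, 64), torch.randn(8, 32, 64))
```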
Abstract: This paper proposes an image quality assessment method for smartphone-captured photos that combines handcrafted features with a Swin-AK Transformer (Swin Transformer based on alterable kernel convolution) through dual cross-attention fusion. First, handcrafted features that influence image quality are extracted; these features capture subtle visual variations in an image. Second, the Swin-AK Transformer is proposed, strengthening the model's ability to extract and process local information. In addition, a dual cross-attention fusion module is designed that combines spatial and channel attention mechanisms to fuse handcrafted and deep features, enabling more accurate image quality prediction. Experimental results show that on the SPAQ and LIVE-C datasets, the Pearson linear correlation coefficients reach 0.932 and 0.885, and the Spearman rank-order correlation coefficients reach 0.929 and 0.858, respectively. These results demonstrate that the proposed method can effectively predict the quality of smartphone-captured images.
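A hedged sketch of how fusing handcrafted and deep features with paired attention gates might look: the handcrafted vector drives a channel gate over the deep feature map, followed by a spatial gate. The module name, dimensions, and gating order are hypothetical; the paper's dual cross-attention module is likely more elaborate.

```python
import torch
import torch.nn as nn

class DualAttentionFusion(nn.Module):
    """Hypothetical fusion of a deep feature map with a handcrafted feature
    vector: channel attention derived from the handcrafted cues, then a
    spatial attention gate over the re-weighted map."""
    def __init__(self, channels=256, hand_dim=36):
        super().__init__()
        self.channel_gate = nn.Sequential(nn.Linear(hand_dim, channels), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, deep, hand):
        # deep: (B, C, H, W); hand: (B, hand_dim)
        c = self.channel_gate(hand).unsqueeze(-1).unsqueeze(-1)  # (B, C, 1, 1)
        x = deep * c                    # channel-wise re-weighting from handcrafted cues
        s = self.spatial_gate(x)        # (B, 1, H, W)
        return x * s                    # spatially gated fused features
```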
Abstract: [Objective] Semantic segmentation of high-resolution remote sensing imagery extracts land-cover information precisely, providing essential data support for urban planning and land-use analysis. Current segmentation methods typically divide remote sensing images into standard patches for multi-scale local segmentation and hierarchical inference, without fully exploiting contextual prior knowledge in the imagery or the interaction between local features, which degrades inference and segmentation quality. [Methods] To address this problem, this paper proposes a remote sensing image segmentation framework that couples cross-scale attention with a semantic vision Transformer (Cross-scale Attention Transformer, CATrans), fusing a cross-scale attention module and a semantic vision Transformer to extract contextual prior knowledge and enhance local feature representation and segmentation performance. First, the cross-scale attention module processes features in parallel along the spatial and channel dimensions, analyzing dependencies between shallow and deep as well as local and global features to increase attention to objects of different granularity in remote sensing imagery. Second, the semantic vision Transformer captures contextual semantic information through a spatial attention mechanism and models the dependencies among semantic cues. [Results] Comparative experiments on the DeepGlobe, Inria Aerial, and LoveDA datasets show that CATrans outperforms existing segmentation algorithms such as WSDNet (Discrete Wavelet Smooth Network) and ISDNet (Integrating Shallow and Deep Network), achieving mean Intersection over Union (mIoU) of 76.2%, 79.2%, and 54.2% and mean F1 scores (mF1) of 86.5%, 87.8%, and 66.8%, with inference speeds of 38.1 FPS, 13.2 FPS, and 95.22 FPS, respectively. Compared with WSDNet, the strongest baseline in our comparison, mIoU and mF1 improve by 2.1%, 4.0%, and 5.3% and by 1.3%, 1.8%, and 5.6% on the three datasets, with clear advantages in every land-cover class. [Conclusions] The method achieves efficient, high-accuracy semantic segmentation of high-resolution remote sensing imagery.
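The sketch below illustrates the general shape of a cross-scale attention block under stated assumptions: a deep low-resolution feature map is upsampled to the shallow map's resolution, fused, and then re-weighted by parallel channel and spatial attention branches. It is a generic stand-in, not CATrans's actual module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleAttention(nn.Module):
    """Hypothetical cross-scale block: align shallow (high-res) and deep
    (low-res) features, then apply channel and spatial attention in parallel."""
    def __init__(self, channels=128):
        super().__init__()
        self.channel = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                 # global channel statistics
            nn.Conv2d(channels, channels, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, 1, 3, padding=1), nn.Sigmoid())

    def forward(self, shallow, deep):
        deep_up = F.interpolate(deep, size=shallow.shape[2:], mode="bilinear",
                                align_corners=False)
        fused = shallow + deep_up                    # cross-scale fusion
        # Parallel spatial- and channel-dimension re-weighting, then summed.
        return fused * self.channel(fused) + fused * self.spatial(fused)
```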
Abstract: A network model combining a Transformer with a graph network is proposed for 3D human pose estimation from video captured by visual sensors. The Transformer effectively extracts highly correlated spatio-temporal features from 2D keypoints, while the graph network perceives fine-grained correlation features; fusing the two network structures improves the accuracy of 3D pose estimation. Simulation experiments on the public Human3.6M dataset validate the performance of the Transformer and graph convolution fusion algorithm. The results show a Mean Per Joint Position Error (MPJPE) of 38.4 mm for the estimated 3D human joints, an improvement over existing methods, indicating that the approach has strong practical value and can be applied to many downstream tasks.
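A minimal sketch of the two-branch idea, assuming a temporal Transformer over each joint's trajectory and a per-frame graph convolution over the skeleton adjacency, with the branch outputs summed before regressing 3D coordinates. The tensor layout, depth, and identity-adjacency default are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """Minimal graph convolution over a (row-normalized) skeleton adjacency."""
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        self.register_buffer("adj", adj)             # (J, J)
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x):                             # x: (N, J, in_dim)
        return torch.relu(self.proj(self.adj @ x))    # neighbor aggregation + mix

class PoseFusionNet(nn.Module):
    """Sketch: Transformer branch models temporal context per joint; GCN
    branch models joint-to-joint detail per frame; outputs are summed."""
    def __init__(self, joints=17, dim=64, adj=None):
        super().__init__()
        adj = adj if adj is not None else torch.eye(joints)  # placeholder adjacency
        self.embed = nn.Linear(2, dim)                # 2D keypoints -> tokens
        layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, num_layers=2)
        self.gcn = GraphConv(dim, dim, adj)
        self.head = nn.Linear(dim, 3)                 # per-joint 3D coordinates

    def forward(self, kp2d):                          # kp2d: (B, T, J, 2)
        B, T, J, _ = kp2d.shape
        x = self.embed(kp2d)                          # (B, T, J, D)
        t = self.temporal(x.permute(0, 2, 1, 3).reshape(B * J, T, -1))
        t = t.reshape(B, J, T, -1).permute(0, 2, 1, 3)  # back to (B, T, J, D)
        g = self.gcn(x.reshape(B * T, J, -1)).reshape(B, T, J, -1)
        return self.head(t + g)                       # (B, T, J, 3)
```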
Funding: National Natural Science Foundation of China (No. U24A20589); National Key Research and Development Program of China (No. 2023YFB3905504); Innovation Team of the Ministry of Education of China (No. 8091B042227); Innovation Group of Sichuan Natural Science Foundation (No. 2023NSFSC1974).
Abstract: Recently, there has been widespread application of deep learning to object detection with Synthetic Aperture Radar (SAR). Current algorithms based on Convolutional Neural Networks (CNN) often achieve good accuracy at the expense of more complex model structures and huge parameter counts, which poses a great challenge for real-time, accurate detection of multi-scale targets. To address these problems, we propose a lightweight real-time SAR ship object detector based on the detection transformer (LSD-DETR). First, a lightweight backbone network, LCNet, containing a stem module and an inverted residual structure, is constructed to balance the inference speed and detection accuracy of the model. Second, we design a transformer encoder with Cascaded Group Attention (CGA Encoder) to enrich the feature information of small targets in SAR images, making detection of small-sized ships more precise. Third, an efficient cross-scale feature fusion pyramid module (C3Het-FPN) is proposed, built from lightweight units (C3Het) and the weighted bidirectional feature pyramid (BiFPN) structure, which realizes adaptive fusion of multi-scale features with fewer parameters. Ablation and comparative experiments demonstrate the effectiveness of LSD-DETR. The model has 8.8 M parameters (only 20.6% of DETR), reaches 43.1 FPS, and achieves mAP50 of 97.3% and 93.4% on the SSDD and HRSID datasets, respectively. Compared with advanced methods, LSD-DETR attains superior precision with fewer parameters, enabling accurate real-time detection of multi-scale ships in SAR images.
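The weighted BiFPN fusion the abstract refers to is typically implemented as "fast normalized fusion" (from EfficientDet): learnable non-negative scalars normalized to sum to one before mixing same-resolution feature maps. A minimal sketch follows; the C3Het units themselves are not shown here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFusion(nn.Module):
    """BiFPN-style fast normalized fusion: learnable non-negative weights,
    normalized to sum to one, mix same-shape feature maps."""
    def __init__(self, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, feats):                 # feats: list of (B, C, H, W)
        w = F.relu(self.w)                    # keep weights non-negative
        w = w / (w.sum() + self.eps)          # normalize to a convex combination
        return sum(wi * f for wi, f in zip(w, feats))
```

In a full pyramid, one such module sits at every fusion node, with the incoming maps first resized and projected to a common shape.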
Funding: Supported by the Research Center for Chengdu History and Chengdu Literature [CLWX24004], the Centre for Southeast Asia Economic and Culture Studies [DNY2415], and the Sichuan Landscape and Recreation Research Center [JGYQ2025027].
Abstract: Chengdu teahouses, as core public spaces in marketplace society, have undergone transformative reconstruction, from “containers of everyday life” to “containers of commercial traffic and digital flows,” during the process of modernization. Employing spatial archaeology as a methodology, combined with fieldwork and analysis of historical documents, this study systematically examines the diachronic evolution of architectural forms, functional orientations, and social networks within Chengdu teahouses. The study reveals the logic of spatial reconstruction under the interplay of multiple forces, including cultural heritage preservation, capital-driven development, and technological intervention. The findings identify three paradigms of spatial transformation in teahouses. First, heritage specimenization, which reinforces the continuity of collective memory through symbolic extraction but risks diminishing the vitality of everyday social interactions. Second, consumption upgrading, which caters to the demands of emerging groups through iterative business models yet necessitates vigilance against spatial differentiation eroding marketplace inclusivity. Third, digital parasitism, which expands communicative dimensions through technological empowerment but confronts the risk of flattening localized knowledge. These paradigms reflect both the adaptive responses of traditional spaces to contemporary pressure and the tension of reconstruction imposed by instrumental rationality on marketplace networks. The study demonstrates that spatial transformation in Chengdu teahouses is not unidirectional alienation but rather a multifaceted configuration in which the continuity of tradition coexists with innovative practices amid functional diversification. This research advocates striking a balance between the preservation of traditional spaces and modern renewal and explores organic approaches to integrating traditional and modern elements, thereby providing a theoretical framework and practical insights for the transformation of traditional public spaces.
Abstract: Recent years have seen a surge in interest in object detection on remote sensing images for applications such as surveillance and management. However, challenges like small object detection, scale variation, and the presence of closely packed objects in these images hinder accurate detection. Additionally, the motion blur effect further complicates the identification of such objects. To address these issues, we propose an enhanced YOLOv9 with a transformer head (YOLOv9-TH). The model introduces an additional prediction head for detecting objects of varying sizes and swaps the original prediction heads for transformer heads to leverage self-attention mechanisms. We further improve YOLOv9-TH using several strategies, including data augmentation, multi-scale testing, multi-model integration, and the introduction of an additional classifier. The cross-stage partial (CSP) method and the ghost convolution hierarchical graph (GCHG) are combined to improve detection accuracy by better utilizing feature maps, widening the receptive field, and precisely extracting multi-scale objects. Additionally, we incorporate the E-SimAM attention mechanism to address low-resolution feature loss. Extensive experiments on the VisDrone2021 and DIOR datasets demonstrate the effectiveness of YOLOv9-TH, showing clear mAP improvements over the best existing methods. YOLOv9-TH-e achieved 54.2% mAP50 on the VisDrone2021 dataset and 92.3% mAP on the DIOR dataset. The results confirm the model's robustness and suitability for real-world applications, particularly small object detection in remote sensing images.
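E-SimAM is the paper's enhanced variant of SimAM; the standard SimAM it builds on is parameter-free and can be sketched as below, weighting each activation by an energy term that measures its deviation from the channel mean. The lambda value is the commonly used default; the "E-" modifications are not shown.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Standard parameter-free SimAM attention: activations that deviate
    strongly from their channel mean get higher weights via a sigmoid of
    the inverse energy."""
    def __init__(self, lam=1e-4):
        super().__init__()
        self.lam = lam

    def forward(self, x):                              # x: (B, C, H, W)
        n = x.shape[2] * x.shape[3] - 1                # pixels per channel minus one
        d = (x - x.mean(dim=(2, 3), keepdim=True)) ** 2
        v = d.sum(dim=(2, 3), keepdim=True) / n        # channel variance estimate
        e_inv = d / (4 * (v + self.lam)) + 0.5         # inverse energy per activation
        return x * torch.sigmoid(e_inv)
```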
Funding: The Key Research and Development Program of Hainan Province (Grant Nos. ZDYF2023GXJS163, ZDYF2024GXJS014); National Natural Science Foundation of China (NSFC) (Grant Nos. 62162022, 62162024); the Major Science and Technology Project of Hainan Province (Grant No. ZDKJ2020012); Hainan Provincial Natural Science Foundation of China (Grant No. 620MS021); Youth Foundation Project of Hainan Natural Science Foundation (621QN211).
Abstract: Accurately identifying small objects in high-resolution aerial images is a complex and crucial task in the field of small object detection on unmanned aerial vehicles (UAVs). The task is challenging due to variations in UAV flight altitude, differences in object scales, and factors like flight speed and motion blur. To enhance the detection of small targets in drone aerial imagery, we propose an enhanced You Only Look Once version 7 (YOLOv7) algorithm based on multi-scale spatial context. We build the MSC-YOLO model, which incorporates an additional prediction head, denoted P2, to improve adaptability to small objects. We replace conventional downsampling with a Spatial-to-Depth Convolutional Combination (CSPDC) module to mitigate the loss of intricate feature details related to small objects. Furthermore, we propose a Spatial Context Pyramid with Multi-Scale Attention (SCPMA) module, which captures spatial and channel-dependent features of small targets across multiple scales. This module enhances the perception of spatial contextual features and the utilization of multi-scale feature information. On the VisDrone2023 and UAVDT datasets, MSC-YOLO achieves remarkable results, outperforming the baseline YOLOv7 by 3.0% in mean average precision (mAP). The proposed MSC-YOLO algorithm demonstrates satisfactory performance in detecting small targets in UAV aerial photography, providing strong support for practical applications.
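The internals of the CSPDC module are not given in the abstract. The space-to-depth idea it builds on can be sketched as follows: each 2x2 spatial block is rearranged into channels (a lossless downsampling, unlike strided convolution or pooling), then the expanded channels are mixed with a 1x1 convolution. The module name and channel sizes here are assumptions.

```python
import torch
import torch.nn as nn

class SpaceToDepthConv(nn.Module):
    """Sketch of space-to-depth downsampling: rearrange 2x2 spatial blocks
    into channels, then mix channels with a pointwise convolution."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(2)          # (B,C,H,W) -> (B,4C,H/2,W/2)
        self.conv = nn.Conv2d(4 * in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.conv(self.unshuffle(x))

# Example: halve resolution of a 64-channel map without discarding pixels.
y = SpaceToDepthConv(64, 128)(torch.randn(1, 64, 80, 80))  # -> (1, 128, 40, 40)
```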
Abstract: Person re-identification in UAV scenarios exhibits multi-view, multi-scale characteristics, and traditional CNN-based re-identification algorithms, constrained by local receptive fields and downsampling operations, struggle to extract global features from pedestrian images and yield low spatial feature resolution. This paper proposes a lightweight Transformer-based person re-identification algorithm for UAV scenarios (Lightweight Transformer-based Person Re-Identification, LTReID). It uses a multi-head attention mechanism to extract features of different body parts from a global perspective, and adopts Circle loss and a boundary sample mining loss to improve image feature extraction and fine-grained image retrieval. A fast mask-search pruning algorithm compresses the Transformer model after training to improve its deployability on UAV platforms. Furthermore, a learnable spatial information embedding tailored to UAV scenarios is proposed: optimized non-visual information is learned during training to extract view-invariant pedestrian features under multiple UAV viewpoints and improve the robustness of re-identification. Finally, on a real UAV person re-identification database, the re-identification performance of the proposed LTReID is examined under backbones of different sizes and different pruning rates and compared against multiple re-identification algorithms; the results demonstrate the effectiveness and superiority of the proposed algorithm.
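Circle loss, which LTReID adopts, has a standard published form (Sun et al., 2020). The sketch below computes it from precomputed within-class similarities sp and between-class similarities sn, leaving out the pair mining and the boundary sample mining loss, which the abstract does not detail.

```python
import torch
import torch.nn.functional as F

def circle_loss(sp: torch.Tensor, sn: torch.Tensor,
                m: float = 0.25, gamma: float = 64.0) -> torch.Tensor:
    """Standard Circle loss over 1-D tensors of cosine similarities:
    sp = within-class pairs, sn = between-class pairs."""
    ap = torch.clamp_min(1 + m - sp.detach(), 0.0)   # adaptive positive weight
    an = torch.clamp_min(sn.detach() + m, 0.0)       # adaptive negative weight
    logit_p = -gamma * ap * (sp - (1 - m))           # positive margin: 1 - m
    logit_n = gamma * an * (sn - m)                  # negative margin: m
    return F.softplus(torch.logsumexp(logit_n, 0) + torch.logsumexp(logit_p, 0))
```

The detached clamps give each similarity its own adaptive weight, so pairs far from their optimum dominate the gradient, which is the property that makes Circle loss effective for fine-grained retrieval.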