Funding: Supported in part by the National Natural Science Foundation of China (No. 82272086), the Leading Goose Program of Zhejiang, China (No. 2023C03079), and the Shenzhen Natural Science Fund, China (No. JCYJ20200109140820699).
Abstract: Deformable retinal image registration is crucial in clinical diagnosis and longitudinal studies of retinal diseases. Most existing deep deformable retinal image registration methods focus on fully convolutional network (FCN) architecture design, which fails to model long-range dependencies among pixels, a significant factor in deformable retinal image registration. Transformers, based on the self-attention mechanism, can capture global context dependencies, complementing local convolution. However, multi-scale spatial feature fusion and pixel-wise position selection, which are also crucial for deformable retinal image registration, are often ignored by both FCNs and transformers. To fully leverage the merits of FCNs, multi-scale spatial attention, and transformers, we propose a hierarchical hybrid architecture, the reparameterized multi-scale transformer (RMFormer), for deformable retinal image registration. In RMFormer, we develop a reparameterized multi-scale spatial attention that adaptively fuses multi-scale spatial features with the assistance of the reparameterizing technique, thereby highlighting informative pixel-wise positions in a lightweight manner. Experimental results on two publicly available datasets demonstrate the superiority of RMFormer over state-of-the-art methods and show that it is data-efficient in a limited medical-image regime. Additionally, we are the first to provide a visualization analysis explaining how the proposed method affects the deformable retinal image registration process. The source code of our work is available at https://github.com/Tloops/RMFormer.
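The abstract does not detail the reparameterizing technique; a minimal 1-D sketch of the general idea behind structural reparameterization (an assumption here, in the style popularized by RepVGG) shows how parallel convolution branches used at training time can be folded into a single kernel at inference, which is what makes such attention "lightweight":

```python
import numpy as np

def multi_branch(x, k3, k1):
    # Training-time form: parallel 3-tap and 1-tap convolution branches, summed.
    return np.convolve(x, k3, mode="same") + np.convolve(x, k1, mode="same")

def reparameterize(k3, k1):
    # Inference-time form: convolution is linear in the kernel, so the 1-tap
    # branch can be folded into the 3-tap kernel by centre-padding it.
    return k3 + np.array([0.0, k1[0], 0.0])

x = np.random.default_rng(0).normal(size=64)
k3 = np.array([0.2, 0.5, 0.3])
k1 = np.array([0.9])
merged = reparameterize(k3, k1)
# The single merged kernel reproduces the two-branch output exactly.
assert np.allclose(multi_branch(x, k3, k1), np.convolve(x, merged, mode="same"))
```

The same linearity argument carries over to 2-D convolutions with batch-norm folding, which is how such multi-branch attention blocks collapse to one convolution at test time.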
Funding: Funded by the Deanship of Research and Graduate Studies at King Khalid University through small-group research under grant number RGP1/278/45.
Abstract: This paper introduces a novel method for medical image retrieval and classification that integrates a multi-scale encoding mechanism with Vision Transformer (ViT) architectures and a dynamic multi-loss function. The multi-scale encoding significantly enhances the model's ability to capture both fine-grained and global features, while the dynamic loss function adapts during training to optimize classification accuracy and retrieval performance. Our approach was evaluated on the ISIC-2018 and ChestX-ray14 datasets, yielding notable improvements. Specifically, on the ISIC-2018 dataset, our method achieves an F1-score improvement of +4.84% compared to the standard ViT, with a precision increase of +5.46% for melanoma (MEL). On the ChestX-ray14 dataset, the method delivers an F1-score improvement of +5.3% over the conventional ViT, with precision gains of +5.0% for pneumonia (PNEU) and +5.4% for fibrosis (FIB). Experimental results demonstrate that our approach outperforms traditional CNN-based models and existing ViT variants, particularly in retrieving relevant medical cases and enhancing diagnostic accuracy. These findings highlight the potential of the proposed method for large-scale medical image analysis, offering improved tools for clinical decision-making through superior classification and case comparison.
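The abstract does not specify how the loss weights adapt; one common realization, sketched here purely as an assumption, linearly shifts weight from the classification term to the retrieval term over the course of training:

```python
def dynamic_multi_loss(cls_loss, retr_loss, epoch, total_epochs):
    """Hypothetical dynamic weighting: emphasise the classification loss early
    in training and shift weight toward the retrieval loss later on."""
    alpha = 1.0 - epoch / total_epochs
    return alpha * cls_loss + (1.0 - alpha) * retr_loss

# At the start only the classification term contributes ...
assert dynamic_multi_loss(2.0, 5.0, epoch=0, total_epochs=100) == 2.0
# ... and at the end only the retrieval term.
assert dynamic_multi_loss(2.0, 5.0, epoch=100, total_epochs=100) == 5.0
```

Other schedules (e.g. uncertainty-based weighting) are equally plausible given only the abstract; the key point is that the mixing coefficient is a function of training progress rather than a fixed constant.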
Funding: The National Natural Science Foundation of China (No. 62266025).
Abstract: Segmentation of the retinal vessels in the fundus is crucial for diagnosing ocular diseases. Retinal vessel images often suffer from class imbalance and large scale variations, which result in incomplete vessel segmentation and poor continuity. In this study, we propose CT-MFENet to address these issues. First, a context transformer (CT) integrates contextual feature information, which helps establish connections between pixels and resolves incomplete vessel continuity. Second, multi-scale dense residual networks replace traditional CNNs to address inadequate local feature extraction when the model encounters vessels at multiple scales. In the decoding stage, we introduce a local-global fusion module that enhances the localization of vascular information and reduces the semantic gap between high- and low-level features. To address the class imbalance in retinal images, we propose a hybrid loss function that enhances the model's ability to segment topological structures. We conducted experiments on the publicly available DRIVE, CHASEDB1, STARE, and IOSTAR datasets. The results show that CT-MFENet performs better than most existing methods, including the baseline U-Net.
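The hybrid loss is not spelled out in the abstract; a common construction for class-imbalanced vessel segmentation, sketched here as an assumption, combines a foreground-weighted binary cross-entropy with a Dice term (the Dice term is overlap-based and therefore insensitive to the dominance of background pixels):

```python
import numpy as np

def hybrid_loss(pred, target, pos_weight=5.0, lam=0.5, eps=1e-7):
    """pred: predicted foreground probabilities in (0, 1); target: {0, 1} mask.
    Hypothetical hybrid of weighted BCE and Dice loss."""
    p = np.clip(pred, eps, 1.0 - eps)
    # Weighted BCE: up-weight the rare vessel (foreground) class.
    bce = -(pos_weight * target * np.log(p) + (1 - target) * np.log(1 - p)).mean()
    # Dice term: directly rewards region overlap.
    dice = 1.0 - (2.0 * (p * target).sum() + eps) / (p.sum() + target.sum() + eps)
    return lam * bce + (1.0 - lam) * dice

t = np.array([0.0, 0.0, 1.0, 0.0])
near_perfect = hybrid_loss(np.array([0.01, 0.01, 0.99, 0.01]), t)
poor = hybrid_loss(np.array([0.5, 0.5, 0.5, 0.5]), t)
assert near_perfect < poor  # better predictions yield a lower loss
```

The `pos_weight` and `lam` values here are illustrative; in practice they would be tuned to the foreground/background ratio of the dataset.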
Funding: Supported by the IITP (Institute of Information & Communications Technology Planning & Evaluation)-ICAN (ICT Challenge and Advanced Network of HRD) grant (IITP-2025-RS-2022-00156326,33) funded by the Korea government (Ministry of Science and ICT); the Deanship of Research and Graduate Studies at King Khalid University through the Large Group Project under grant number RGP2/568/45; and the Deanship of Scientific Research at Northern Border University, Arar, Saudi Arabia, through Project Number NBU-FFR-2025-231-03.
Abstract: Remote sensing plays a pivotal role in environmental monitoring, disaster relief, and urban planning, where accurate scene classification of aerial images is essential. However, conventional convolutional neural networks (CNNs) struggle with long-range dependencies and with preserving high-resolution features, limiting their effectiveness in complex aerial image analysis. To address these challenges, we propose a hybrid HRNet-Swin Transformer model that synergizes the strengths of HRNet-W48 for high-resolution segmentation and the Swin Transformer for global feature extraction. This hybrid architecture ensures robust multi-scale feature fusion, capturing fine-grained details and broader contextual relationships in aerial imagery. Our methodology begins with preprocessing steps, including normalization, histogram equalization, and noise reduction, to enhance input data quality. The HRNet-W48 backbone maintains high-resolution feature maps throughout the network, enabling precise segmentation, while the Swin Transformer leverages hierarchical self-attention to model long-range dependencies efficiently. By integrating these components, our model achieves superior performance in segmentation and classification tasks compared to traditional CNNs and standalone transformer models. We evaluate our approach on two benchmark datasets: UC Merced and WHU-RS19. Experimental results demonstrate that the proposed hybrid model outperforms existing methods, achieving state-of-the-art accuracy while maintaining computational efficiency. In particular, it excels in preserving fine spatial details and contextual understanding, which are critical for applications such as land-use classification and disaster assessment.
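HRNet-style fusion keeps several resolutions in parallel and merges them by upsampling every branch to the finest grid before concatenating channels; a minimal numpy sketch of that fusion step (shapes and the nearest-neighbour upsampling are illustrative assumptions, not the authors' exact design):

```python
import numpy as np

def fuse_multiscale(features):
    """features: list of (C, H, W) maps whose resolutions divide the finest one.
    Upsample all maps to the finest resolution and concatenate channels,
    as in HRNet's multi-resolution representation head."""
    H, W = features[0].shape[1:]
    upsampled = []
    for f in features:
        rh, rw = H // f.shape[1], W // f.shape[2]
        # Nearest-neighbour upsampling via integer repetition.
        upsampled.append(np.repeat(np.repeat(f, rh, axis=1), rw, axis=2))
    return np.concatenate(upsampled, axis=0)

rng = np.random.default_rng(0)
fine = rng.normal(size=(8, 32, 32))
mid = rng.normal(size=(16, 16, 16))
coarse = rng.normal(size=(32, 8, 8))
fused = fuse_multiscale([fine, mid, coarse])
assert fused.shape == (56, 32, 32)  # 8 + 16 + 32 channels at the finest grid
```

In the proposed hybrid, the coarse branches would carry the Swin Transformer's global context while the fine branch preserves HRNet's spatial detail.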
Funding: Partly supported by the National Natural Science Foundation of China under Grant 12202476 (author Chunhua Wei), https://www.nsfc.gov.cn/.
Abstract: The pressure-sensitive paint (PSP) technique has gained attention in recent years because of its significant benefits in measuring surface pressure on wind tunnel models. However, during post-processing of PSP images, issues such as pressure taps, paint peeling, and contamination can cause the loss of pressure data in the image, which seriously affects the subsequent calculation and analysis of pressure distribution. Image inpainting is therefore particularly important in PSP post-processing. Deep learning offers new methods for PSP image inpainting, but some basic characteristics of convolutional neural networks (CNNs) may limit their ability to handle restoration tasks. By contrast, the self-attention mechanism in the transformer can efficiently model nonlocal relationships among input features by generating adaptive attention scores. We therefore propose an efficient transformer network for the PSP image inpainting task, named the multi-scale dilated attention transformer (D-former). The model exploits the redundancy of global dependency modeling in Vision Transformers (ViTs) to introduce multi-scale dilated attention (MDA); this mechanism effectively models the interaction between localized and sparse patches within the shifted window, achieving a better balance between computational complexity and receptive field. As a result, D-former models long-range features efficiently while using fewer parameters and lower computational cost. Experiments on two public datasets and the PSP dataset indicate that our method performs better than several advanced methods. Verification on real wind tunnel tests shows that the method can accurately restore the luminescent intensity data of holes in PSP images, thereby improving the accuracy of full-field pressure data, and it has a promising future in practical applications.
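As a simplified 1-D illustration of dilated attention (an assumption for clarity, not the authors' shifted-window implementation), each query attends only to keys sampled at a fixed dilation: the attention matrix shrinks by the dilation factor while the receptive field stays wide, which is the complexity/receptive-field trade-off the abstract describes:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def dilated_attention(q, k, v, dilation):
    """Scaled dot-product attention over every `dilation`-th key/value:
    sparse, but the kept keys still span the whole sequence."""
    ks, vs = k[::dilation], v[::dilation]
    scores = q @ ks.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ vs

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(16, 8)) for _ in range(3))
out = dilated_attention(q, k, v, dilation=4)
# With dilation 4, each of the 16 queries scores only 4 of the 16 keys.
assert out.shape == (16, 8)
```

With `dilation=1` this reduces to ordinary full self-attention, so a multi-scale variant simply runs several dilations in parallel and merges the results.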
Funding: Fully supported by the Frontier Exploration Projects of Longmen Laboratory (No. LMQYTSKT034) and the Key Research and Development and Promotion Special (Science and Technology) Project of Henan Province, China (No. 252102210158).
Abstract: The capacity to diagnose faults in rolling bearings is of significant practical importance for ensuring the normal operation of equipment. Frequency-domain features can effectively enhance the identification of fault modes. However, existing methods often suffer from insufficient frequency-domain representation in practical applications, which greatly affects diagnostic performance. This paper therefore proposes a rolling bearing fault diagnosis method based on a Multi-Scale Fusion Network (MSFN) using the Time-Division Fourier Transform (TDFT). The method constructs multi-scale channels to extract time-domain and frequency-domain features of the signal in parallel. A multi-level, multi-scale filter-based approach is designed to extract frequency-domain features in a segmented manner, and a cross-attention mechanism is introduced to fuse the extracted time- and frequency-domain features. The performance of the proposed method is validated on the CWRU and Ottawa datasets. The results show that the average accuracy of MSFN under complex noisy signals is 97.75% and 94.41% on the two datasets, respectively, and that the average accuracy under variable load conditions is 98.68%, demonstrating significant application potential compared with existing methods.
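The cross-attention fusion step can be sketched as scaled dot-product attention in which time-domain features act as queries against frequency-domain keys and values (a generic sketch; MSFN's actual projections and dimensions are not given in the abstract):

```python
import numpy as np

def cross_attention(time_feat, freq_feat):
    """Queries come from the time branch; keys/values from the frequency branch,
    so each time-domain token is re-expressed as a mixture of frequency features."""
    q, k, v = time_feat, freq_feat, freq_feat
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)  # each row is a distribution over frequency tokens
    return w @ v

rng = np.random.default_rng(0)
t_feat = rng.normal(size=(10, 16))   # 10 time-domain tokens, 16-dim
f_feat = rng.normal(size=(12, 16))   # 12 frequency-domain tokens, 16-dim
fused = cross_attention(t_feat, f_feat)
assert fused.shape == (10, 16)  # one fused vector per time-domain token
```

A symmetric pass (frequency queries over time keys) would give the other fusion direction; learnable query/key/value projections are omitted here for brevity.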
Abstract: Existing deep-learning mural inpainting algorithms suffer from insufficient global semantic consistency constraints and inadequate local feature extraction, so restored murals are prone to boundary artifacts and blurred details. To address this, a mural inpainting method enhanced by a bidirectional autoregressive Transformer and fast Fourier convolution is proposed. First, a global semantic feature restoration module based on the Transformer structure is designed: using a bidirectional autoregressive mechanism and masked language modeling (MLM), an improved multi-head-attention global semantic mural restoration module is proposed to strengthen the restoration of global semantic features. Then, a global semantic enhancement module composed of gated convolutions and residual blocks is constructed to reinforce the global semantic consistency constraint. Finally, a local detail restoration module is designed that uses large kernel attention (LKA) and fast Fourier convolution to improve the capture of detail features while reducing the loss of local detail information, improving the consistency between the local and overall features of the restored mural. Experiments on the digital restoration of Dunhuang murals show that the proposed algorithm achieves better restoration performance, with all objective evaluation metrics surpassing those of the compared algorithms.
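Fast Fourier convolution gives every output pixel an image-wide receptive field by filtering in the frequency domain, which is why it helps enforce global consistency in inpainting; a minimal sketch of that spectral branch (the per-frequency weights standing in for the learnable filter are an assumption here):

```python
import numpy as np

def spectral_branch(x, w):
    """FFC-style global path: real 2-D FFT, pointwise per-frequency weighting,
    inverse FFT. Every output pixel depends on the whole input image."""
    return np.fft.irfft2(np.fft.rfft2(x) * w, s=x.shape)

x = np.random.default_rng(0).normal(size=(16, 16))
w_identity = np.ones((16, 9))  # rfft2 of a (16, 16) map has shape (16, 9)
# All-ones weights act as an identity filter and return the image unchanged.
assert np.allclose(spectral_branch(x, w_identity), x)
```

In a full FFC block this global path runs alongside an ordinary local convolution path, and the two outputs are exchanged and concatenated.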