Convolutional neural networks (CNNs) with an encoder-decoder structure are popular in medical image segmentation due to their excellent local feature extraction ability, but they are limited in capturing global features. The transformer extracts global information well, yet adapting it to small medical datasets is challenging and its computational complexity can be heavy. In this work, a serial and parallel network is proposed for accurate 3D medical image segmentation by combining CNN and transformer and promoting feature interactions across various semantic levels. The core components of the proposed method are the cross window self-attention based transformer (CWST) and multi-scale local enhanced (MLE) modules. The CWST module enhances global context understanding by partitioning 3D images into non-overlapping windows and calculating sparse global attention between windows. The MLE module selectively fuses features by computing voxel attention between different branch features, and uses convolution to strengthen dense local information. Experiments on prostate, atrium, and pancreas MR/CT image datasets consistently demonstrate the advantage of the proposed method over six popular segmentation models in both qualitative evaluation and quantitative indexes such as Dice similarity coefficient, Intersection over Union, 95% Hausdorff distance, and average symmetric surface distance.

Fund: National Key Research and Development Program of China (2018YFE0206900); China Postdoctoral Science Foundation (2023M731204); Open Project of the Key Laboratory for Quality Evaluation of Ultrasound Surgical Equipment of the National Medical Products Administration (SMDTKL-2023-1-01); Hubei Province Key Research and Development Project (2023BCB007); CAAI-Huawei MindSpore Open Fund.
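The abstract only names the CWST idea; a minimal sketch of what window-partitioned sparse attention over a 3D volume could look like is given below, pooling one descriptor token per non-overlapping window and letting windows attend to each other rather than individual voxels. All layer choices, shapes, and hyperparameters here are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CrossWindowAttention(nn.Module):
    """Illustrative sketch: pool each non-overlapping 3D window to one token,
    run multi-head self-attention across the window tokens, and broadcast the
    result back onto the volume. Not the paper's actual CWST module."""
    def __init__(self, dim, window=4, heads=4):
        super().__init__()
        self.window = window
        self.pool = nn.AdaptiveAvgPool3d(1)           # one descriptor per window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                             # x: (B, C, D, H, W)
        B, C, D, H, W = x.shape
        w = self.window
        # split the volume into non-overlapping w x w x w windows
        x_win = x.view(B, C, D // w, w, H // w, w, W // w, w)
        x_win = x_win.permute(0, 2, 4, 6, 1, 3, 5, 7)  # (B, nD, nH, nW, C, w, w, w)
        n = (D // w) * (H // w) * (W // w)
        tokens = self.pool(x_win.reshape(B * n, C, w, w, w)).view(B, n, C)
        # sparse global attention: windows attend to windows, not voxels
        q = self.norm(tokens)
        ctx, _ = self.attn(q, q, q)                    # (B, n_windows, C)
        # broadcast each window's context back over its w x w x w voxels
        ctx = ctx.view(B, n, C, 1, 1, 1).expand(-1, -1, -1, w, w, w)
        ctx = ctx.reshape(B, D // w, H // w, W // w, C, w, w, w)
        ctx = ctx.permute(0, 4, 1, 5, 2, 6, 3, 7).reshape(B, C, D, H, W)
        return x + ctx                                 # residual fusion

x = torch.randn(1, 32, 16, 16, 16)
print(CrossWindowAttention(32)(x).shape)              # torch.Size([1, 32, 16, 16, 16])
```

Because attention runs over n windows rather than D*H*W voxels, the quadratic attention cost shrinks by a factor of roughly w^6, which is the usual motivation for window-based sparsity in 3D.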
In contemporary computer vision, convolutional neural networks (CNNs) and vision transformers (ViTs) represent the two primary architectural paradigms for image recognition. While both approaches have been widely adopted in medical imaging applications, they operate on fundamentally different computational principles. This report provides brief application notes on ViTs and CNNs, focusing on scenarios that guide the selection of one architecture over the other in practical medical implementations. Generally, CNNs rely on convolutional kernels, localized receptive fields, and weight sharing, enabling efficient hierarchical feature extraction. These properties contribute to strong performance in detecting spatially constrained patterns such as textures, edges, and anatomical boundaries, while maintaining relatively low computational requirements. ViTs, on the other hand, decompose images into smaller segments referred to as tokens and employ self-attention mechanisms to model relationships across the entire image. This global modeling capability allows ViTs to capture long-range dependencies that may be difficult for convolution-based architectures to learn. However, ViTs typically achieve optimal performance when trained on extremely large datasets or when supported by extensive pretraining, as their reduced inductive bias requires greater data exposure to learn robust representations. This report briefly examines the architectural structure, underlying mathematical foundations, and relative performance characteristics of CNNs and ViTs, drawing upon recent findings from contemporary research. Emphasis is placed on understanding how differences in data availability, computational resources, and task requirements influence model effectiveness across medical imaging domains. Most importantly, the report serves as a concise application guide for practitioners seeking informed implementation decisions between these two influential deep learning frameworks.
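To make the contrast concrete, a minimal sketch of the two computational patterns follows; the dimensions are arbitrary toy values, not a recommendation.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 3, 224, 224)                       # one RGB image

# CNN paradigm: a 3x3 kernel slides over the image, so each output value
# depends only on a local neighbourhood, and the same weights are reused
# at every spatial position (weight sharing).
conv = nn.Conv2d(3, 64, kernel_size=3, padding=1)
local_feats = conv(x)                                 # (1, 64, 224, 224)

# ViT paradigm: cut the image into 16x16 patches ("tokens"), embed them,
# and let self-attention relate every token to every other token, giving
# a global receptive field from the very first layer.
patch_embed = nn.Conv2d(3, 64, kernel_size=16, stride=16)  # common embedding trick
tokens = patch_embed(x).flatten(2).transpose(1, 2)    # (1, 196, 64): 14x14 tokens
attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
global_feats, weights = attn(tokens, tokens, tokens)

print(local_feats.shape, global_feats.shape, weights.shape)
# torch.Size([1, 64, 224, 224]) torch.Size([1, 196, 64]) torch.Size([1, 196, 196])
```

The 196x196 attention map is what gives the ViT its long-range modeling power, and also what makes it hungrier for data and compute than the weight-shared convolution.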
Accurate identification of breast cancer molecular subtypes is critical for guiding treatment and improving patient outcomes. Traditional molecular subtyping via the immunohistochemistry (IHC) test is invasive, time-consuming, and may not fully represent tumor heterogeneity. This study proposes a non-invasive approach using digital mammography images and deep learning algorithms for classifying breast cancer molecular subtypes. Four pretrained models, two Convolutional Neural Networks (MobileNet_V3_Large and VGG-16) and two Vision Transformers (ViT_B_16 and ViT_Base_Patch16_Clip_224), were fine-tuned to classify images into HER2-enriched, Luminal, Normal-like, and Triple Negative subtypes. Hyperparameter tuning, including learning rate adjustment and layer freezing strategies, was applied to optimize performance. Among the evaluated models, ViT_Base_Patch16_Clip_224 achieved the highest test accuracy (94.44%), with equally high precision, recall, and F1-score of 0.94, demonstrating excellent generalization. MobileNet_V3_Large achieved the same accuracy but showed less training stability. In contrast, VGG-16 recorded the lowest performance, indicating a limitation in its generalizability for this classification task. The study also highlighted the superior performance of the Vision Transformer models over CNNs, particularly due to their ability to capture global contextual features and the benefit of CLIP-based pretraining in ViT_Base_Patch16_Clip_224. To enhance clinical applicability, a graphical user interface (GUI) named "BCMS Dx" was developed for streamlined subtype prediction. Deep learning applied to mammography has proven effective for accurate and non-invasive molecular subtyping. The proposed Vision Transformer-based model and supporting GUI offer a promising direction for augmenting diagnostic workflows, minimizing the need for invasive procedures, and advancing personalized breast cancer management.

Fund: Ministry of Higher Education (MoHE) Malaysia, Fundamental Research Grant Scheme for Early Career Researchers (FRGS-EC), grant number FRGSEC/1/2024/ICT02/UNIMAP/02/8.
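The abstract mentions layer freezing and learning-rate tuning; a minimal transfer-learning sketch along those lines is shown below. The torchvision ViT-B/16 with ImageNet weights stands in for the CLIP-pretrained variant (which is not reproduced here), and the optimizer, learning rate, and freezing scheme are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16, ViT_B_16_Weights

# Stand-in for the paper's setup: an ImageNet-pretrained ViT-B/16
# (the study's best model used CLIP pretraining instead).
model = vit_b_16(weights=ViT_B_16_Weights.IMAGENET1K_V1)

# Layer freezing: lock the backbone so only the new head trains.
for p in model.parameters():
    p.requires_grad = False

# Replace the classification head with a 4-way classifier:
# HER2-enriched, Luminal, Normal-like, Triple Negative.
model.heads.head = nn.Linear(model.heads.head.in_features, 4)

# A small learning rate, as is typical when adapting pretrained weights.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(2, 3, 224, 224)                  # dummy mammogram batch
labels = torch.tensor([0, 3])
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```

Unfreezing deeper blocks with progressively larger learning rates is a common variant of the same strategy when the frozen-backbone baseline underfits.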
In low-light environments, face recognition faces many challenges such as low image quality and blurred features, which make it difficult for existing methods to extract robust, discriminative features and severely degrade recognition performance. To address this problem, a novel unpaired low-light face recognition model, LFSepNet (low-light face separation network), is proposed. Unlike traditional training methods built on convolutional neural network (CNN) architectures, LFSepNet adopts a Transformer architecture, which captures long-range dependencies more effectively and overcomes the local receptive field limitation of CNNs. Since face images captured in low light are mostly dark overall, with only a few regions carrying richer illumination information, traditional CNNs tend to be confined to local regions during feature extraction and struggle to exploit this key information fully. In contrast, the Transformer models global information through self-attention, allowing the network to integrate information from unevenly illuminated face images more comprehensively, thereby improving feature decoupling and low-light recognition accuracy. LFSepNet comprises an adaptive brightness separation module and an adaptive illumination gap loss: the former dynamically separates face features from illumination features to reduce illumination interference, while the latter further refines the separation, enabling the model to extract more precise and robust features. Experimental results show that LFSepNet outperforms existing methods on multiple low-light face datasets, with a marked accuracy gain under extreme low-light conditions. This work provides an effective unpaired solution for low-light face recognition and shows good potential for practical applications.
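LFSepNet's modules are not specified in this abstract; purely to illustrate the feature-separation idea, the toy sketch below splits a backbone feature into an identity branch and an illumination branch and penalizes their overlap. The module names and the orthogonality-style penalty are assumptions, a loose stand-in for the paper's adaptive illumination gap loss, not its actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureSeparator(nn.Module):
    """Toy two-head split: one head keeps identity cues, the other absorbs
    illumination. A cosine-orthogonality penalty discourages the two branches
    from encoding the same information (illustrative only)."""
    def __init__(self, dim=256):
        super().__init__()
        self.identity_head = nn.Linear(dim, dim)
        self.illum_head = nn.Linear(dim, dim)

    def forward(self, feats):                 # feats: (B, dim) backbone output
        f_id = F.normalize(self.identity_head(feats), dim=-1)
        f_il = F.normalize(self.illum_head(feats), dim=-1)
        # push cosine similarity between the branches toward zero
        gap_loss = (f_id * f_il).sum(-1).pow(2).mean()
        return f_id, f_il, gap_loss

sep = FeatureSeparator()
feats = torch.randn(8, 256)                   # e.g. Transformer encoder output
f_id, f_il, gap_loss = sep(feats)
print(f_id.shape, gap_loss.item())            # identity features drive recognition
```

In a full pipeline, the identity branch would feed the recognition loss while the gap-style penalty is added as an auxiliary term, so illumination variation is absorbed by the other branch rather than corrupting the identity embedding.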