Funding: Supported by the Natural Science Foundation Programme of Gansu Province (No. 24JRRA231), the National Natural Science Foundation of China (No. 62061023), and the Gansu Provincial Science and Technology Plan Key Research and Development Program Project (No. 24YFFA024).
Abstract: Despite its remarkable performance on natural images, the segment anything model (SAM) lacks domain-specific information in medical imaging and loses local multi-scale information in the encoding phase. This paper presents a medical image segmentation model based on SAM with a local multi-scale feature encoder (LMSFE-SAM) to address these issues. Firstly, building on SAM, a local multi-scale feature encoder is introduced to improve feature representation within the local receptive field, thereby supplying the Vision Transformer (ViT) branch of SAM with enriched local multi-scale contextual information. At the same time, a multiaxial Hadamard product module (MHPM) is incorporated into the local multi-scale feature encoder in a lightweight manner to reduce quadratic complexity and noise interference. Subsequently, a cross-branch balancing adapter is designed to balance local and global information between the local multi-scale feature encoder and the ViT encoder in SAM. Finally, to accommodate a smaller input and mitigate overlap in patch embeddings, the input image size is reduced from 1024×1024 pixels to 256×256 pixels, and a multidimensional information adaptation component is developed, comprising feature adapters, position adapters, and channel-spatial adapters. This component effectively integrates information from small-sized medical images into SAM, enhancing its suitability for clinical deployment. The proposed model achieves an average improvement ranging from 0.0387 to 0.3191 across six objective evaluation metrics on the BUSI, DDTI, and TN3K datasets compared with eight other representative image segmentation models. This significantly enhances the performance of SAM on medical images, providing clinicians with a powerful tool for clinical diagnosis.
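The abstract does not give the exact formulation of the MHPM, but the core idea it names, replacing quadratic token-to-token attention with elementwise (Hadamard) products applied per spatial axis, can be sketched in a few lines of NumPy. The function name and weight shapes below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def axis_hadamard_mix(x, w_h, w_w):
    """Mix an (H, W, C) feature map along its two spatial axes using
    Hadamard (elementwise) products with per-axis weights.
    Cost is O(H*W*C), versus O((H*W)^2 * C) for full self-attention.
    (Illustrative sketch; not the paper's exact MHPM.)"""
    h_mixed = x * w_h[:, None, :]   # gate along the height axis, (H, 1, C) broadcast
    w_mixed = x * w_w[None, :, :]   # gate along the width axis, (1, W, C) broadcast
    return h_mixed * w_mixed        # fuse the two axial views elementwise

H, W, C = 8, 8, 4
rng = np.random.default_rng(0)
x = rng.standard_normal((H, W, C))
w_h = rng.standard_normal((H, C))
w_w = rng.standard_normal((W, C))
y = axis_hadamard_mix(x, w_h, w_w)
print(y.shape)  # (8, 8, 4)
```

Because every operation is a broadcasted elementwise product, the cost grows linearly with the number of pixels, which is what makes a module of this kind lightweight enough to bolt onto an encoder branch.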
Abstract: The use of AI technologies in remote sensing (RS) tasks has drawn the attention of many practitioners and researchers. One potential of this integration is more accessible interfaces and tools that allow people with little or no experience to interact intuitively with RS data in multiple formats. However, the use of AI and AI agents to automate RS-related tasks is still in its infancy, with some frameworks and interfaces built on top of well-known vision language models (VLMs) such as GPT-4, the segment anything model (SAM), and Grounding DINO. These tools show promise and help delineate the potential and limitations of existing solutions built on such models. In this work, state-of-the-art AI foundation models (FMs) are reviewed and used in a multi-modal manner to ingest RS imagery and perform zero-shot object detection from natural language. The natural-language input defines the classes or labels the model should look for; both inputs are then fed to the pipeline. The pipeline presented in this work compensates for the shortcomings of general-knowledge FMs by stacking pre-processing and post-processing applications on top of them; these include tiling, to produce uniform patches of the original image for faster detection, and outlier rejection of redundant bounding boxes using statistical and machine learning methods. The pipeline was tested with UAV, aerial, and satellite images taken over multiple areas. Semantic segmentation accuracy improved from the original 64% to approximately 80%-99% using the pipeline and techniques proposed in this work. GitHub Repository: MohanadDiab/LangRS.
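The two named pipeline stages, tiling and statistical outlier rejection, can be sketched roughly as follows. This assumes a simple non-overlapping tiling policy and a Tukey IQR fence on box areas; the actual LangRS implementation may differ:

```python
import numpy as np

def tile_image(img, tile):
    """Split an (H, W, C) array into uniform tile x tile patches,
    discarding any ragged border (one simple tiling policy)."""
    H, W = img.shape[:2]
    return [img[r:r + tile, c:c + tile]
            for r in range(0, H - tile + 1, tile)
            for c in range(0, W - tile + 1, tile)]

def reject_outlier_boxes(boxes, k=1.5):
    """Drop (x0, y0, x1, y1) boxes whose area falls outside the Tukey
    IQR fence -- one statistical filter such a pipeline could use."""
    boxes = np.asarray(boxes, dtype=float)
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    q1, q3 = np.percentile(areas, [25, 75])
    iqr = q3 - q1
    keep = (areas >= q1 - k * iqr) & (areas <= q3 + k * iqr)
    return boxes[keep]

img = np.zeros((512, 640, 3))
patches = tile_image(img, 256)
print(len(patches))  # 4 full 256x256 tiles fit in a 512x640 image
```

Uniform tiles keep per-inference cost bounded, and the IQR fence removes implausibly large or small detections without any learned component.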
Funding: Natural Science Foundation of Zhejiang Province, Grant/Award Number: LY23F020025; Science and Technology Commissioner Program of Huzhou, Grant/Award Number: 2023GZ42; Sichuan Provincial Science and Technology Support Program, Grant/Award Numbers: 2023ZHCG0005, 2023ZHCG0008.
Abstract: Data augmentation plays an important role in training deep neural models by expanding the size and diversity of the dataset. Initially, data augmentation mainly involved simple transformations of images. Later, to increase the diversity and complexity of data, more advanced methods appeared and evolved into sophisticated generative models. However, these methods required massive computation for training or searching. In this paper, a novel training-free method that utilises the pre-trained Segment Anything Model (SAM) as a data augmentation tool (PTSAM-DA) is proposed to generate augmented annotations for images. Without any training, it obtains prompt boxes from the original annotations and then feeds the boxes to the pre-trained SAM to generate diverse and improved annotations. In this way, annotations are augmented more ingeniously than by simple manipulations, without incurring the huge computation needed to train a data augmentation model. Comparative experiments are conducted on three datasets: an in-house dataset, ADE20K, and COCO2017. On the in-house dataset, namely the Agricultural Plot Segmentation Dataset, maximum improvements of 3.77% and 8.92% are gained in two mainstream metrics, mIoU and mAcc, respectively. Consequently, large vision models like SAM prove promising not only for image segmentation but also for data augmentation.
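The box-prompting step can be made concrete with a minimal sketch: derive a bounding box from an existing binary mask annotation, the kind of prompt that would then be handed to SAM. The padding parameter is an illustrative assumption, since the abstract does not state how PTSAM-DA derives its boxes:

```python
import numpy as np

def mask_to_prompt_box(mask, pad=0):
    """Derive an (x_min, y_min, x_max, y_max) prompt box from a binary
    mask annotation, optionally padded and clipped to the image bounds.
    (Sketch only; the exact box-derivation policy is an assumption.)"""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # empty annotation: nothing to prompt with
    x0 = max(int(xs.min()) - pad, 0)
    y0 = max(int(ys.min()) - pad, 0)
    x1 = min(int(xs.max()) + pad, mask.shape[1] - 1)
    y1 = min(int(ys.max()) + pad, mask.shape[0] - 1)
    return (x0, y0, x1, y1)

m = np.zeros((32, 32), dtype=np.uint8)
m[10:20, 5:15] = 1
print(mask_to_prompt_box(m, pad=2))  # (3, 8, 16, 21)
```

A small padding keeps the full object inside the prompt even when the original annotation is slightly tight, which is one way SAM can return an improved mask from an imperfect one.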
Funding: Supported by the National Key Research and Development Program of China (2024YFB3311703), the National Natural Science Foundation of China (61932003), and the Beijing Science and Technology Plan Project (Z221100006322003).
Abstract: In this paper, we introduce an innovative method for computer-aided design (CAD) segmentation that concatenates meshes and CAD models. Many previous CAD segmentation methods have achieved impressive performance using single representations, such as meshes, CAD models, and point clouds. However, existing methods cannot effectively combine different three-dimensional model types for direct conversion, alignment, and preservation of geometric and topological information. Hence, we propose an integration approach that combines the geometric accuracy of CAD data with the flexibility of mesh representations, and introduce a unique hybrid representation that combines CAD and mesh models to enhance segmentation accuracy. To combine the two model types, our hybrid system uses advanced neural-network techniques to convert CAD models into mesh models. For complex CAD models, segmentation is crucial for model retrieval and reuse; in partial retrieval, it aims to split a complex CAD model into several simple components. The first component of our hybrid system comprises advanced mesh-labelling algorithms that transfer digitized CAD properties to mesh models. The second integrates labelled face features for CAD segmentation by leveraging the abundant multi-semantic information embedded in CAD models. This combination of mesh and CAD not only refines the accuracy of boundary delineation but also provides a comprehensive understanding of the underlying object semantics. This study uses the Fusion 360 Gallery dataset. Experimental results indicate that our hybrid method segments these models with higher accuracy than other methods that use single representations.
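The abstract does not spell out how CAD face labels reach the mesh; a naive nearest-centroid transfer, shown here purely to make the data flow concrete, is one plausible baseline (all names and shapes are illustrative, not the paper's algorithm):

```python
import numpy as np

def transfer_labels(mesh_centroids, face_centroids, face_labels):
    """Assign each mesh triangle the label of the nearest labelled CAD
    face by centroid distance -- a naive stand-in for a mesh-labelling
    step, used only to illustrate CAD-to-mesh label transfer."""
    # (M, 1, 3) - (1, F, 3) -> (M, F) pairwise Euclidean distances
    d = np.linalg.norm(
        mesh_centroids[:, None, :] - face_centroids[None, :, :], axis=-1)
    return face_labels[np.argmin(d, axis=1)]

faces = np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]])  # two CAD face centroids
labels = np.array([7, 3])                              # their semantic labels
tris = np.array([[0.2, 0.1, 0.0], [9.8, 0.0, 0.1], [1.0, 0.0, 0.0]])
print(transfer_labels(tris, faces, labels))  # [7 3 7]
```

A production system would respect topology rather than raw distance, but even this baseline shows why a hybrid representation helps: the mesh inherits the CAD model's face semantics instead of being labelled from scratch.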
Funding: Supported by the National Natural Science Foundation of China (Nos. 81974355 and 82172524), the Key Research and Development Program of Hubei Province (No. 2021BEA161), the National Innovation Platform Development Program (No. 2020021105012440), the Open Project Funding of the Hubei Key Laboratory of Big Data Intelligent Analysis and Application, Hubei University (No. 2024BDIAA03), and the Free Innovation Preliminary Research Fund of Wuhan Union Hospital (No. 2024XHYN047).
Abstract: Objective: This study aimed to explore a novel method that integrates segmentation-guided classification and diffusion-model augmentation to realize automatic classification of tibial plateau fractures (TPFs). Methods: YOLOv8n-cls was used to construct a baseline model on data from 3781 patients from the Orthopedic Trauma Center of Wuhan Union Hospital. Additionally, a segmentation-guided classification approach was proposed. To enhance the dataset, a diffusion model was further employed for data augmentation. Results: The novel method integrating segmentation-guided classification and diffusion-model augmentation significantly improved the accuracy and robustness of fracture classification. The average classification accuracy for TPFs rose from 0.844 to 0.896. The comprehensive performance of the dual-stream model was also significantly enhanced after many rounds of training, with both the macro-area under the curve (AUC) and the micro-AUC increasing from 0.94 to 0.97. By utilizing diffusion-model augmentation and segmentation-map integration, the model demonstrated superior efficacy in identifying Schatzker I, achieving an accuracy of 0.880. It yielded an accuracy of 0.898 for Schatzker II and III and 0.913 for Schatzker IV; for Schatzker V and VI, the accuracy was 0.887; and for intercondylar ridge fractures, the accuracy was 0.923. Conclusion: The dual-stream attention-based classification network, verified by extensive experiments, exhibited great potential in predicting the classification of TPFs. This method facilitates automatic TPF assessment and may assist surgeons in the rapid formulation of surgical plans.
Funding: Supported by the National Natural Science Foundation of China (51805078), the Project of the National Key Laboratory of Advanced Casting Technologies (CAT2023-002), and the 111 Project (B16009).
Abstract: The Segment Anything Model (SAM) is a cutting-edge model that has shown impressive performance in general object segmentation, and its release is a groundbreaking step towards a universal intelligent model. Owing to this performance, SAM quickly gained attention and interest, making it particularly attractive for industrial surface defect segmentation, especially in complex industrial scenes with limited training data. However, its segmentation ability in specific industrial scenes remains unknown. Therefore, in this work, we select three representative and complex industrial surface defect detection scenarios, namely strip steel surface defects, tile surface defects, and rail surface defects, to evaluate the segmentation performance of SAM. Our results show that although SAM has great potential in general object segmentation, it cannot achieve satisfactory performance in complex industrial scenes. Our test results are available at: https://github.com/VDT-2048/SAM-IS.
Abstract: Computed tomography (CT) is a commonly used technology in non-destructive testing of printed circuit boards (PCBs), and element segmentation of CT images is a key subsequent step. With the development of deep learning, researchers began to exploit the “pre-training and fine-tuning” paradigm for multi-element segmentation, reducing the time spent on manual annotation. However, existing element segmentation models focus only on overall pixel-level accuracy, ignoring whether element connectivity relationships are correctly identified. To this end, this paper proposes a PCB CT image element segmentation model that optimizes the semantic perception of connectivity relationships (OSPC-seg). The overall training follows the “pre-training and fine-tuning” process. A loss function that optimizes the semantic perception of circuit connectivity relationships (OSPC Loss) is designed to alleviate the class-imbalance problem and improve the correct connectivity rate. A correct connectivity rate (CCR) metric is also proposed to evaluate the model's ability to recognize connectivity relationships. Experiments show that the mIoU and CCR of OSPC-seg on our datasets are 90.1% and 97.0%, improvements of 1.5% and 1.6%, respectively, over the baseline model. Visualization results show that segmentation at connection positions is significantly improved, which also demonstrates the effectiveness of OSPC-seg.
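The abstract does not define CCR precisely; one plausible proxy, the fraction of ground-truth conductor components whose pixels fall within exactly one predicted component, can be sketched with plain-Python connected-component labelling (definition and names here are assumptions, not the paper's metric):

```python
from collections import deque

def components(grid):
    """4-connected component labelling of a binary grid (list of lists)."""
    H, W = len(grid), len(grid[0])
    label = [[0] * W for _ in range(H)]
    n = 0
    for i in range(H):
        for j in range(W):
            if grid[i][j] and not label[i][j]:
                n += 1
                q = deque([(i, j)])
                label[i][j] = n
                while q:  # breadth-first flood fill
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W and grid[ny][nx] and not label[ny][nx]:
                            label[ny][nx] = n
                            q.append((ny, nx))
    return label, n

def connectivity_rate(gt, pred):
    """Fraction of ground-truth components whose pixels all land in a
    single predicted component -- a proxy for a CCR-style metric."""
    gl, gn = components(gt)
    pl, _ = components(pred)
    ok = 0
    for c in range(1, gn + 1):
        hit = {pl[i][j] for i in range(len(gt))
               for j in range(len(gt[0])) if gl[i][j] == c}
        if len(hit) == 1 and 0 not in hit:
            ok += 1
    return ok / gn if gn else 1.0

gt   = [[1, 1, 0, 1], [0, 0, 0, 1]]   # two conductor traces
pred = [[1, 0, 0, 1], [0, 0, 0, 1]]   # the first trace is broken in the prediction
print(connectivity_rate(gt, pred))    # 0.5: one of the two traces survives intact
```

A metric of this shape penalizes exactly the failure the paper targets: a segmentation can score well at the pixel level while still breaking or merging circuit traces.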
Funding: Sponsored by the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2021A1515110680 and Guangzhou Basic and Applied Basic Research under Grant No. 202102020340.
Abstract: In this paper, we consider the Chan–Vese (C-V) model for image segmentation and obtain its numerical solution accurately and efficiently. To this end, we present a local radial basis function method based on a Gaussian kernel (GA-LRBF) for spatial discretization. Compared to the standard radial basis function method, this approach consumes less CPU time and maintains good stability because it uses only a small subset of points in the whole computational domain. Additionally, since the Gaussian function is dimensionally separable, the GA-LRBF method is well suited to isotropic images. Finally, a numerical scheme that couples GA-LRBF with the fourth-order Runge–Kutta method is applied to the C-V model, and a comparison of numerical results demonstrates that this scheme achieves much more reliable image segmentation.
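The locality that saves CPU time can be illustrated with a one-dimensional toy: build a Gaussian-kernel interpolant from only the k nearest centres of each evaluation point instead of solving one global system. The shape parameter and neighbourhood size below are illustrative, not the paper's:

```python
import numpy as np

def ga_lrbf_eval(x_eval, centers, values, eps=2.0, k=3):
    """Approximate f at each point of x_eval from scattered samples using
    a *local* Gaussian RBF interpolant built on the k nearest centres
    only; locality is what avoids one large global solve.
    (Toy 1-D sketch; eps and k are illustrative assumptions.)"""
    out = []
    for x in np.atleast_1d(x_eval):
        idx = np.argsort(np.abs(centers - x))[:k]            # k nearest centres
        c, v = centers[idx], values[idx]
        A = np.exp(-(eps * (c[:, None] - c[None, :])) ** 2)  # small local system
        w = np.linalg.solve(A, v)                            # interpolation weights
        out.append(np.exp(-(eps * (x - c)) ** 2) @ w)        # evaluate interpolant
    return np.array(out)

xs = np.linspace(0.0, 1.0, 9)
ys = np.sin(np.pi * xs)
approx = ga_lrbf_eval(np.array([0.5]), xs, ys, eps=2.0, k=3)
print(round(float(approx[0]), 3))  # reproduces sin(pi/2) = 1.0 at the node
```

Each evaluation solves only a k×k system, so the cost per point stays constant as the point cloud grows, instead of the cubic cost of a single global RBF solve.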
Abstract: Semantic segmentation enables fine-grained understanding of complex, diverse scenes and is a key technology for making unmanned systems work efficiently and intelligently. Large-scale unsupervised semantic segmentation aims to learn segmentation capability from large-scale unlabelled images. However, existing methods suffer from category confusion and poor shape representation in their self-learned pseudo-labels, which lowers the final segmentation accuracy. To address this, this paper proposes a Pseudo-label Denoising and SAM Optimization (PDSO) method for large-scale unsupervised semantic segmentation. A denoising-based feature fine-tuning module is designed: after screening potential samples with clean image-level pseudo-labels from the large-scale dataset under the small-loss criterion, these clean samples are used to fine-tune the pre-trained backbone network so that it acquires more robust category representations. To further reduce category noise in the pseudo-labels, a clustering-based sample denoising module removes noisy samples that interfere with the clustering task, based on category proportions and the distances between samples and cluster centres, thereby improving clustering performance. A SAM prompt optimization module is also designed: it identifies the active categories in an image from the clustering distances to filter out noisy targets, and uses points and boxes as SAM prompts to generate the expected object masks, refining object edges in the pseudo-labels. Experimental results show that on the test sets of the large-scale semantic segmentation datasets ImageNet-S_(50), ImageNet-S_(300), and ImageNet-S_(919), the proposed method achieves mean intersection-over-union scores of 45.0%, 26.6%, and 14.5%, respectively, significantly improving both the category accuracy and the edge precision of the segmented objects.
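The small-loss criterion used above to screen clean pseudo-labelled samples is simple to sketch; the keep ratio below is an illustrative assumption, since the abstract does not state the schedule PDSO uses:

```python
import numpy as np

def select_clean(losses, keep_ratio=0.6):
    """Small-loss criterion: treat the keep_ratio fraction of samples
    with the lowest loss as having clean pseudo-labels.
    (The exact ratio/schedule is an assumption, not PDSO's.)"""
    n_keep = max(1, int(len(losses) * keep_ratio))
    order = np.argsort(losses)        # ascending loss
    return np.sort(order[:n_keep])    # indices of the presumed-clean samples

losses = np.array([0.1, 2.3, 0.2, 1.9, 0.15])
print(select_clean(losses, keep_ratio=0.6).tolist())  # [0, 2, 4]
```

The intuition is that networks fit clean labels earlier than noisy ones, so low-loss samples are the safest set on which to fine-tune the backbone.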
Abstract: Brain tumors present significant challenges in medical diagnosis and treatment, where early detection is crucial for reducing morbidity and mortality rates. This research introduces a novel deep learning model, the Progressive Layered U-Net (PLU-Net), designed to improve brain tumor segmentation accuracy from magnetic resonance imaging (MRI) scans. The PLU-Net extends the standard U-Net architecture by incorporating progressive layering, attention mechanisms, and multi-scale data augmentation. The progressive layering involves a cascaded structure that refines segmentation masks across multiple stages, allowing the model to capture features at different scales and resolutions. Attention gates within the convolutional layers selectively focus on relevant features while suppressing irrelevant ones, enhancing the model's ability to delineate tumor boundaries. Additionally, multi-scale data augmentation increases the diversity of the training data and boosts the model's generalization. Evaluated on the BraTS 2021 dataset, the PLU-Net achieved state-of-the-art performance with a Dice coefficient of 0.91, specificity of 0.92, sensitivity of 0.89, and a Hausdorff95 of 2.5, outperforming other modified U-Net architectures in segmentation accuracy. These results underscore the effectiveness of the PLU-Net in improving brain tumor segmentation from MRI scans, supporting clinicians in early diagnosis, treatment planning, and the development of new therapies.
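For reference, the headline Dice coefficient reported above is computed as 2|A∩B|/(|A|+|B|) between predicted and ground-truth masks; a minimal NumPy version:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|); eps guards the empty-mask case."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=np.uint8); a[1:3, 1:3] = 1  # 4 predicted pixels
b = np.zeros((4, 4), dtype=np.uint8); b[1:3, 1:4] = 1  # 6 true pixels, 4 overlap
print(round(float(dice(a, b)), 2))  # 0.8
```

Unlike plain pixel accuracy, Dice is insensitive to the large background region, which is why it is the standard headline metric for tumor segmentation.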