Journal Articles
18,923 articles found.
Two-dimensional cross entropy multi-threshold image segmentation based on improved BBO algorithm (cited: 2)
1
Authors: LI Wei, HU Xiao-hui, WANG Hong-chuang. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2018, No. 1, pp. 42-49 (8 pages).
In order to improve the global search ability of the biogeography-based optimization (BBO) algorithm in multi-threshold image segmentation, a multi-threshold image segmentation method based on an improved BBO algorithm is proposed. When using the BBO algorithm to optimize thresholds, firstly, an elitist selection operator is used to retain the optimal set of solutions. Secondly, a migration strategy based on the fusion of good solutions and pending solutions is introduced to reduce the premature convergence and invalid migration of traditional migration operations. Thirdly, to reduce the blindness of traditional mutation operations, a mutation operation based on binary computation is created. The method is then applied to multi-threshold image segmentation with two-dimensional cross entropy. Finally, it is used to segment typical images and compared with two-dimensional multi-threshold segmentation based on the particle swarm optimization algorithm and on the standard BBO algorithm. The experimental results show that the method has good convergence stability, effectively shortens the iteration time, and outperforms the standard BBO algorithm in optimization performance.
Keywords: two-dimensional cross entropy; biogeography-based optimization (BBO) algorithm; multi-threshold image segmentation
Research on Kapur multi-threshold image segmentation based on improved sparrow search algorithm
2
Authors: Wu Jin, Feng Haoran, Chong Gege, Xiong Hao. The Journal of China Universities of Posts and Telecommunications, 2025, No. 2, pp. 31-43 (13 pages).
Multilevel threshold image segmentation divides an image into several regions with distinct characteristics. While effective, its computational complexity increases exponentially with the number of thresholds, highlighting the need for more efficient and stable methods. An improved sparrow search algorithm (ISSA) that combines multiple strategies to address the dependency on the initial population and the solution-accuracy issues of the basic sparrow search algorithm (SSA) is proposed in this paper. ISSA leverages circle chaotic mapping to enhance population diversity, a tangent flight operator to improve search diversity, and a triangular random walk to perturb the optimal solution, thereby enhancing global search capability and avoiding local optima. Performance evaluations on 16 benchmark functions demonstrate that ISSA surpasses the gray wolf optimizer (GWO), whale optimization algorithm (WOA), rat swarm optimizer (RSO), moth-flame optimization (MFO), and SSA in terms of search speed, accuracy, and robustness. When applied to multilevel threshold image segmentation, ISSA excels in Kapur's maximum entropy, peak signal-to-noise ratio (PSNR), structural similarity (SSIM), and feature similarity (FSIM), highlighting its significant research value and application potential in the field of image segmentation.
Keywords: image segmentation; sparrow search algorithm (SSA); multi-threshold; Kapur's maximum entropy
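Illustrative aside: since this entry hinges on Kapur's maximum-entropy criterion as the fitness function searched by ISSA, a minimal NumPy sketch of that objective is given below. The histogram normalization details and the example threshold vector are assumptions of this illustration, not details taken from the paper.

```python
import numpy as np

def kapur_entropy(hist, thresholds):
    """Kapur's maximum-entropy objective for multilevel thresholding.

    hist: 1-D array of gray-level counts (e.g. 256 bins).
    thresholds: iterable of threshold indices splitting the histogram.
    Returns the sum of class entropies; a metaheuristic such as SSA/ISSA
    would search for the threshold vector that maximizes this value.
    """
    p = hist.astype(np.float64)
    p /= p.sum()                                  # normalized histogram
    edges = [0, *sorted(int(t) for t in thresholds), len(p)]
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                        # class probability mass
        if w <= 0:
            continue
        q = p[lo:hi] / w
        q = q[q > 0]
        total += -(q * np.log(q)).sum()           # class entropy
    return total

# Example: evaluate one candidate threshold vector on a synthetic histogram.
hist = np.histogram(np.random.randint(0, 256, 10_000), bins=256, range=(0, 256))[0]
print(kapur_entropy(hist, [85, 170]))
```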
Stochastic Augmented-Based Dual-Teaching for Semi-Supervised Medical Image Segmentation
3
Authors: Hengyang Liu, Yang Yuan, Pengcheng Ren, Chengyun Song, Fen Luo. Computers, Materials & Continua (SCIE, EI), 2025, No. 1, pp. 543-560 (18 pages).
Existing semi-supervised medical image segmentation algorithms use copy-paste data augmentation to correct the labeled-unlabeled data distribution mismatch. However, current copy-paste methods have three limitations: (1) training the model solely with copy-paste mixed images from labeled and unlabeled input loses a lot of labeled information; (2) low-quality pseudo-labels can cause confirmation bias in pseudo-supervised learning on unlabeled data; (3) segmentation performance in low-contrast and local regions is less than optimal. We design a Stochastic Augmentation-Based Dual-Teaching Auxiliary Training Strategy (SADT), which enhances feature diversity and learns high-quality features to overcome these problems. More precisely, SADT trains the Student Network using pseudo-label-based training from Teacher Network 1 and supervised learning with labeled data, which prevents the loss of rare labeled data. We introduce a bi-directional copy-paste mask with progressive high-entropy filtering to reduce data distribution disparities and mitigate confirmation bias in pseudo-supervision. For the mixed images, Deep-Shallow Spatial Contrastive Learning (DSSCL) is proposed in the feature spaces of Teacher Network 2 and the Student Network to improve segmentation capabilities in low-contrast and local areas. In this procedure, the features retrieved by the Student Network are subjected to a random feature perturbation technique. Extensive trials on two openly available datasets show that the proposed SADT performs much better than state-of-the-art semi-supervised medical segmentation techniques. Using only 10% of the labeled data for training, SADT achieved a Dice score of 90.10% on the ACDC (Automatic Cardiac Diagnosis Challenge) dataset.
Keywords: semi-supervised; medical image segmentation; contrastive learning; stochastic augmentation
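Illustrative aside: the bi-directional copy-paste idea named above can be sketched with a single binary mask that mixes a labeled and an unlabeled image in both directions. The rectangular mask and the mixing convention below are assumptions of this sketch, not the paper's exact scheme.

```python
import numpy as np

def bidirectional_copy_paste(img_l, img_u, mask):
    """Mix a labeled and an unlabeled image in both directions.

    img_l, img_u: arrays of identical shape, e.g. (H, W).
    mask: binary array broadcastable to the image shape; 1 keeps the first
    argument's pixels, 0 takes them from the second argument.
    Returns (labeled-onto-unlabeled, unlabeled-onto-labeled) mixed images.
    """
    mixed_lu = mask * img_l + (1 - mask) * img_u
    mixed_ul = mask * img_u + (1 - mask) * img_l
    return mixed_lu, mixed_ul

# Illustrative rectangular mask covering the image center.
h, w = 128, 128
mask = np.zeros((h, w))
mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1
a, b = bidirectional_copy_paste(np.random.rand(h, w), np.random.rand(h, w), mask)
```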
Semantic Segmentation of Lumbar Vertebrae Using Meijering U-Net(MU-Net)on Spine Magnetic Resonance Images
4
Authors: Lakshmi S V V, Shiloah Elizabeth Darmanayagam, Sunil Retmin Raj Cyril. Computer Modeling in Engineering & Sciences (SCIE, EI), 2025, No. 1, pp. 733-757 (25 pages).
Lower back pain is one of the most common medical problems in the world, experienced by a huge percentage of people everywhere. Due to its ability to produce a detailed view of the soft tissues, including the spinal cord, nerves, intervertebral discs, and vertebrae, Magnetic Resonance Imaging is considered the most effective method for imaging the spine. The semantic segmentation of vertebrae plays a major role in the diagnostic process of lumbar diseases. It is difficult to semantically partition the vertebrae in Magnetic Resonance Images from the surrounding variety of tissues, including muscles, ligaments, and intervertebral discs. U-Net is a powerful deep-learning architecture for handling the challenges of medical image analysis tasks and achieves high segmentation accuracy. This work proposes a modified U-Net architecture, MU-Net, consisting of a Meijering convolutional layer that incorporates the Meijering filter to perform semantic segmentation of lumbar vertebrae L1 to L5 and sacral vertebra S1. Pseudo-colour mask images were generated and used as ground truth for training the model. The work was carried out on 1312 images expanded from T1-weighted mid-sagittal MRI images of 515 patients in the Lumbar Spine MRI Dataset publicly available from Mendeley Data. The proposed MU-Net model for semantic segmentation of the lumbar vertebrae performs well on this dataset, with 98.79% pixel accuracy (PA), 98.66% dice similarity coefficient (DSC), 97.36% Jaccard coefficient, and 92.55% mean Intersection over Union (mean IoU).
Keywords: computer aided diagnosis (CAD); magnetic resonance imaging (MRI); semantic segmentation; lumbar vertebrae; deep learning; U-Net model
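Illustrative aside: the Meijering ridge filter that the MU-Net convolutional layer builds on is available in scikit-image. A hedged pre-processing sketch follows; the scale parameters and ridge polarity are assumptions of this illustration, not settings from the paper.

```python
import numpy as np
from skimage.filters import meijering

def meijering_response(slice_2d, sigmas=(1, 2, 3)):
    """Multi-scale Meijering ridge response of a mid-sagittal MRI slice.

    slice_2d: 2-D float array in [0, 1].
    sigmas: scales of the ridge detector (illustrative values).
    The response emphasizes elongated structures and could feed a
    Meijering-style convolutional layer of the kind described above.
    """
    return meijering(slice_2d, sigmas=sigmas, black_ridges=False)

response = meijering_response(np.random.rand(256, 256))
print(response.shape)
```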
EACNet:Ensemble adversarial co-training neural network for handling missing modalities in MRI images for brain tumor segmentation
5
Authors: RAMADHAN Amran Juma, CHEN Jing, PENG Junlan. Journal of Measurement Science and Instrumentation, 2025, No. 1, pp. 11-25 (15 pages).
Brain tumor segmentation is critical in clinical diagnosis and treatment planning. Existing methods for brain tumor segmentation with missing modalities often struggle when dealing with multiple missing modalities, a common scenario in real-world clinical settings. These methods primarily focus on handling a single missing modality at a time, making them insufficiently robust for the additional complexity of incomplete data containing various missing-modality combinations. Additionally, most existing methods rely on single models, which may limit their performance and increase the risk of overfitting the training data. This work proposes a novel method called the ensemble adversarial co-training neural network (EACNet) for accurate brain tumor segmentation from multi-modal magnetic resonance imaging (MRI) scans with multiple missing modalities. The proposed method consists of three key modules: an ensemble of pre-trained models, which captures diverse feature representations from the MRI data; adversarial learning, which leverages a competitive training approach in which a generator model creates realistic missing data while sub-networks acting as discriminators learn to distinguish real data from the generated "fake" data; and a co-training framework, which utilizes the information extracted by the multimodal path (trained on complete scans) to guide the learning process in the path handling missing modalities. The model potentially compensates for missing information through co-training interactions by exploiting the relationships between the available modalities and the tumor segmentation task. EACNet was evaluated on the BraTS2018 and BraTS2020 challenge datasets and achieved state-of-the-art and competitive performance, respectively. Notably, the whole tumor (WT) dice similarity coefficient (DSC) reached 89.27%, surpassing existing methods. The analysis suggests that the ensemble approach offers potential benefits and that the adversarial co-training contributes to the increased robustness and accuracy of EACNet for brain tumor segmentation of MRI scans with missing modalities. The experimental results show that EACNet yields promising results for this task and is a strong candidate for real-world clinical applications.
Keywords: deep learning; magnetic resonance imaging (MRI); medical image analysis; semantic segmentation; segmentation accuracy; image synthesis
U-Net-Based Medical Image Segmentation:A Comprehensive Analysis and Performance Review
6
Authors: Aliyu Abdulfatah, Zhang Sheng, Yirga Eyasu Tenawerk. Journal of Electronic Research and Application, 2025, No. 1, pp. 202-208 (7 pages).
Medical image segmentation has become a cornerstone of many healthcare applications, allowing the automated extraction of critical information from images such as Computed Tomography (CT) scans, Magnetic Resonance Imaging (MRI), and X-rays. The introduction of U-Net in 2015 significantly advanced segmentation capabilities, especially for the small datasets commonly found in medical imaging. Since then, various modifications of the original U-Net architecture have been proposed to enhance segmentation accuracy and tackle challenges such as class imbalance, data scarcity, and multi-modal image processing. This paper provides a detailed review and comparison of several U-Net-based architectures, focusing on their effectiveness in medical image segmentation tasks. We evaluate performance metrics such as the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU) across different U-Net variants, including HmsU-Net, CrossU-Net, mResU-Net, and others. Our results indicate that architectural enhancements such as transformers, attention mechanisms, and residual connections improve segmentation performance across diverse medical imaging applications, including tumor detection, organ segmentation, and lesion identification. The study also identifies current challenges in the field, including data variability, limited dataset sizes, and class imbalance. Based on these findings, the paper suggests potential future directions for improving the robustness and clinical applicability of U-Net-based models in medical image segmentation.
Keywords: U-Net architecture; medical image segmentation; DSC; IoU; Transformer-based segmentation
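Illustrative aside: because the review compares variants by DSC and IoU, a short reference sketch of the two metrics on binary masks may help; the smoothing constant is an assumption added to avoid division by zero on empty masks.

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice Similarity Coefficient and Intersection over Union for binary masks.

    pred, target: boolean or {0, 1} arrays of the same shape.
    eps: small constant guarding against empty masks.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dice, iou

print(dice_and_iou(np.ones((4, 4)), np.eye(4)))
```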
Pre-trained SAM as data augmentation for image segmentation
7
Authors: Junjun Wu, Yunbo Rao, Shaoning Zeng, Bob Zhang. CAAI Transactions on Intelligence Technology, 2025, No. 1, pp. 268-282 (15 pages).
Data augmentation plays an important role in training deep neural models by expanding the size and diversity of the dataset. Initially, data augmentation mainly involved simple transformations of images. Later, in order to increase the diversity and complexity of data, more advanced methods appeared and evolved into sophisticated generative models. However, these methods require a large amount of computation for training or searching. In this paper, a novel training-free method that utilises the pre-trained Segment Anything Model (SAM) as a data augmentation tool (PTSAM-DA) is proposed to generate augmented annotations for images. Without any training, it obtains prompt boxes from the original annotations and then feeds the boxes to the pre-trained SAM to generate diverse and improved annotations. In this way, annotations are augmented more ingeniously than by simple manipulations, without incurring the huge computation of training a data augmentation model. Multiple comparative experiments are conducted on three datasets: an in-house dataset, ADE20K and COCO2017. On the in-house dataset, namely the Agricultural Plot Segmentation Dataset, maximum improvements of 3.77% and 8.92% are gained in two mainstream metrics, mIoU and mAcc, respectively. Consequently, large vision models like SAM prove to be promising not only for image segmentation but also for data augmentation.
Keywords: data augmentation; image segmentation; large model; segment anything model
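Illustrative aside: the box-prompting idea described above can be sketched with Meta's `segment_anything` package, deriving prompt boxes from an existing annotation mask and feeding them to a frozen SAM. The checkpoint path, model size, and helper names below are assumptions of this sketch, not the paper's code.

```python
import numpy as np
from segment_anything import SamPredictor, sam_model_registry  # Meta's SAM package

def boxes_from_mask(mask):
    """Derive per-object prompt boxes [x0, y0, x1, y1] from an annotation mask."""
    boxes = []
    for label in np.unique(mask):
        if label == 0:               # skip background
            continue
        ys, xs = np.nonzero(mask == label)
        boxes.append([xs.min(), ys.min(), xs.max(), ys.max()])
    return np.array(boxes)

# Checkpoint path and model size are illustrative assumptions.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

def augment_annotation(image_rgb, mask):
    """Regenerate (augment) annotations by prompting the frozen SAM with boxes."""
    predictor.set_image(image_rgb)   # expects an RGB uint8 image (H, W, 3)
    new_masks = []
    for box in boxes_from_mask(mask):
        m, _, _ = predictor.predict(box=box, multimask_output=False)
        new_masks.append(m[0])
    return new_masks
```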
BiCLIP-nnFormer:A Virtual Multimodal Instrument for Efficient and Accurate Medical Image Segmentation
8
Authors: Wang Bo, Yue Yan, Mengyuan Xu, Yuqun Yang, Xu Tang, Kechen Shu, Jingyang Ai, Zheng You. Instrumentation, 2025, No. 2, pp. 1-13 (13 pages).
Image segmentation is attracting increasing attention in the field of medical image analysis. Given its widespread utilization across various medical applications, ensuring and improving segmentation accuracy has become a crucial research topic. With advances in deep learning, researchers have developed numerous methods that combine Transformers and convolutional neural networks (CNNs) to create highly accurate models for medical image segmentation. However, efforts to further enhance accuracy by developing larger and more complex models, or by training on more extensive datasets, significantly increase computational resource consumption. To address this problem, we propose BiCLIP-nnFormer (the prefix "Bi" refers to the use of two distinct CLIP models), a virtual multimodal instrument that leverages CLIP models to enhance the segmentation performance of the medical segmentation model nnFormer. Since the two CLIP models (PMC-CLIP and CoCa-CLIP) are pre-trained on large datasets, they do not require additional training, thus conserving computational resources. These models are used offline to extract image and text embeddings from medical images. The embeddings are then processed by the proposed 3D CLIP adapter, which adapts the CLIP knowledge for segmentation tasks through fine-tuning. Finally, the adapted embeddings are fused with feature maps extracted from the nnFormer encoder to generate predicted masks. This process enriches the representation capabilities of the feature maps by integrating global multimodal information, leading to more precise segmentation predictions. We demonstrate the superiority of BiCLIP-nnFormer and the effectiveness of using CLIP models to enhance nnFormer through experiments on two public datasets, the Synapse multi-organ segmentation dataset (Synapse) and the Automatic Cardiac Diagnosis Challenge dataset (ACDC), as well as a self-annotated lung multi-category segmentation dataset (LMCS).
Keywords: medical image analysis; image segmentation; CLIP; feature fusion; deep learning
EILnet: An intelligent model for the segmentation of multiple fracture types in karst carbonate reservoirs using electrical image logs
9
Authors: Zhuolin Li, Guoyin Zhang, Xiangbo Zhang, Xin Zhang, Yuchen Long, Yanan Sun, Chengyan Lin. Natural Gas Industry B, 2025, No. 2, pp. 158-173 (16 pages).
Karst fractures serve as crucial seepage channels and storage spaces for carbonate natural gas reservoirs, and electrical image logs are vital data for visualizing and characterizing such fractures. However, the conventional approach of identifying fractures from electrical image logs predominantly relies on manual processes that are not only time-consuming but also highly subjective. In addition, the heterogeneity and strong dissolution tendency of karst carbonate reservoirs lead to complexity and variety in fracture geometry, which makes it difficult to accurately identify fractures. In this paper, the electrical image logs network (EILnet), a deep-learning-based intelligent semantic segmentation model with a selective attention mechanism and a selective feature fusion module, was created to enable the intelligent identification and segmentation of different types of fractures in electrical logging images. Data from electrical image logs representing structural and induced fractures were first selected using the sliding-window technique, after which image inpainting and data augmentation were applied to improve the generalizability of the model. Various image-processing tools, including the bilateral filter, Laplace operator, and Gaussian low-pass filter, were also applied to the electrical logging images to generate a multi-attribute dataset that helps the model learn the semantic features of the fractures. The results demonstrated that EILnet outperforms mainstream deep-learning semantic segmentation models, such as Fully Convolutional Networks (FCN-8s), U-Net, and SegNet, on both the single-channel dataset and the multi-attribute dataset. EILnet provided significant advantages on the single-channel dataset, with a mean intersection over union (MIoU) and pixel accuracy (PA) of 81.32% and 89.37%, respectively. On the multi-attribute dataset, the identification capability of all models improved to varying degrees, with EILnet achieving the highest MIoU and PA of 83.43% and 91.11%, respectively. Further, applying the EILnet model to various blind wells demonstrated its ability to provide reliable fracture identification, indicating promising potential applications.
Keywords: karst fracture identification; deep learning; semantic segmentation; electrical image logs; image processing
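Illustrative aside: the multi-attribute dataset described above stacks filter responses alongside the raw log image. A minimal OpenCV sketch is shown below; the specific parameter values (kernel sizes, sigmas) are assumptions of this illustration.

```python
import cv2
import numpy as np

def multi_attribute_stack(gray):
    """Stack filter responses of an electrical logging image into channels.

    gray: single-channel uint8 image.
    The bilateral filter, Laplacian, and Gaussian low-pass attributes mirror
    the pre-processing named in the abstract; parameter values are illustrative.
    """
    bilateral = cv2.bilateralFilter(gray, d=9, sigmaColor=75, sigmaSpace=75)
    laplacian = cv2.Laplacian(gray, ddepth=cv2.CV_8U, ksize=3)
    lowpass = cv2.GaussianBlur(gray, ksize=(5, 5), sigmaX=1.5)
    return np.stack([gray, bilateral, laplacian, lowpass], axis=-1)

attrs = multi_attribute_stack(np.random.randint(0, 256, (224, 224), dtype=np.uint8))
print(attrs.shape)  # (224, 224, 4)
```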
High-performance laser speckle contrast image vascular segmentation without delicate pseudo-label reliance
10
Authors: Shenglan Yao, Huiling Wu, Suzhong Fu, Shuting Ling, Kun Wang, Hongqin Yang, Yaqin He, Xiaolan Ma, Xiaofeng Ye, Xiaofei Wen, Qingliang Zhao. Journal of Innovative Optical Health Sciences, 2025, No. 1, pp. 117-133 (17 pages).
Laser speckle contrast imaging (LSCI) is a noninvasive, label-free technique that allows real-time investigation of the microcirculation of biological tissue. High-quality microvascular segmentation is critical for analyzing and evaluating vascular morphology and blood flow dynamics. However, achieving high-quality vessel segmentation has always been a challenge due to the cost and complexity of label acquisition and the irregular vascular morphology. In addition, supervised learning methods heavily rely on high-quality labels for accurate segmentation results, which often necessitates extensive labeling effort. Here, we propose a novel approach, LSWDP, for high-performance real-time vessel segmentation that utilizes low-quality pseudo-labels for non-matched training without relying on a substantial number of intricate labels or image pairing. Furthermore, we demonstrate that our method is more robust and effective in mitigating performance degradation than traditional segmentation approaches on datasets of diverse styles, even when confronted with unfamiliar data. Importantly, the dice similarity coefficient exceeded 85% in a rat experiment. Our study has the potential to efficiently segment and evaluate blood vessels in both normal and diseased conditions, which would greatly benefit future research in the life sciences and medicine.
Keywords: biomedical imaging; laser speckle contrast imaging; vessel segmentation; weakly supervised learning; microcirculation
UltraSegNet:A Hybrid Deep Learning Framework for Enhanced Breast Cancer Segmentation and Classification on Ultrasound Images
11
Authors: Suhaila Abuowaida, Hamza Abu Owida, Deema Mohammed Alsekait, Nawaf Alshdaifat, Diaa Salama Abd Elminaam, Mohammad Alshinwan. Computers, Materials & Continua, 2025, No. 5, pp. 3303-3333 (31 pages).
Segmenting a breast ultrasound image is still challenging due to the presence of speckle noise, dependency on the operator, and variation in image quality. This paper presents the UltraSegNet architecture, which addresses these challenges through three key technical innovations: (1) a modified ResNet-50 backbone with sequential 3×3 convolutions to keep the fine anatomical details needed for finding lesion boundaries; (2) a computationally efficient regional attention mechanism that works on high-resolution features without a transformer's extra memory cost; and (3) an adaptive feature fusion strategy that adapts local and global features based on how the image is being used. Extensive evaluation on two distinct datasets demonstrates UltraSegNet's superior performance: on the BUSI dataset, it obtains a precision of 0.915, a recall of 0.908, and an F1 score of 0.911; on the UDAIT dataset, it achieves robust performance across the board, with a precision of 0.901 and a recall of 0.894. Importantly, these improvements are achieved at clinically feasible computation times, taking 235 ms per image on standard GPU hardware. Notably, UltraSegNet performs remarkably well on difficult small lesions (less than 10 mm), achieving a detection accuracy of 0.891. This is a large improvement over traditional methods, which struggle with small-scale features and reach only 0.63-0.71 accuracy. This gain in small-lesion detection is particularly crucial for early-stage breast cancer identification. These results demonstrate that UltraSegNet can be practically deployed in clinical workflows to improve breast cancer screening accuracy.
Keywords: breast cancer; ultrasound image segmentation; classification; deep learning
A Novel Data-Annotated Label Collection and Deep-Learning Based Medical Image Segmentation in Reversible Data Hiding Domain
12
Authors: Lord Amoah, Jinwei Wang, Bernard-Marie Onzo. Computer Modeling in Engineering & Sciences, 2025, No. 5, pp. 1635-1660 (26 pages).
Medical image segmentation, i.e., labeling structures of interest in medical images, is crucial for disease diagnosis and treatment in radiology. In reversible data hiding in medical images (RDHMI), segmentation consists of only two regions: the focal and nonfocal regions. The focal region mainly contains information for diagnosis, while the nonfocal region serves as the monochrome background. The traditional segmentation methods currently utilized in RDHMI are inaccurate for complex medical images, and manual segmentation is time-consuming, poorly reproducible, and operator-dependent. Implementing state-of-the-art deep learning (DL) models would provide key benefits, but the lack of domain-specific labels for existing medical datasets makes this impossible. To address this problem, this study provides labels for existing medical datasets based on a hybrid segmentation approach to facilitate the implementation of DL segmentation models in this domain. First, an initial segmentation based on a 3×3 kernel is performed to analyze identified contour pixels before classifying pixels into focal and nonfocal regions. Then, several human expert raters evaluate and classify the generated labels into accurate and inaccurate labels. The inaccurate labels undergo manual segmentation by medical practitioners and are scored based on a hierarchical voting scheme before being assigned to the proposed dataset. To ensure the reliability and integrity of the proposed dataset, we evaluate the accurate automated labels against labels manually segmented by medical practitioners using five assessment metrics: dice coefficient, Jaccard index, precision, recall, and accuracy. The experimental results show that labels in the proposed dataset are consistent with the subjective judgment of human experts, with an average accuracy score of 94% and dice coefficient scores between 90% and 99%. The study further proposes a ResNet-UNet with concatenated spatial and channel squeeze and excitation (scSE) architecture for semantic segmentation to validate and illustrate the usefulness of the proposed dataset. The results demonstrate the superior performance of the proposed architecture in accurately separating the focal and nonfocal regions compared to state-of-the-art architectures. Dataset information is released at the following URL: https://www.kaggle.com/lordamoah/datasets (accessed on 31 March 2025).
Keywords: reversible data hiding; medical image segmentation; medical image dataset; deep learning
DGFE-Mamba:Mamba-Based 2D Image Segmentation Network
13
Authors: Junding Sun, Kaixin Chen, Shuihua Wang, Yudong Zhang, Zhaozhao Xu, Xiaosheng Wu, Chaosheng Tang. Journal of Bionic Engineering, 2025, No. 4, pp. 2135-2150 (16 pages).
In the field of medical image processing, combining global and local relationship modeling constitutes an effective strategy for precise segmentation. Prior research has established the validity of Convolutional Neural Networks (CNNs) in modeling local relationships. Conversely, Transformers have demonstrated their capability to effectively capture global contextual information. However, when utilized to address CNNs' limitations in modeling global relationships, Transformers are hindered by substantial computational complexity. To address this issue, we introduce Mamba, a State-Space Model (SSM) that exhibits exceptional proficiency in modeling long-range dependencies in sequential data. Given Mamba's demonstrated potential in 2D medical image segmentation in previous studies, we have designed a Dual-encoder Global-local Feature Extraction Network based on Mamba, termed DGFE-Mamba, to accurately capture and fuse long-range dependencies and local dependencies within multi-scale features. Compared to Transformer-based methods, the DGFE-Mamba model excels in comprehensive feature modeling and demonstrates significantly improved segmentation accuracy. To validate the effectiveness and practicality of DGFE-Mamba, we conducted tests on the Automatic Cardiac Diagnosis Challenge (ACDC) dataset, the Synapse multi-organ CT abdominal segmentation dataset, and the Colorectal Cancer Clinic (CVC-ClinicDB) dataset. The results show that DGFE-Mamba achieved Dice coefficients of 92.20, 83.67, and 94.13, respectively. These findings comprehensively validate the effectiveness and practicality of the proposed DGFE-Mamba architecture.
Keywords: medical image segmentation; Mamba; CNN; attention mechanism
WaveSeg-UNet model for overlapped nuclei segmentation from multi-organ histopathology images
14
Authors: Hameed Ullah Khan, Basit Raza, Muhammad Asad Iqbal Khan, Muhammad Faheem. CAAI Transactions on Intelligence Technology, 2025, No. 1, pp. 253-267 (15 pages).
Nuclei segmentation is a challenging task in histopathology images due to the small size of objects, low contrast, touching boundaries, and the complex structure of nuclei. Nuclei segmentation and counting play an important role in cancer identification and grading. In this study, WaveSeg-UNet, a lightweight model, is introduced to segment cancerous nuclei with touching boundaries. Residual blocks are used for feature extraction, and only one feature-extractor block is used at each level of the encoder and decoder. Images normally degrade in quality and lose important information during down-sampling. To overcome this loss, the discrete wavelet transform (DWT) is used alongside max-pooling in the down-sampling process, and the inverse DWT is used to regenerate the original resolution during up-sampling. In the bottleneck of the proposed model, atrous spatial channel pyramid pooling (ASCPP), a modified pyramid pooling with atrous layers that enlarges the receptive field, is used to extract effective high-level features. Spatial and channel-based attention focus on the location and class of the identified objects. Finally, the watershed transform is used as a post-processing technique to identify and refine the touching boundaries of nuclei. Nuclei are identified and counted to assist pathologists. Same-domain transfer learning is used to retrain the model for domain adaptability. Results of the proposed model are compared with state-of-the-art models, and it outperforms existing studies.
Keywords: deep learning; histopathology images; machine learning; nuclei segmentation; U-Net
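Illustrative aside: the DWT-based down-sampling described above can be sketched with PyWavelets, where the detail sub-bands kept at down-sampling allow an inverse transform to restore resolution during up-sampling. The choice of the Haar wavelet is an assumption of this sketch.

```python
import numpy as np
import pywt

def dwt_downsample(feature_map, wavelet="haar"):
    """Loss-aware 2x down-sampling via the 2-D discrete wavelet transform.

    feature_map: 2-D array (one channel of a feature map).
    Returns the four half-resolution sub-bands: approximation cA plus the
    detail bands cH, cV, cD that preserve the information normally lost.
    """
    cA, (cH, cV, cD) = pywt.dwt2(feature_map, wavelet)
    return cA, cH, cV, cD

def idwt_upsample(bands, wavelet="haar"):
    """Inverse transform recovering the original resolution."""
    cA, cH, cV, cD = bands
    return pywt.idwt2((cA, (cH, cV, cD)), wavelet)

x = np.random.rand(64, 64)
print(idwt_upsample(dwt_downsample(x)).shape)  # (64, 64)
```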
A quantitative evaluation method of laser treatment efficacy for pigmentary dermatosis based on image segmentation technology
15
Authors: Haopu Jian, Qi Chen, Youjun Yu, Cheng Wang, Peiru Wang, Xiuli Wang. Journal of Innovative Optical Health Sciences, 2025, No. 4, pp. 89-99 (11 pages).
Skin laser treatments have rapidly evolved and play a growing role in dermatology; laser treatment is now used for a variety of pigmentary dermatoses as well as aesthetic problems. The standardized assessment of laser treatment efficacy is crucial for the interpretation and comparison of studies on laser treatment of skin disorders. In this study, we propose an evaluation method to quantitatively assess laser treatment efficacy based on image segmentation technology. A tattoo model of Sprague Dawley (SD) rats was established and treated with picosecond laser treatments at varying energy levels. Images of the tattoo models were captured before and after laser treatment, and feature extraction was conducted to quantify the tattooed area and pigment gradation. Subsequently, the clearance rate, a standardized parameter, was calculated. The results indicate that the clearance rates obtained through this quantitative algorithm are comparable and exhibit smaller standard deviations than scale scores (4.59% versus 7.93% in the low-energy group, 4.01% versus 9.05% in the medium-energy group, and 4.29% versus 10.23% in the high-energy group). This underscores the greater accuracy, objectivity, and reproducibility in assessing treatment responses. The quantitative evaluation of pigment removal holds promise for facilitating faster and more robust assessments in research and development. Additionally, it may enable the optimization of treatments tailored to individual patients, thereby contributing to more effective and personalized dermatological care.
Keywords: pigmentary dermatosis; picosecond laser; image segmentation; quantitative evaluation
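Illustrative aside: a minimal sketch of an area-based clearance rate computed from segmented pigment masks follows. The simple ratio (A_before - A_after) / A_before is an assumption about how the standardized parameter could be derived from segmentations, not the paper's exact definition.

```python
import numpy as np

def clearance_rate(mask_before, mask_after):
    """Pigment clearance rate from segmented tattoo masks.

    mask_before, mask_after: binary masks of the pigmented area before and
    after laser treatment. Returns the fraction of the original pigmented
    area that has been cleared (illustrative area-ratio definition).
    """
    a0 = float(np.count_nonzero(mask_before))
    a1 = float(np.count_nonzero(mask_after))
    if a0 == 0:
        return 0.0
    return (a0 - a1) / a0

before = np.zeros((100, 100)); before[20:80, 20:80] = 1
after = np.zeros((100, 100));  after[35:65, 35:65] = 1
print(f"clearance: {clearance_rate(before, after):.1%}")
```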
Med-ReLU: A Parameter-Free Hybrid Activation Function for Deep Artificial Neural Network Used in Medical Image Segmentation
16
Authors: Nawaf Waqas, Muhammad Islam, Muhammad Yahya, Shabana Habib, Mohammed Aloraini, Sheroz Khan. Computers, Materials & Continua, 2025, No. 8, pp. 3029-3051 (23 pages).
Deep learning (DL), which derives from the domain of Artificial Neural Networks (ANNs), forms the core of modern deep-learning algorithms. DL segmentation models rely on layer-by-layer convolution-based feature representation, guided by forward and backward propagation. A critical aspect of this process is the selection of an appropriate activation function (AF) to ensure robust model learning. However, existing activation functions often fail to effectively address the vanishing gradient problem or are complicated by the need for manual parameter tuning. Most current research on activation function design focuses on classification tasks using natural image datasets such as MNIST, CIFAR-10, and CIFAR-100. To address this gap, this study proposes Med-ReLU, a novel activation function specifically designed for medical image segmentation. Med-ReLU prevents deep learning models from suffering dead neurons or vanishing gradients. It is a hybrid activation function that combines the properties of ReLU and Softsign. For positive inputs, Med-ReLU adopts the linear behavior of ReLU to avoid vanishing gradients, while for negative inputs it exhibits Softsign's polynomial convergence, ensuring robust training and avoiding inactive neurons across the training set. The training performance and segmentation accuracy of Med-ReLU have been thoroughly evaluated, demonstrating stable learning behavior and resistance to overfitting. It consistently outperforms state-of-the-art activation functions in medical image segmentation tasks. Designed as a parameter-free function, Med-ReLU is simple to implement in complex deep learning architectures, and its effectiveness spans various neural network models and anomaly detection scenarios.
Keywords: medical image segmentation; U-Net; deep learning models; activation function
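Illustrative aside: a minimal PyTorch sketch of the hybrid activation as described above (linear ReLU branch for positive inputs, Softsign branch x / (1 + |x|) for the rest). The exact piecewise form is an assumption based on the abstract's description, not the paper's code.

```python
import torch

def med_relu(x: torch.Tensor) -> torch.Tensor:
    """Parameter-free hybrid activation sketched from the description above.

    Positive inputs pass through linearly (ReLU-like, avoiding vanishing
    gradients); non-positive inputs follow the Softsign curve x / (1 + |x|),
    keeping a non-zero gradient so neurons do not die.
    """
    return torch.where(x > 0, x, x / (1 + x.abs()))

x = torch.linspace(-5, 5, 11, requires_grad=True)
y = med_relu(x)
y.sum().backward()
print(x.grad)  # gradient is non-zero everywhere
```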
Positional Information is a Strong Supervision for Volumetric Medical Image Segmentation
17
Authors: ZHAO Yinjie, HOU Runping, ZENG Wanqin, QIN Yulei, SHEN Tianle, XU Zhiyong, FU Xiaolong, SHEN Hongbin. Journal of Shanghai Jiaotong University (Science), 2025, No. 1, pp. 121-129 (9 pages).
Medical image segmentation is a crucial preliminary step for a number of downstream diagnosis tasks. As deep convolutional neural networks successfully promote the development of computer vision, it is possible to make medical image segmentation a semi-automatic procedure by applying deep convolutional neural networks to find the contours of regions of interest, which are then revised by radiologists. However, supervised learning necessitates large annotated datasets, which are difficult to acquire, especially for medical images. Self-supervised learning is able to take advantage of unlabeled data and provide a good initialization to be fine-tuned for downstream tasks with limited annotations. Considering that most self-supervised learning methods, especially contrastive learning methods, are tailored to natural image classification and entail expensive GPU resources, we propose a novel and simple pretext-based self-supervised learning method that exploits the value of positional information in volumetric medical images. Specifically, we regard spatial coordinates as pseudo labels and pretrain the model by predicting the positions of randomly sampled 2D slices in volumetric medical images. Experiments on four semantic segmentation datasets demonstrate the superiority of our method over other self-supervised learning methods in both semi-supervised learning and transfer learning settings. Code is available at https://github.com/alienzyj/PPos.
Keywords: self-supervised learning; medical image analysis; semantic segmentation
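Illustrative aside: the positional pretext task described above can be sketched by pairing randomly sampled 2D slices with their depth coordinates as pseudo labels. Normalizing the slice index to [0, 1] and sampling along the first axis are assumptions of this sketch.

```python
import numpy as np

def sample_positional_pairs(volume, n_samples=8, rng=None):
    """Build (slice, pseudo-label) pairs for a positional pretext task.

    volume: 3-D array (D, H, W). Each randomly sampled slice is paired with
    its normalized depth coordinate, which a 2-D network is then pretrained
    to regress before fine-tuning on segmentation.
    """
    rng = rng or np.random.default_rng()
    depth = volume.shape[0]
    idx = rng.integers(0, depth, size=n_samples)
    slices = volume[idx]                # (n_samples, H, W)
    labels = idx / (depth - 1)          # pseudo positional labels in [0, 1]
    return slices, labels

vol = np.random.rand(160, 224, 224)
x, y = sample_positional_pairs(vol)
print(x.shape, y)
```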
A medical image segmentation model based on SAM with an integrated local multi-scale feature encoder
18
Authors: DI Jing, ZHU Yunlong, LIANG Chan. Journal of Measurement Science and Instrumentation, 2025, No. 3, pp. 359-370 (12 pages).
Despite its remarkable performance on natural images, the segment anything model (SAM) lacks domain-specific information for medical imaging and faces the challenge of losing local multi-scale information in the encoding phase. This paper presents a medical image segmentation model based on SAM with a local multi-scale feature encoder (LMSFE-SAM) to address these issues. Firstly, building on SAM, a local multi-scale feature encoder is introduced to improve the representation of features within the local receptive field, thereby supplying the Vision Transformer (ViT) branch in SAM with enriched local multi-scale contextual information. At the same time, a multiaxial Hadamard product module (MHPM) is incorporated into the local multi-scale feature encoder in a lightweight manner to reduce quadratic complexity and noise interference. Subsequently, a cross-branch balancing adapter is designed to balance the local and global information between the local multi-scale feature encoder and the ViT encoder in SAM. Finally, to obtain a smaller input image size and to mitigate overlapping in patch embeddings, the input size is reduced from 1024×1024 pixels to 256×256 pixels, and a multidimensional information adaptation component is developed, comprising feature adapters, position adapters, and channel-spatial adapters. This component effectively integrates the information from small-sized medical images into SAM, enhancing its suitability for clinical deployment. The proposed model demonstrates an average improvement ranging from 0.0387 to 0.3191 across six objective evaluation metrics on the BUSI, DDTI, and TN3K datasets compared to eight other representative image segmentation models. This significantly enhances the performance of SAM on medical images, providing clinicians with a powerful tool for clinical diagnosis.
Keywords: segment anything model (SAM); medical image segmentation; encoder; decoder; multiaxial Hadamard product module (MHPM); cross-branch balancing adapter
DMHFR:Decoder with Multi-Head Feature Receptors for Tract Image Segmentation
19
Authors: Jianuo Huang, Bohan Lai, Weiye Qiu, Caixu Xu, Jie He. Computers, Materials & Continua, 2025, No. 3, pp. 4841-4862 (22 pages).
The self-attention mechanism of Transformers, which captures long-range contextual information, has demonstrated significant potential in image segmentation. However, their ability to learn local, contextual relationships between pixels requires further improvement. Previous methods face challenges in efficiently managing multi-scale features of different granularities from the encoder backbone, leaving room for improvement in their global representation and feature extraction capabilities. To address these challenges, we propose a novel Decoder with Multi-Head Feature Receptors (DMHFR), which receives multi-scale features from the encoder backbone and organizes them into three feature groups of different granularities: coarse, fine-grained, and the full set. These groups are subsequently processed by Multi-Head Feature Receptors (MHFRs) after feature capture and modeling operations. The MHFRs include two Three-Head Feature Receptors (THFRs) and one Four-Head Feature Receptor (FHFR). Each group of features is passed through these MHFRs and then fed into axial transformers, which help the model capture long-range dependencies within the features. The three MHFRs produce three distinct feature outputs; the output from the FHFR serves as auxiliary features in the prediction head, and the prediction outputs and their losses are eventually aggregated. Experimental results show that the Transformer using DMHFR outperforms 15 state-of-the-art (SOTA) methods on five public datasets. Specifically, it achieved significant improvements in mean DICE scores over the classic Parallel Reverse Attention Network (PraNet) method, with gains of 4.1%, 2.2%, 1.4%, 8.9%, and 16.3% on the CVC-ClinicDB, Kvasir-SEG, CVC-T, CVC-ColonDB, and ETIS-LaribPolypDB datasets, respectively.
Keywords: medical image segmentation; feature exploration; feature aggregation; deep learning; multi-head feature receptor
Multi-Stage Hierarchical Feature Extraction for Efficient 3D Medical Image Segmentation
20
Authors: Jion Kim, Jayeon Kim, Byeong-Seok Shin. Computers, Materials & Continua, 2025, No. 6, pp. 5429-5443 (15 pages).
Research has been conducted to reduce resource consumption in 3D medical image segmentation for diverse resource-constrained environments. However, decreasing the number of parameters to enhance computational efficiency can also lead to performance degradation. Moreover, these methods face challenges in balancing global and local features, increasing the risk of errors in multi-scale segmentation. This issue is particularly pronounced when segmenting small and complex structures within the human body. To address this problem, we propose a multi-stage hierarchical architecture composed of a detector and a segmentor. The detector extracts regions of interest (ROIs) in a 3D image, while the segmentor performs segmentation within the extracted ROI. Removing unnecessary areas in the detector allows segmentation to be performed on a more compact input. The segmentor is designed with multiple stages, where each stage utilizes a different input size, and it implements a stage-skipping mechanism that deactivates certain stages based on the initial input size. This approach minimizes unnecessary computation by segmenting only the essential regions, reducing computational overhead. The proposed framework preserves segmentation performance while reducing resource consumption, enabling segmentation even in resource-constrained environments.
Keywords: volumetric segmentation; 3D medical images; computational resources
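Illustrative aside: the detect-then-segment pipeline described above can be sketched generically, cropping the volume to the detected ROI before running a segmentor on the compact sub-volume. The detector and segmentor below are placeholder callables standing in for the paper's networks, and the fixed voxel margin is an assumption of this sketch.

```python
import numpy as np

def detect_then_segment(volume, detector, segmentor, margin=8):
    """Two-stage pipeline: detect an ROI, then segment only inside it.

    volume: 3-D array (D, H, W).
    detector: callable returning a coarse binary ROI mask of the same shape.
    segmentor: callable mapping a cropped sub-volume to a label map of the
    same shape as the crop.
    """
    roi = detector(volume).astype(bool)
    if not roi.any():
        return np.zeros_like(volume, dtype=np.uint8)
    zs, ys, xs = np.nonzero(roi)
    lo = np.maximum(np.array([zs.min(), ys.min(), xs.min()]) - margin, 0)
    hi = np.minimum(np.array([zs.max(), ys.max(), xs.max()]) + margin + 1, volume.shape)
    crop = volume[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    out = np.zeros_like(volume, dtype=np.uint8)
    out[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = segmentor(crop)
    return out

# Toy usage with placeholder callables.
vol = np.random.rand(64, 96, 96)
seg = detect_then_segment(vol, lambda v: v > 0.9, lambda c: (c > 0.5).astype(np.uint8))
print(seg.shape)
```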