Zanthoxylum bungeanum Maxim, generally called prickly ash, is widely grown in China. Zanthoxylum rust is the main disease affecting the growth and quality of Zanthoxylum. Traditional methods for recognizing the degree of infection of Zanthoxylum rust rely mainly on manual experience. Because of the complex colors and shapes of rust areas, the accuracy of manual recognition is low and difficult to quantify. In recent years, the application of artificial intelligence technology in agriculture has gradually increased. In this paper, building on the DeepLabV2 model, we propose a Zanthoxylum rust image segmentation model based on the FASPP module and enhanced features of rust areas. We also construct a fine-grained Zanthoxylum rust image dataset, in which rust images are segmented and labeled according to leaves, spore piles, and brown lesions. Experimental results show that the proposed segmentation method is effective: the segmentation accuracy rates of leaves, spore piles, and brown lesions reach 99.66%, 85.16%, and 82.47%, respectively; MPA reaches 91.80%, and MIoU reaches 84.99%. The proposed model is also efficient, processing 22 images per minute. This work provides an intelligent method for efficiently and accurately recognizing the degree of infection of Zanthoxylum rust.
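The MPA and MIoU figures quoted above are simple averages of per-class statistics. As a hedged illustration (the pixel counts below are invented, not the paper's confusion matrix), the two metrics can be computed as:

```python
def class_iou(tp, fp, fn):
    """Per-class IoU from pixel counts: TP / (TP + FP + FN)."""
    return tp / (tp + fp + fn)

def class_pixel_accuracy(tp, fn):
    """Per-class pixel accuracy (recall): TP / (TP + FN)."""
    return tp / (tp + fn)

# Hypothetical pixel counts (tp, fp, fn) for the three labeled classes.
counts = {
    "leaf":         (9800, 100,  50),
    "spore_pile":   ( 700, 120, 130),
    "brown_lesion": ( 500, 110, 100),
}
accs = [class_pixel_accuracy(tp, fn) for tp, _, fn in counts.values()]
ious = [class_iou(tp, fp, fn) for tp, fp, fn in counts.values()]
mpa  = sum(accs) / len(accs)   # mean pixel accuracy over classes
miou = sum(ious) / len(ious)   # mean intersection over union
print(round(mpa, 4), round(miou, 4))  # 0.8905 0.8087
```

Since IoU also penalizes false positives, MIoU is never larger than MPA for the same predictions, which matches the ordering of the reported numbers.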
Convolutional neural network (CNN)-based technologies have been widely used in medical image segmentation because of their strong representation and generalization abilities. However, because CNNs cannot effectively capture global information from images, they often lose contours and textures in segmentation results. The Transformer model, by contrast, effectively captures long-range dependencies in an image, and combining a CNN with a Transformer can extract both local details and global contextual features. Motivated by this, we propose a multi-branch and multi-scale attention network (M2ANet) for medical image segmentation, whose architecture consists of three components. In the first component, we construct an adaptive multi-branch patch module for parallel extraction of image features to reduce the information loss caused by downsampling. In the second component, we apply a residual block to the well-known convolutional block attention module to enhance the network's ability to recognize important image features and to alleviate gradient vanishing. In the third component, we design a multi-scale feature fusion module, in which adaptive average pooling and position encoding enhance contextual features, and multi-head attention is then introduced to further enrich the feature representation. Finally, we validate the effectiveness and feasibility of M2ANet through comparative experiments on four benchmark medical image segmentation datasets, particularly with respect to preserving contours and textures.
Organoids possess immense potential for unraveling the intricate functions of human tissues and facilitating preclinical disease treatment. Their applications span high-throughput drug screening and the modeling of complex diseases, with some even achieving clinical translation. Changes in the overall size, shape, boundary, and other morphological features of organoids provide a noninvasive means of assessing organoid drug sensitivity. However, precise segmentation of organoids in bright-field microscopy images is made difficult by the complexity of organoid morphology and by interference from overlapping organoids, bubbles, dust particles, and cell fragments. This paper introduces the precision organoid segmentation technique (POST), a deep-learning algorithm for segmenting challenging organoids under simple bright-field imaging conditions. Unlike existing methods, POST accurately segments each organoid and eliminates the various artifacts encountered during organoid culturing and imaging. Furthermore, it is sensitive to and aligns with measurements of organoid activity in drug sensitivity experiments. POST is expected to be a valuable tool for organoid-based drug screening owing to its capability of automatically and rapidly eliminating interfering substances, thereby streamlining organoid analysis and the drug screening process.
Despite its remarkable performance on natural images, the segment anything model (SAM) lacks domain-specific information in medical imaging and loses local multi-scale information in the encoding phase. This paper presents a medical image segmentation model based on SAM with a local multi-scale feature encoder (LMSFE-SAM) to address these issues. First, a local multi-scale feature encoder is introduced on top of SAM to improve the representation of features within the local receptive field, supplying the Vision Transformer (ViT) branch of SAM with enriched local multi-scale contextual information. At the same time, a multiaxial Hadamard product module (MHPM) is incorporated into the local multi-scale feature encoder in a lightweight manner to reduce quadratic complexity and noise interference. Subsequently, a cross-branch balancing adapter is designed to balance local and global information between the local multi-scale feature encoder and the ViT encoder in SAM. Finally, to accommodate smaller inputs and mitigate overlap in patch embeddings, the input image size is reduced from 1024×1024 pixels to 256×256 pixels, and a multidimensional information adaptation component is developed, comprising feature adapters, position adapters, and channel-spatial adapters. This component effectively integrates the information from small-sized medical images into SAM, enhancing its suitability for clinical deployment. The proposed model demonstrates an average improvement ranging from 0.0387 to 0.3191 across six objective evaluation metrics on the BUSI, DDTI, and TN3K datasets compared with eight other representative image segmentation models. This significantly enhances the performance of SAM on medical images, providing clinicians with a powerful tool for clinical diagnosis.
Background: Diabetic macular edema is a prevalent retinal condition and a leading cause of visual impairment among diabetic patients. Early detection of affected areas is beneficial for effective diagnosis and treatment. Traditionally, diagnosis relies on optical coherence tomography imaging interpreted by ophthalmologists; however, this manual interpretation is often slow and subjective. Developing automated segmentation of macular edema images is therefore essential to improve diagnostic efficiency and accuracy. Methods: We propose a SegNet network structure integrated with a convolutional block attention module (CBAM). The network introduces a multi-scale input module, the CBAM attention mechanism, and skip connections. The multi-scale input module enhances the network's perceptual capabilities, while the lightweight CBAM effectively fuses relevant features across channel and spatial dimensions, allowing better learning of information at varying levels. Results: Experimental results demonstrate that the proposed network achieves an IoU of 80.127% and an accuracy of 99.162%. Compared with traditional segmentation networks, this model has fewer parameters, faster training and testing speed, and superior performance on semantic segmentation tasks, indicating high practical applicability. Conclusion: The proposed C-SegNet enables accurate segmentation of diabetic macular edema lesion images, facilitating quicker diagnosis for healthcare professionals.
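CBAM's channel branch squeezes each channel to a descriptor and learns a gate for it with a shared MLP over average- and max-pooled features. The weight-free sketch below illustrates only the squeeze-and-gate idea, not the learned CBAM module itself; gamma is an illustrative scale, not a CBAM parameter:

```python
import math

def channel_gate(feature_maps, gamma=1.0):
    """Weight-free sketch of channel attention in the spirit of CBAM's
    channel branch: squeeze each channel by global average pooling,
    pass the pooled value through a sigmoid gate, and rescale the
    channel by that gate. (CBAM itself learns the gating with a shared
    MLP over both average- and max-pooled descriptors.)"""
    gated = []
    for ch in feature_maps:                           # ch: 2D list (H x W)
        pooled = sum(map(sum, ch)) / (len(ch) * len(ch[0]))
        g = 1.0 / (1.0 + math.exp(-gamma * pooled))   # sigmoid gate
        gated.append([[v * g for v in row] for row in ch])
    return gated
```

A channel with a larger average response receives a gate closer to 1 and is passed through more strongly, which is the basic re-weighting effect the abstract refers to.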
Microscopy imaging is fundamental in analyzing bacterial morphology and dynamics, offering critical insights into bacterial physiology and pathogenicity. Image segmentation techniques enable quantitative analysis of bacterial structures, facilitating precise measurement of morphological variations and population behaviors at single-cell resolution. This paper reviews advancements in bacterial image segmentation, emphasizing the shift from traditional thresholding and watershed methods to deep learning-driven approaches. Convolutional neural networks (CNNs), U-Net architectures, and three-dimensional (3D) frameworks excel at segmenting dense biofilms and resolving antibiotic-induced morphological changes. These methods combine automated feature extraction with physics-informed postprocessing. Despite progress, challenges persist in computational efficiency, cross-species generalizability, and integration with multimodal experimental workflows. Future progress will depend on improving model robustness across species and imaging modalities, integrating multimodal data for phenotype-function mapping, and developing standard pipelines that link computational tools with clinical diagnostics. These innovations will expand microbial phenotyping beyond structural analysis, enabling deeper insights into bacterial physiology and ecological interactions.
Medical image segmentation is of critical importance in the domain of contemporary medical imaging. However, U-Net and its variants exhibit limitations in capturing complex nonlinear patterns and global contextual information. Although the subsequent U-KAN model enhances nonlinear representation capabilities, it still faces challenges such as gradient vanishing during deep network training and spatial detail loss during feature downsampling, resulting in insufficient segmentation accuracy for edge structures and minute lesions. To address these challenges, this paper proposes the RE-UKAN model, which innovatively improves upon U-KAN. Firstly, a residual network is introduced into the encoder to effectively mitigate gradient vanishing through cross-layer identity mappings, thus enhancing modelling capabilities for complex pathological structures. Secondly, Efficient Local Attention (ELA) is integrated to suppress spatial detail loss during downsampling, thereby improving the perception of edge structures and minute lesions. Experimental results on four public datasets demonstrate that RE-UKAN outperforms existing medical image segmentation methods across multiple evaluation metrics, with particularly outstanding performance on the TN-SCUI 2020 dataset, achieving an IoU of 88.18% and a Dice of 93.57%. Compared to the baseline model, it achieves improvements of 3.05% and 1.72%, respectively. These results fully demonstrate RE-UKAN's superior detail retention capability and boundary recognition accuracy in complex medical image segmentation tasks, providing a reliable solution for clinical precision segmentation.
Semantic segmentation for mixed scenes of aerial remote sensing and road traffic is one of the key technologies for visual perception of flying cars. State-of-the-Art (SOTA) semantic segmentation methods have made remarkable achievements in both fine-grained segmentation and real-time performance. However, when faced with the huge differences in scale and semantic categories brought about by mixed scenes of aerial remote sensing and road traffic, they still face great challenges, and there is little related research. To address this issue, this paper proposes a semantic segmentation model specifically for mixed datasets of aerial remote sensing and road traffic scenes. First, a novel decoding-recoding multi-scale feature iterative refinement structure is proposed, which uses the re-integration and continuous enhancement of multi-scale information to handle the huge scale differences between cross-domain scenes, while a fully convolutional structure ensures the lightweight and real-time requirements. Second, a well-designed cross-window attention mechanism combined with a global information integration decoding block forms enhanced global context perception, which effectively captures the long-range dependencies and multi-scale global context information of different scenes, thereby achieving fine-grained semantic segmentation. The proposed method is tested on a large-scale mixed dataset of aerial remote sensing and road traffic scenes. The results confirm that it effectively handles the large scale differences of cross-domain scenes; its segmentation accuracy surpasses that of the SOTA methods while meeting real-time requirements.
Images taken in dim environments frequently exhibit issues such as insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges for dark image enhancement tasks. Current approaches, while effective in global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between luminance and color channels introduces additional challenges to accurate enhancement. In response to these difficulties, we introduce a single-stage framework, M2ATNet, built on multi-scale multi-attention and a Transformer architecture. First, to address texture blurring and residual noise, we design a multi-scale multi-attention denoising module (MMAD), which is applied separately to the luminance and color channels to enhance structural and texture modeling capabilities. Second, to solve the non-alignment of the luminance and color channels, we introduce the multi-channel feature fusion Transformer (CFFT) module, which effectively recovers dark details and corrects color shifts through cross-channel alignment and deep feature interaction. To guide the model to learn more stably and efficiently, we also fuse multiple types of loss functions into a hybrid loss term. We extensively evaluate the proposed method on standard datasets including LOL-v1, LOL-v2, DICM, LIME, and NPE. Evaluation in terms of numerical metrics and visual quality demonstrates that M2ATNet consistently outperforms existing advanced approaches. Ablation studies further confirm the critical contributions of the MMAD and CFFT modules to detail preservation and visual fidelity in challenging illumination-deficient environments.
This article studies the problem of image segmentation-based semantic communication in autonomous driving. In real traffic scenes, detecting objects such as vehicles and pedestrians is critical to driving safety, yet this is often ignored in existing works. We therefore propose a vehicular image segmentation-oriented semantic communication system, termed VIS-SemCom, which focuses on transmitting and recovering the image semantic features of highly important objects to reduce transmission redundancy. First, we develop a semantic codec based on the Swin Transformer architecture, which expands the perceptual field and thus improves segmentation accuracy. To improve the accuracy on important objects, we propose a multi-scale semantic extraction method that assigns the number of Swin Transformer blocks per resolution of semantic features. We also devise an importance-aware loss incorporating importance levels, and propose an online hard example mining (OHEM) strategy to handle small-sample issues in the dataset. Finally, experimental results demonstrate that, compared with baseline image communication, the proposed VIS-SemCom achieves significant mean intersection over union (mIoU) performance across SNR regions, reduces the transmitted data volume by about 60% at 60% mIoU, and improves the segmentation accuracy of important objects.
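The OHEM strategy mentioned above is, at its core, a loss that averages only the hardest examples. A minimal sketch (keep_ratio and the loss values are illustrative, not from the paper):

```python
def ohem_loss(pixel_losses, keep_ratio=0.25):
    """Online hard example mining (OHEM) sketch: average only the
    highest-loss fraction of examples, so scarce hard examples are not
    drowned out by many easy ones."""
    k = max(1, int(len(pixel_losses) * keep_ratio))
    hardest = sorted(pixel_losses, reverse=True)[:k]
    return sum(hardest) / k

# Easy background pixels dominate the plain mean but not the OHEM loss.
losses = [0.05] * 6 + [0.9, 0.8]
print(round(ohem_loss(losses, keep_ratio=0.25), 2))  # 0.85
```

Here the plain mean would be pulled down to about 0.25 by the six easy pixels, while OHEM keeps the gradient signal focused on the two hard ones.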
The rising need for precision farming and sustainable land management has catalyzed the requirement for sophisticated means of deriving practical data from remote sensing images. Image segmentation, the process of dividing an image into semantically relevant parts, has become a groundbreaking technology for transitioning from pixel-level data to parcel-level analysis. This review synthesizes segmentation methods and their use in crop research and geospatial science. The architectures of pixel-based, object-based, and deep learning approaches (convolutional neural networks, U-Net, Mask R-CNN, and Transformer models) are considered in terms of principles, capabilities, and limitations. Multi-spectral, hyperspectral, LiDAR, and SAR data are integrated to improve the efficiency of segmentation, enabling the delineation of fields, the classification of crops, health monitoring, yield monitoring, and stress identification. Beyond agriculture, segmentation supports land use and land cover mapping, temporal change detection, and environmental monitoring, and is used in combination with GIS-based spatial modeling. Nevertheless, despite major progress, issues remain with data heterogeneity, mixed pixels, computational requirements, and the inadequate availability of labelled data. Future directions involve multi-source data fusion, pixel-to-parcel pipeline automation, and AI-based predictive models to enhance scalability, robustness, and real-time monitoring capability. This review underscores the role of image segmentation as a tool for advancing precision agriculture, sustainable land use, and informed geospatial decision-making.
Automatic and accurate medical image segmentation remains a fundamental task in computer-aided diagnosis and treatment planning. Recent advances in foundation models, such as the medical-focused Segment Anything Model (MedSAM), have demonstrated strong performance but face challenges in many medical applications due to anatomical complexity and limited domain-specific prompts. This work introduces a methodology that enhances segmentation robustness and precision by automatically generating multiple informative point prompts rather than relying on single inputs. The proposed approach randomly samples sets of spatially distributed point prompts based on image features, enabling MedSAM to better capture fine-grained anatomical structures and boundaries. During inference, probability maps are aggregated to reduce local misclassifications without additional model training. Extensive experiments on various computed tomography (CT) and magnetic resonance imaging (MRI) datasets demonstrate improvements in Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD) metrics compared with baseline SAM and Scribble Prompt models. A semi-automatic point sampling version based on ground truth segmentations yielded further enhanced results, achieving up to 92.1% DSC and 86.6% NSD, with significant gains in delineating complex organs such as the pancreas, colon, kidney, and brain tumours. The main novelty of our method lies in effectively combining the results of multiple point prompts within the medical segmentation pipeline so that single-point prompt methods are outperformed. Overall, the proposed model offers a straightforward yet effective approach to improving medical image segmentation performance while maintaining computational efficiency.
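The inference-time aggregation described above can be pictured as averaging the per-prompt probability maps before thresholding. A toy sketch, assuming uniform averaging and a 0.5 threshold (the paper's exact aggregation details may differ):

```python
def aggregate_prompts(prob_maps, threshold=0.5):
    """Average per-prompt probability maps pixel-wise, then binarize.
    prob_maps: list of equally sized 2D lists with values in [0, 1]."""
    h, w = len(prob_maps[0]), len(prob_maps[0][0])
    mask = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            mean_p = sum(m[i][j] for m in prob_maps) / len(prob_maps)
            mask[i][j] = 1 if mean_p >= threshold else 0
    return mask

# Three hypothetical prompt runs on a 2x2 image: averaging suppresses
# the spurious high probability produced by a single run at pixel (0, 1).
maps = [
    [[0.9, 0.2], [0.8, 0.1]],
    [[0.8, 0.9], [0.7, 0.2]],   # one run misfires on pixel (0, 1)
    [[0.9, 0.1], [0.9, 0.3]],
]
print(aggregate_prompts(maps))  # [[1, 0], [1, 0]]
```

Averaging acts as a simple ensemble over prompt placements, which is why local misclassifications from any single prompt are damped.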
High-resolution remote sensing images (HRSIs) are now an essential data source for gathering surface information due to advancements in remote sensing data capture technologies. However, their significant scale changes and wealth of spatial details pose challenges for semantic segmentation. While convolutional neural networks (CNNs) excel at capturing local features, they are limited in modeling long-range dependencies. Conversely, transformers utilize multihead self-attention to integrate global context effectively, but this approach often incurs a high computational cost. This paper proposes a global-local multiscale context network (GLMCNet) to extract both global and local multiscale contextual information from HRSIs. A detail-enhanced filtering module (DEFM) is proposed at the end of the encoder to further refine the encoder outputs, enhancing the key details extracted by the encoder and effectively suppressing redundant information. In addition, a global-local multiscale transformer block (GLMTB) is proposed in the decoding stage to enable the modeling of rich multiscale global and local information. We also design a stair fusion mechanism to transmit deep semantic information progressively from deep to shallow layers. Finally, we propose the semantic awareness enhancement module (SAEM), which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention. Extensive ablation analyses and comparative experiments were conducted to evaluate the performance of the proposed method. Specifically, our method achieved a mean Intersection over Union (mIoU) of 86.89% on the ISPRS Potsdam dataset and 84.34% on the ISPRS Vaihingen dataset, outperforming existing models such as ABCNet and BANet.
Satellite image segmentation plays a crucial role in remote sensing, supporting applications such as environmental monitoring, land use analysis, and disaster management. However, traditional segmentation methods often rely on large amounts of labeled data, which are costly and time-consuming to obtain, especially in large-scale or dynamic environments. To address this challenge, we propose the Semi-Supervised Multi-View Picture Fuzzy Clustering (SS-MPFC) algorithm, which improves segmentation accuracy and robustness, particularly in complex and uncertain remote sensing scenarios. SS-MPFC unifies three paradigms: semi-supervised learning, multi-view clustering, and picture fuzzy set theory. This integration allows the model to effectively utilize a small number of labeled samples, fuse complementary information from multiple data views, and handle the ambiguity and uncertainty inherent in satellite imagery. We design a novel objective function that jointly incorporates picture fuzzy membership functions across multiple views of the data, and embeds pairwise semi-supervised constraints (must-link and cannot-link) directly into the clustering process to enhance segmentation accuracy. Experiments conducted on several benchmark satellite datasets demonstrate that SS-MPFC significantly outperforms existing state-of-the-art methods in segmentation accuracy, noise robustness, and semantic interpretability. On the Augsburg dataset, SS-MPFC achieves a Purity of 0.8158 and an Accuracy of 0.6860, highlighting its robustness and efficiency. These results demonstrate that SS-MPFC offers a scalable and effective solution for real-world satellite-based monitoring systems, particularly in scenarios where rapid annotation is infeasible, such as wildfire tracking, agricultural monitoring, and dynamic urban mapping.
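Purity, one of the metrics reported above, matches each predicted cluster to its majority ground-truth label and counts the fraction of points so covered. A minimal sketch on toy labels:

```python
from collections import Counter

def purity(labels_true, labels_pred):
    """Cluster purity: assign each predicted cluster its majority true
    label and report the fraction of points correctly covered."""
    clusters = {}
    for t, p in zip(labels_true, labels_pred):
        clusters.setdefault(p, []).append(t)
    matched = sum(Counter(members).most_common(1)[0][1]
                  for members in clusters.values())
    return matched / len(labels_true)

# Toy example: two predicted clusters over six labeled pixels; one pixel
# of class 0 ends up in the wrong cluster, so purity is 5/6.
print(purity([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 1]))
```

Note that purity can exceed plain accuracy (as in the reported 0.8158 vs. 0.6860), because it rewards any consistent cluster-label mapping rather than a fixed one.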
Multilevel image segmentation is a critical task in image analysis, which imposes high requirements on the global search capability and convergence efficiency of segmentation algorithms. In this paper, an improved Artificial Protozoa Optimization algorithm, termed the two-stage Taguchi-assisted Gaussian–Levy Artificial Protozoa Optimization (TGAPO) algorithm, is proposed and applied to multilevel image segmentation. The proposed algorithm adopts a two-stage evolutionary mechanism. In the first stage, Gaussian perturbation is introduced to enhance local search capability; in the second stage, Levy flight is incorporated to expand the global search range; finally, the Taguchi strategy is employed to further refine the optimal solution. Consequently, the global optimization performance and robustness of the algorithm are significantly improved. To evaluate the effectiveness of the proposed TGAPO algorithm, comparative experiments are conducted with representative optimization algorithms, including the Grey Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO), in the context of multilevel image segmentation. Segmentation quality is assessed using the minimum cross-entropy function as the performance metric. Experimental results demonstrate that the TGAPO algorithm outperforms the comparison algorithms in terms of segmentation accuracy and convergence speed, and exhibits superior stability in high-threshold segmentation tasks. Furthermore, the proposed method achieves excellent multi-threshold segmentation performance for color images and shows strong potential for practical applications.
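The minimum cross-entropy fitness function can be sketched compactly. The following is a standard Li-style formulation over a made-up histogram (not the paper's exact implementation); an optimizer such as TGAPO would search for the threshold set minimizing this value:

```python
import math

def min_cross_entropy(hist, thresholds):
    """Li-style minimum cross-entropy objective for a gray-level histogram.
    hist[g-1] is the pixel count at gray level g (levels start at 1 so the
    log stays defined); thresholds are sorted interior cut points splitting
    the level range into regions. Lower values indicate better thresholds."""
    bounds = [1] + list(thresholds) + [len(hist) + 1]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        levels = range(lo, hi)
        mass = sum(hist[g - 1] for g in levels)
        if mass == 0:
            continue  # empty region contributes nothing
        mean = sum(g * hist[g - 1] for g in levels) / mass
        total += sum(hist[g - 1] * g * math.log(g / mean) for g in levels)
    return total

# Toy 4-level histogram with modes at levels 1 and 4: cutting between the
# modes (threshold 3) scores lower than cutting inside a mode (threshold 2).
hist = [40, 10, 10, 40]
print(min_cross_entropy(hist, [3]) < min_cross_entropy(hist, [2]))  # True
```

For multilevel segmentation the thresholds list simply grows, and the search space grows combinatorially, which is what motivates metaheuristic optimizers here.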
Recent studies indicate that millions of individuals suffer from renal diseases, with renal carcinoma, a type of kidney cancer, emerging as both a chronic illness and a significant cause of mortality. Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) have become essential tools for diagnosing and assessing kidney disorders. However, accurate analysis of these medical images is critical for detecting and evaluating tumor severity. This study introduces an integrated hybrid framework that combines three complementary deep learning models for kidney tumor segmentation from MRI images. The proposed framework fuses a customized U-Net and Mask R-CNN using a weighted scheme to achieve semantic and instance-level segmentation. The fused outputs are further refined through edge detection using Stochastic Feature Mapping Neural Networks (SFMNN), while volumetric consistency is ensured through Improved Mini-Batch K-Means (IMBKM) clustering integrated with an Encoder-Decoder Convolutional Neural Network (EDCNN). The outputs of these three stages are combined through a weighted fusion mechanism, with optimal weights determined empirically. Experiments on MRI scans from the TCGA-KIRC dataset demonstrate that the proposed hybrid framework significantly outperforms standalone models, achieving a Dice Score of 92.5%, an IoU of 87.8%, a Precision of 93.1%, a Recall of 90.8%, and a Hausdorff Distance of 2.8 mm. These findings validate that the weighted integration of complementary architectures effectively overcomes key limitations in kidney tumor segmentation, leading to improved diagnostic accuracy and robustness in medical image analysis.
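The weighted fusion mechanism can be sketched as a per-pixel weighted sum of the stage outputs followed by thresholding. The variable names and weights below are illustrative placeholders, not the paper's empirically determined values:

```python
def weighted_fusion(prob_maps, weights, threshold=0.5):
    """Fuse per-model probability maps with a weighted sum, then binarize.
    prob_maps: list of equally sized 2D lists; weights: one per model."""
    h, w = len(prob_maps[0]), len(prob_maps[0][0])
    fused = [[sum(wt * m[i][j] for wt, m in zip(weights, prob_maps))
              for j in range(w)] for i in range(h)]
    return [[1 if p >= threshold else 0 for p in row] for row in fused]

# Three hypothetical stage outputs on a 1x3 strip of pixels.
stage_a = [[0.9, 0.6, 0.1]]
stage_b = [[0.8, 0.3, 0.2]]
stage_c = [[0.7, 0.4, 0.9]]
print(weighted_fusion([stage_a, stage_b, stage_c], [0.5, 0.3, 0.2]))
```

In practice the weights would be tuned on validation data (e.g., by grid search over the simplex), which is one reading of "optimal weights determined empirically".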
The Transformer has achieved great success in the field of medical image segmentation, but its quadratic computational complexity limits its application in dense medical image prediction. Recently, the receptance weighted key value (RWKV) architecture has garnered widespread attention due to its linear computational complexity and its capability for parallel computation during training. Despite the RWKV model's proficiency in addressing long-range modeling tasks with linear computational complexity, most current RWKV-based approaches employ static scanning patterns. These patterns may inadvertently incorporate biased prior knowledge into the model's predictions. To address this challenge, we propose a multi-head scan strategy combined with padding methods to effectively simulate spatial continuity in 2D images. Within the Feature Aggregation Attention (FAA) module, asymmetric convolutions are designed to aggregate 1D sequence features along a single dimension, thereby expanding effective receptive fields while preserving structural sparsity. Additionally, panoramic token shift (P-Shift) effectively models local dependency relationships by moving tokens from a wide receptive field. Extensive experiments conducted on the ISIC17/18 and ACDC datasets demonstrate that our method exhibits superior performance in dense medical image prediction tasks.
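Basic token shift, the mechanism P-Shift builds on, substitutes part of each token's channels with its neighbor's. The 1D sketch below shows only this basic idea, not the paper's panoramic variant, and the shift fraction is illustrative:

```python
def token_shift(tokens, shift_frac=0.25):
    """Token-shift sketch (in the spirit of RWKV's time shift): replace
    the leading fraction of each token's channels with the previous
    token's channels, mixing in local context at zero parameter cost.
    The first token is padded with zeros."""
    d = len(tokens[0])
    k = max(1, int(d * shift_frac))
    out, prev = [], [0.0] * d
    for tok in tokens:
        out.append(prev[:k] + tok[k:])
        prev = tok
    return out

# One quarter of each token now carries its left neighbor's features.
print(token_shift([[1.0, 1.0, 1.0, 1.0], [2.0, 2.0, 2.0, 2.0]]))
```

A panoramic variant would draw the shifted channels from a wider neighborhood of tokens rather than only the immediate predecessor.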
Semantic segmentation plays a foundational role in biomedical image analysis, providing precise information about cellular, tissue, and organ structures in both biological and medical imaging modalities. Traditional approaches often fail in the face of challenges such as low contrast, morphological variability, and densely packed structures. Recent advancements in deep learning have transformed segmentation capabilities through the integration of fine-scale detail preservation, coarse-scale contextual modeling, and multi-scale feature fusion. This work provides a comprehensive analysis of state-of-the-art deep learning models, including U-Net variants, attention-based frameworks, and Transformer-integrated networks, highlighting innovations that improve accuracy, generalizability, and computational efficiency. Key architectural components such as convolution operations, shallow and deep blocks, skip connections, and hybrid encoders are examined for their roles in enhancing spatial representation and semantic consistency. We further discuss the importance of hierarchical and instance-aware segmentation and annotation in interpreting complex biological scenes and multiplexed medical images. By bridging methodological developments with diverse application domains, this paper outlines current trends and future directions for semantic segmentation, emphasizing its critical role in facilitating annotation, diagnosis, and discovery in biomedical research.
This paper proposes an image segmentation method that combines wavelet multi-scale edge detection with entropy-based iterative threshold selection. The image is separated into high- and low-frequency parts: wavelet multi-scale analysis detects edges in the high-frequency part, while the entropy-based iterative threshold selection method segments the low-frequency part. Taking both image edges and regions into account, a thoracic CT image was chosen to test the proposed method on lung segmentation. Experimental results show that the method segments the region of interest more effectively than conventional methods.
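The entropy-based threshold step in the abstract above can be illustrated with a maximum-entropy (Kapur-style) selection over the gray-level histogram. This is a minimal sketch of the general technique, not the authors' exact iterative scheme; the function name and the toy bimodal image are assumptions:

```python
import numpy as np

def entropy_threshold(image, levels=256):
    """Choose the gray level that maximizes the summed entropies of the
    background and foreground histograms (Kapur-style criterion)."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_h = 1, -np.inf
    for t in range(1, levels - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # skip degenerate splits with an empty side
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = (-np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
             - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0])))
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Toy bimodal image: dark mode near gray level 40, bright mode near 200.
rng = np.random.default_rng(0)
pix = np.concatenate([rng.normal(40, 5, 500), rng.normal(200, 5, 500)])
img = np.clip(pix, 0, 255).astype(np.uint8)
t = entropy_threshold(img)
print(t)  # lands in the valley between the two modes
```

For two well-separated modes the summed entropy is maximized when neither cluster is split, so the selected threshold falls in the gap between them.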
Image segmentation is attracting increasing attention in the field of medical image analysis. Given its widespread utilization across various medical applications, ensuring and improving segmentation accuracy has become a crucial research topic. With advances in deep learning, researchers have developed numerous methods that combine Transformers and convolutional neural networks (CNNs) to create highly accurate models for medical image segmentation. However, efforts to further enhance accuracy by developing larger and more complex models, or by training on more extensive datasets, significantly increase computational resource consumption. To address this problem, we propose BiCLIP-nnFormer (the prefix "Bi" refers to the use of two distinct CLIP models), a virtual multimodal instrument that leverages CLIP models to enhance the segmentation performance of the medical segmentation model nnFormer. Since the two CLIP models (PMC-CLIP and CoCa-CLIP) are pre-trained on large datasets, they require no additional training, thus conserving computational resources. These models are used offline to extract image and text embeddings from medical images. The embeddings are then processed by the proposed 3D CLIP adapter, which adapts the CLIP knowledge to segmentation tasks through fine-tuning. Finally, the adapted embeddings are fused with feature maps extracted from the nnFormer encoder to generate predicted masks. This process enriches the representation capabilities of the feature maps by integrating global multimodal information, leading to more precise segmentation predictions. We demonstrate the superiority of BiCLIP-nnFormer, and the effectiveness of using CLIP models to enhance nnFormer, through experiments on two public datasets, namely the Synapse multi-organ segmentation dataset (Synapse) and the Automatic Cardiac Diagnosis Challenge dataset (ACDC), as well as a self-annotated lung multi-category segmentation dataset (LMCS).
Funding: This work was supported by the Natural Science Foundation of China (Grant No. 62071098), the Sichuan Science and Technology Program (Grant Nos. 2019YFG0191 and 2021YFG0307), and the Sichuan Zizhou Agricultural Science and Technology Co., Ltd. project "Internet + Smart Zanthoxylum Planting Weather Risk Warning System".
Abstract: Zanthoxylum bungeanum Maxim, generally called prickly ash, is widely grown in China. Zanthoxylum rust is the main disease affecting the growth and quality of Zanthoxylum. Traditional methods for recognizing the degree of infection of Zanthoxylum rust rely mainly on manual experience. Due to the complex colors and shapes of rust areas, manual recognition has low accuracy and is difficult to quantify. In recent years, the application of artificial intelligence technology in agriculture has gradually increased. In this paper, building on the DeepLabV2 model, we propose a Zanthoxylum rust image segmentation model based on the FASPP module and enhanced features of rust areas. We also constructed a fine-grained Zanthoxylum rust image dataset, in which rust images were segmented and labeled according to leaves, spore piles, and brown lesions. The experimental results show that the proposed segmentation method is effective: the segmentation accuracy rates of leaves, spore piles, and brown lesions reached 99.66%, 85.16%, and 82.47%, respectively; MPA reached 91.80% and MIoU reached 84.99%. The proposed model is also efficient, processing 22 images per minute. This article provides an intelligent method for efficiently and accurately recognizing the degree of infection of Zanthoxylum rust.
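The MPA and MIoU figures quoted above are standard metrics derived from a per-class confusion matrix; a minimal sketch of how they are typically computed (the matrix values below are invented for illustration, not taken from the paper):

```python
import numpy as np

def mpa_miou(conf):
    """Mean pixel accuracy (MPA) and mean IoU (MIoU) from a KxK confusion
    matrix, where conf[i, j] counts pixels of class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    per_class_acc = tp / conf.sum(axis=1)                      # recall per class
    iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)      # TP/(TP+FP+FN)
    return per_class_acc.mean(), iou.mean()

# Hypothetical 3-class confusion matrix (rows: ground truth, cols: prediction).
conf = np.array([[50, 2, 1],
                 [3, 40, 2],
                 [1, 1, 30]])
mpa, miou = mpa_miou(conf)
print(round(mpa, 4), round(miou, 4))
```

MPA averages the per-class recall, while MIoU averages intersection-over-union, which also penalizes false positives, so MIoU is never above MPA for the same matrix.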
Funding: Supported by the Natural Science Foundation of the Anhui Higher Education Institutions of China (Grant Nos. 2023AH040149 and 2024AH051915), the Anhui Provincial Natural Science Foundation (Grant No. 2208085MF168), the Science and Technology Innovation Tackle Plan Project of Maanshan (Grant No. 2024RGZN001), and the Scientific Research Fund Project of Anhui Medical University (Grant No. 2023xkj122).
Abstract: Convolutional neural network (CNN)-based technologies have been widely used in medical image segmentation because of their strong representation and generalization abilities. However, because CNNs cannot effectively capture global information from images, they can easily lose contours and textures in segmentation results. The Transformer model, by contrast, effectively captures long-range dependencies in the image, and combining a CNN with a Transformer can extract both local details and global contextual features. Motivated by this, we propose a multi-branch and multi-scale attention network (M2ANet) for medical image segmentation, whose architecture consists of three components. In the first component, we construct an adaptive multi-branch patch module for parallel extraction of image features to reduce information loss caused by downsampling. In the second component, we apply a residual block to the well-known convolutional block attention module to enhance the network's ability to recognize important image features and to alleviate gradient vanishing. In the third component, we design a multi-scale feature fusion module, in which adaptive average pooling and position encoding enhance contextual features, and multi-head attention is then introduced to further enrich feature representation. Finally, we validate the effectiveness and feasibility of the proposed M2ANet through comparative experiments on four benchmark medical image segmentation datasets, particularly in the context of preserving contours and textures.
Funding: Supported by the National Key R&D Program of China (No. 2022YFC2504403), the National Natural Science Foundation of China (No. 62172202), the Experiment Project of the China Manned Space Program (No. HYZHXM01019), and the Fundamental Research Funds for the Central Universities from Southeast University (No. 3207032101C3).
Abstract: Organoids possess immense potential for unraveling the intricate functions of human tissues and facilitating preclinical disease treatment. Their applications span from high-throughput drug screening to the modeling of complex diseases, with some even achieving clinical translation. Changes in the overall size, shape, boundary, and other morphological features of organoids provide a noninvasive method for assessing organoid drug sensitivity. However, the precise segmentation of organoids in bright-field microscopy images is made difficult by the complexity of organoid morphology and by interference, including overlapping organoids, bubbles, dust particles, and cell fragments. This paper introduces the precision organoid segmentation technique (POST), a deep-learning algorithm for segmenting challenging organoids under simple bright-field imaging conditions. Unlike existing methods, POST accurately segments each organoid and eliminates various artifacts encountered during organoid culturing and imaging. Furthermore, it is sensitive to, and aligns with, measurements of organoid activity in drug sensitivity experiments. POST is expected to be a valuable tool for drug screening using organoids owing to its capability of automatically and rapidly eliminating interfering substances, thereby streamlining organoid analysis and the drug screening process.
Funding: Supported by the Natural Science Foundation Programme of Gansu Province (No. 24JRRA231), the National Natural Science Foundation of China (No. 62061023), and the Gansu Provincial Science and Technology Plan Key Research and Development Program Project (No. 24YFFA024).
Abstract: Despite its remarkable performance on natural images, the segment anything model (SAM) lacks domain-specific information in medical imaging and faces the challenge of losing local multi-scale information in the encoding phase. This paper presents a medical image segmentation model based on SAM with a local multi-scale feature encoder (LMSFE-SAM) to address these issues. First, a local multi-scale feature encoder is introduced on top of SAM to improve the representation of features within the local receptive field, thereby supplying the Vision Transformer (ViT) branch in SAM with enriched local multi-scale contextual information. At the same time, a multiaxial Hadamard product module (MHPM) is incorporated into the local multi-scale feature encoder in a lightweight manner to reduce quadratic complexity and noise interference. Subsequently, a cross-branch balancing adapter is designed to balance the local and global information between the local multi-scale feature encoder and the ViT encoder in SAM. Finally, to allow a smaller input image size and to mitigate overlap in patch embeddings, the input image size is reduced from 1024×1024 pixels to 256×256 pixels, and a multidimensional information adaptation component is developed, which includes feature adapters, position adapters, and channel-spatial adapters. This component effectively integrates the information from small-sized medical images into SAM, enhancing its suitability for clinical deployment. The proposed model demonstrates an average enhancement ranging from 0.0387 to 0.3191 across six objective evaluation metrics on the BUSI, DDTI, and TN3K datasets compared with eight other representative image segmentation models. This significantly enhances the performance of SAM on medical images, providing clinicians with a powerful tool for clinical diagnosis.
Funding: Supported by the Guangdong Pharmaceutical University 2024 Higher Education Research Projects (GKP202403 and GMP202402) and the Guangdong Pharmaceutical University College Students' Innovation and Entrepreneurship Training Programs (Grant Nos. 202504302033, 202504302034, 202504302036, and 202504302244).
Abstract: Background: Diabetic macular edema is a prevalent retinal condition and a leading cause of visual impairment among diabetic patients. Early detection of affected areas is beneficial for effective diagnosis and treatment. Traditionally, diagnosis relies on optical coherence tomography imaging interpreted by ophthalmologists. However, this manual image interpretation is often slow and subjective, so developing automated segmentation for macular edema images is essential to improve diagnostic efficiency and accuracy. Methods: We propose a SegNet network structure integrated with a convolutional block attention module (CBAM). This network introduces a multi-scale input module, the CBAM attention mechanism, and skip connections. The multi-scale input module enhances the network's perceptual capabilities, while the lightweight CBAM effectively fuses relevant features across channel and spatial dimensions, allowing the network to better learn information at varying levels. Results: Experimental results demonstrate that the proposed network achieves an IoU of 80.127% and an accuracy of 99.162%. Compared with traditional segmentation networks, this model has fewer parameters, faster training and testing speed, and superior performance on semantic segmentation tasks, indicating high practical applicability. Conclusion: The C-SegNet proposed in this study enables accurate segmentation of diabetic macular edema lesion images, facilitating quicker diagnosis for healthcare professionals.
Funding: Financially supported by the Open Project Program of Wuhan National Laboratory for Optoelectronics (No. 2022WNLOKF009), the National Natural Science Foundation of China (No. 62475216), the Key Research and Development Program of Shaanxi (No. 2024GH-ZDXM-37), the Fujian Provincial Natural Science Foundation of China (No. 2024J01060), the Startup Program of XMU, and the Fundamental Research Funds for the Central Universities.
Abstract: Microscopy imaging is fundamental in analyzing bacterial morphology and dynamics, offering critical insights into bacterial physiology and pathogenicity. Image segmentation techniques enable quantitative analysis of bacterial structures, facilitating precise measurement of morphological variations and population behaviors at single-cell resolution. This paper reviews advancements in bacterial image segmentation, emphasizing the shift from traditional thresholding and watershed methods to deep learning-driven approaches. Convolutional neural networks (CNNs), U-Net architectures, and three-dimensional (3D) frameworks excel at segmenting dense biofilms and resolving antibiotic-induced morphological changes. These methods combine automated feature extraction with physics-informed postprocessing. Despite progress, challenges persist in computational efficiency, cross-species generalizability, and integration with multimodal experimental workflows. Future progress will depend on improving model robustness across species and imaging modalities, integrating multimodal data for phenotype-function mapping, and developing standard pipelines that link computational tools with clinical diagnostics. These innovations will expand microbial phenotyping beyond structural analysis, enabling deeper insights into bacterial physiology and ecological interactions.
Abstract: Medical image segmentation is of critical importance in the domain of contemporary medical imaging. However, U-Net and its variants exhibit limitations in capturing complex nonlinear patterns and global contextual information. Although the subsequent U-KAN model enhances nonlinear representation capabilities, it still faces challenges such as gradient vanishing during deep network training and spatial detail loss during feature downsampling, resulting in insufficient segmentation accuracy for edge structures and minute lesions. To address these challenges, this paper proposes the RE-UKAN model, which innovatively improves upon U-KAN. First, a residual network is introduced into the encoder to effectively mitigate gradient vanishing through cross-layer identity mappings, thus enhancing modelling capabilities for complex pathological structures. Second, Efficient Local Attention (ELA) is integrated to suppress spatial detail loss during downsampling, thereby improving the perception of edge structures and minute lesions. Experimental results on four public datasets demonstrate that RE-UKAN outperforms existing medical image segmentation methods across multiple evaluation metrics, with particularly outstanding performance on the TN-SCUI 2020 dataset, achieving an IoU of 88.18% and a Dice of 93.57%. Compared with the baseline model, it achieves improvements of 3.05% and 1.72%, respectively. These results fully demonstrate RE-UKAN's superior detail retention and boundary recognition accuracy in complex medical image segmentation tasks, providing a reliable solution for clinical precision segmentation.
Funding: Supported by the National Key Research and Development Program of China (No. 2022YFB2503400).
Abstract: Semantic segmentation for mixed scenes of aerial remote sensing and road traffic is one of the key technologies for the visual perception of flying cars. State-of-the-art (SOTA) semantic segmentation methods have made remarkable achievements in both fine-grained segmentation and real-time performance. However, when faced with the huge differences in scale and semantic categories brought about by mixed aerial remote sensing and road traffic scenes, they still face great challenges, and there is little related research. Addressing this issue, this paper proposes a semantic segmentation model specifically for mixed datasets of aerial remote sensing and road traffic scenes. First, a novel decoding-recoding multi-scale feature iterative refinement structure is proposed, which utilizes the re-integration and continuous enhancement of multi-scale information to effectively deal with the huge scale differences between cross-domain scenes, while using a fully convolutional structure to ensure lightweight operation and real-time performance. Second, a well-designed cross-window attention mechanism combined with a global information integration decoding block forms enhanced global context perception, which effectively captures the long-range dependencies and multi-scale global context information of different scenes, thereby achieving fine-grained semantic segmentation. The proposed method is tested on a large-scale mixed dataset of aerial remote sensing and road traffic scenes. The results confirm that it effectively handles large scale differences in cross-domain scenes; its segmentation accuracy surpasses that of the SOTA methods while meeting real-time requirements.
Funding: Funded by the National Natural Science Foundation of China, grant numbers 52374156 and 62476005.
Abstract: Images taken in dim environments frequently exhibit issues such as insufficient brightness, noise, color shifts, and loss of detail. These problems pose significant challenges for dark image enhancement. Current approaches, while effective at global illumination modeling, often struggle to simultaneously suppress noise and preserve structural details, especially under heterogeneous lighting. Furthermore, misalignment between the luminance and color channels introduces additional challenges to accurate enhancement. In response to these difficulties, we introduce a single-stage framework, M2ATNet, built on multi-scale multi-attention and a Transformer architecture. First, to address texture blurring and residual noise, we design a multi-scale multi-attention denoising module (MMAD), which is applied separately to the luminance and color channels to enhance structural and texture modeling. Second, to solve the misalignment of the luminance and color channels, we introduce a multi-channel feature fusion Transformer (CFFT) module, which effectively recovers dark details and corrects color shifts through cross-channel alignment and deep feature interaction. To guide the model toward more stable and efficient learning, we also fuse multiple types of loss functions into a hybrid loss term. We extensively evaluate the proposed method on standard datasets, including LOL-v1, LOL-v2, DICM, LIME, and NPE. Evaluation in terms of numerical metrics and visual quality demonstrates that M2ATNet consistently outperforms existing advanced approaches. Ablation studies further confirm the critical roles of the MMAD and CFFT modules in preserving detail and visual fidelity under challenging illumination-deficient environments.
Funding: National Natural Science Foundation of China under Grant Nos. 62171047, U22B2001, 62271065, and 62001051; Beijing Natural Science Foundation under Grant L223027; BUPT Excellent Ph.D. Students Foundation under Grant CX2021114.
Abstract: This article studies the problem of image segmentation-based semantic communication in autonomous driving. In real traffic scenes, detecting objects (e.g., vehicles and pedestrians) is especially important for guaranteeing driving safety, which is often ignored in existing works. Therefore, we propose a vehicular image segmentation-oriented semantic communication system, termed VIS-SemCom, which focuses on transmitting and recovering the image semantic features of high-importance objects to reduce transmission redundancy. First, we develop a semantic codec based on the Swin Transformer architecture, which expands the perceptual field and thus improves segmentation accuracy. To raise the accuracy on important objects, we propose a multi-scale semantic extraction method that assigns the number of Swin Transformer blocks per resolution of semantic features. An importance-aware loss incorporating importance levels is also devised, and an online hard example mining (OHEM) strategy is proposed to handle small-sample issues in the dataset. Finally, experimental results demonstrate that, compared with baseline image communication, the proposed VIS-SemCom achieves significant mean intersection over union (mIoU) performance across the SNR regions, reduces the transmitted data volume by about 60% at 60% mIoU, and improves the segmentation accuracy of important objects.
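The OHEM strategy mentioned above amounts to backpropagating only the hardest examples. A minimal pixel-level sketch, assuming a simple top-k selection rule (the keep ratio and function name are illustrative, not from the paper):

```python
import numpy as np

def ohem_mask(pixel_loss, keep_ratio=0.25):
    """Keep only the hardest fraction of pixels (highest per-pixel loss)
    for backpropagation; the rest are masked out. Ties at the cutoff
    value may keep slightly more than the requested fraction."""
    flat = pixel_loss.ravel()
    k = max(1, int(flat.size * keep_ratio))
    thresh = np.partition(flat, -k)[-k]   # k-th largest loss value
    return pixel_loss >= thresh

# Hypothetical per-pixel cross-entropy losses for a 2x4 patch.
loss = np.array([[0.10, 0.90, 0.20, 0.80],
                 [0.05, 0.70, 0.30, 0.60]])
mask = ohem_mask(loss, keep_ratio=0.25)
print(mask.sum())  # number of "hard" pixels kept
```

In training, the masked loss (`loss[mask].mean()`) replaces the full mean so easy, abundant pixels do not dominate the gradient.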
Funding: Supported under the 2024 Foshan City Self-Funded Science and Technology Innovation Project "Research on Image Segmentation Technology Based on Convolutional Neural Networks in Crop Images" (Project No. 2420001004686).
Abstract: The rising need for precision farming and sustainable land management has catalyzed the requirement for sophisticated means of deriving practical data from remote sensing images. Image segmentation, the process of dividing an image into semantically relevant parts, has become a groundbreaking technology for moving from pixel-level data to parcel-level analysis. This review synthesizes segmentation methods and their use in crop research and geospatial science. The architectures of pixel-based, object-based, and deep learning approaches (convolutional neural networks, U-Net, Mask R-CNN, and Transformer models) are considered in terms of principles, capabilities, and limitations. Multi-spectral, hyperspectral, LiDAR, and SAR data are integrated to improve segmentation efficiency, enabling field delineation, crop classification, health monitoring, yield monitoring, and stress identification. Beyond agriculture, segmentation supports land use and land cover mapping, temporal change detection, and environmental monitoring, and is used in combination with GIS-based spatial modeling. Nevertheless, issues related to data heterogeneity, mixed pixels, computational requirements, and inadequate availability of labelled data persist despite major progress. Future directions involve multi-source data fusion, pixel-to-parcel pipeline automation, and AI-based predictive models that enhance scalability, robustness, and real-time monitoring capability. This review makes clear that image segmentation is a key tool in enabling precision agriculture, sustainable land use, and informed geospatial analysis.
Funding: Supported by the Autonomous Government of Andalusia (Spain) under project UMA20-FEDERJA-108 and by the Ministry of Science and Innovation of Spain, grant number PID2022-136764OA-I00, which includes funds from the European Regional Development Fund (ERDF). It is also partially supported by the Fundación Unicaja (PUNI-003_2023) and the Instituto de Investigación Biomédica de Málaga y Plataforma en Nanomedicina-IBIMA Plataforma BIONAND (ATECH-25-02).
Abstract: Automatic and accurate medical image segmentation remains a fundamental task in computer-aided diagnosis and treatment planning. Recent advances in foundation models, such as the medical-focused Segment Anything Model (MedSAM), have demonstrated strong performance but face challenges in many medical applications due to anatomical complexity and limited domain-specific prompts. This work introduces a methodology that enhances segmentation robustness and precision by automatically generating multiple informative point prompts rather than relying on a single input. The proposed approach randomly samples sets of spatially distributed point prompts based on image features, enabling MedSAM to better capture fine-grained anatomical structures and boundaries. During inference, probability maps are aggregated to reduce local misclassifications without additional model training. Extensive experiments on various computed tomography (CT) and magnetic resonance imaging (MRI) datasets demonstrate improvements in the Dice Similarity Coefficient (DSC) and Normalized Surface Dice (NSD) metrics compared with baseline SAM and Scribble Prompt models. A semi-automatic point sampling version based on the ground-truth segmentations yielded enhanced results, achieving up to 92.1% DSC and 86.6% NSD, with significant gains in delineating complex organs such as the pancreas, colon, kidney, and brain tumours. The main novelty of our method consists in effectively combining the results of multiple point prompts within the medical segmentation pipeline so that single-point prompt methods are outperformed. Overall, the proposed model offers a straightforward yet effective approach to improving medical image segmentation performance while maintaining computational efficiency.
Funding: Provided by the Science Research Project of Hebei Education Department under grant No. BJK2024115.
Abstract: High-resolution remote sensing images (HRSIs) are now an essential data source for gathering surface information due to advancements in remote sensing data capture technologies. However, their significant scale changes and wealth of spatial details pose challenges for semantic segmentation. While convolutional neural networks (CNNs) excel at capturing local features, they are limited in modeling long-range dependencies. Conversely, Transformers utilize multi-head self-attention to integrate global context effectively, but this approach often incurs a high computational cost. This paper proposes a global-local multiscale context network (GLMCNet) to extract both global and local multiscale contextual information from HRSIs. A detail-enhanced filtering module (DEFM) is proposed at the end of the encoder to further refine the encoder outputs, thereby enhancing the key details extracted by the encoder and effectively suppressing redundant information. In addition, a global-local multiscale transformer block (GLMTB) is proposed in the decoding stage to enable the modeling of rich multiscale global and local information. We also design a stair fusion mechanism to progressively transmit deep semantic information from deep to shallow layers. Finally, we propose the semantic awareness enhancement module (SAEM), which further enhances the representation of multiscale semantic features through spatial attention and covariance channel attention. Extensive ablation analyses and comparative experiments were conducted to evaluate the performance of the proposed method. Specifically, our method achieved a mean Intersection over Union (mIoU) of 86.89% on the ISPRS Potsdam dataset and 84.34% on the ISPRS Vaihingen dataset, outperforming existing models such as ABCNet and BANet.
Funding: Funded by research project THTETN.05/24-25, Vietnam Academy of Science and Technology.
Abstract: Satellite image segmentation plays a crucial role in remote sensing, supporting applications such as environmental monitoring, land use analysis, and disaster management. However, traditional segmentation methods often rely on large amounts of labeled data, which are costly and time-consuming to obtain, especially in large-scale or dynamic environments. To address this challenge, we propose the Semi-Supervised Multi-View Picture Fuzzy Clustering (SS-MPFC) algorithm, which improves segmentation accuracy and robustness, particularly in complex and uncertain remote sensing scenarios. SS-MPFC unifies three paradigms: semi-supervised learning, multi-view clustering, and picture fuzzy set theory. This integration allows the model to effectively utilize a small number of labeled samples, fuse complementary information from multiple data views, and handle the ambiguity and uncertainty inherent in satellite imagery. We design a novel objective function that jointly incorporates picture fuzzy membership functions across multiple views of the data and embeds pairwise semi-supervised constraints (must-link and cannot-link) directly into the clustering process to enhance segmentation accuracy. Experiments conducted on several benchmark satellite datasets demonstrate that SS-MPFC significantly outperforms existing state-of-the-art methods in segmentation accuracy, noise robustness, and semantic interpretability. On the Augsburg dataset, SS-MPFC achieves a Purity of 0.8158 and an Accuracy of 0.6860, highlighting its robustness and efficiency. These results demonstrate that SS-MPFC offers a scalable and effective solution for real-world satellite-based monitoring systems, particularly in scenarios where rapid annotation is infeasible, such as wildfire tracking, agricultural monitoring, and dynamic urban mapping.
Abstract: Multilevel image segmentation is a critical task in image analysis, which imposes high requirements on the global search capability and convergence efficiency of segmentation algorithms. In this paper, an improved Artificial Protozoa Optimization algorithm, termed the two-stage Taguchi-assisted Gaussian-Levy Artificial Protozoa Optimization (TGAPO) algorithm, is proposed and applied to multilevel image segmentation. The proposed algorithm adopts a two-stage evolutionary mechanism. In the first stage, Gaussian perturbation is introduced to enhance local search capability; in the second stage, Levy flight is incorporated to expand the global search range; finally, the Taguchi strategy is employed to further refine the optimal solution. Consequently, the global optimization performance and robustness of the algorithm are significantly improved. To evaluate the effectiveness of the proposed TGAPO algorithm, comparative experiments are conducted with representative optimization algorithms, including the Grey Wolf Optimizer (GWO) and Particle Swarm Optimization (PSO), in the context of multilevel image segmentation. Segmentation quality is assessed using the minimum cross-entropy function as the performance metric. Experimental results demonstrate that the TGAPO algorithm outperforms the comparison algorithms in segmentation accuracy and convergence speed, and exhibits superior stability in high-threshold segmentation tasks. Furthermore, the proposed method achieves excellent multi-threshold segmentation performance on color images and shows strong potential for practical applications.
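The minimum cross-entropy function used above as the fitness metric can be illustrated for the single-threshold case; a metaheuristic such as TGAPO would instead search vectors of thresholds to minimize the same objective. This is a generic sketch of Li's criterion with an exhaustive search, not the paper's code:

```python
import numpy as np

def min_cross_entropy_threshold(image, levels=256):
    """Li's minimum cross-entropy criterion for one threshold: minimize
    -(m_low*log(mu_low) + m_high*log(mu_high)), where m is the first
    moment and mu the mean gray level on each side of the threshold.
    Assumes both sides have nonzero mean gray level."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    g = np.arange(levels)
    best_t, best_eta = 1, np.inf
    for t in range(1, levels):
        w0, w1 = hist[:t].sum(), hist[t:].sum()
        if w0 == 0 or w1 == 0:
            continue  # skip splits with an empty side
        m0 = float((g[:t] * hist[:t]).sum())
        m1 = float((g[t:] * hist[t:]).sum())
        eta = -(m0 * np.log(m0 / w0) + m1 * np.log(m1 / w1))
        if eta < best_eta:
            best_t, best_eta = t, eta
    return best_t

# Toy image: two dark populations (50, 70) and one bright one (180).
img = np.concatenate([np.full(400, 50), np.full(100, 70), np.full(500, 180)])
t = min_cross_entropy_threshold(img)
print(t)  # groups the two dark levels together, below the bright one
```

For K thresholds the same objective is summed over K + 1 classes, and the search space grows combinatorially, which is exactly why population-based optimizers such as TGAPO, GWO, and PSO are used in place of exhaustive evaluation.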
Funding: Funded by the Ongoing Research Funding Program-Research Chairs (ORF-RC-2025-2400), King Saud University, Riyadh, Saudi Arabia.
Abstract: Recent studies indicate that millions of individuals suffer from renal diseases, with renal carcinoma, a type of kidney cancer, emerging as both a chronic illness and a significant cause of mortality. Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) have become essential tools for diagnosing and assessing kidney disorders. However, accurate analysis of these medical images is critical for detecting and evaluating tumor severity. This study introduces an integrated hybrid framework that combines three complementary deep learning models for kidney tumor segmentation from MRI images. The proposed framework fuses a customized U-Net and Mask R-CNN using a weighted scheme to achieve semantic and instance-level segmentation. The fused outputs are further refined through edge detection using Stochastic Feature Mapping Neural Networks (SFMNN), while volumetric consistency is ensured through Improved Mini-Batch K-Means (IMBKM) clustering integrated with an Encoder-Decoder Convolutional Neural Network (EDCNN). The outputs of these three stages are combined through a weighted fusion mechanism, with optimal weights determined empirically. Experiments on MRI scans from the TCGA-KIRC dataset demonstrate that the proposed hybrid framework significantly outperforms standalone models, achieving a Dice Score of 92.5%, an IoU of 87.8%, a Precision of 93.1%, a Recall of 90.8%, and a Hausdorff Distance of 2.8 mm. These findings validate that the weighted integration of complementary architectures effectively overcomes key limitations in kidney tumor segmentation, leading to improved diagnostic accuracy and robustness in medical image analysis.
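The weighted fusion mechanism described above can be sketched as a convex combination of per-model masks followed by binarization; the weights, toy masks, and Dice helper below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def fuse_masks(masks, weights):
    """Weighted average of per-model (probability or binary) masks,
    binarized at 0.5. Weights are normalized to sum to one."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    fused = sum(wi * m for wi, m in zip(w, masks))
    return (fused >= 0.5).astype(np.uint8)

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2 * inter / (pred.sum() + gt.sum())

# Toy binary masks from three hypothetical models over a 2x3 region.
m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 0]])
gt = np.array([[1, 1, 0], [0, 1, 0]])
pred = fuse_masks([m1, m2, m3], weights=[0.5, 0.3, 0.2])
print(dice(pred, gt))
```

Each model's disagreements are outweighed by the others, so the fused mask can match the ground truth even when no single mask does; in practice the weights would be tuned on a validation set, as the abstract's "determined empirically" suggests.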
Funding: Supported by the Zhejiang Provincial Natural Science Foundation of China (LY22F020025) and the National Natural Science Foundation of China (62072126).
Abstract: The Transformer has achieved great success in the field of medical image segmentation, but its quadratic computational complexity limits its application in dense medical image prediction. Recently, the receptance weighted key value (RWKV) architecture has garnered widespread attention due to its linear computational complexity and its capability of parallel computation during training. Despite the RWKV model's proficiency in addressing long-range modeling tasks with linear computational complexity, most current RWKV-based approaches employ static scanning patterns. These patterns may inadvertently incorporate biased prior knowledge into the model's predictions. To address this challenge, we propose a multi-head scan strategy combined with padding methods to effectively simulate spatial continuity in 2D images. Within the Feature Aggregation Attention (FAA) module, asymmetric convolutions are designed to aggregate 1D sequence features along a single dimension, thereby expanding effective receptive fields while preserving structural sparsity. Additionally, panoramic token shift (P-Shift) effectively models local dependency relationships by moving tokens from a wide receptive field. Extensive experiments conducted on the ISIC17/18 and ACDC datasets demonstrate that our method exhibits superior performance in dense medical image prediction tasks.
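The idea of a token shift, moving features from neighboring tokens into each position to model local dependencies, can be sketched on a 1D token sequence. This is a simplified illustration of the general shift mechanism, not the paper's P-Shift; the offsets, zero padding, and uniform averaging are illustrative assumptions.

```python
def token_shift(tokens, offsets=(-1, 1)):
    """Mix each token with neighbors at the given offsets (zero-padded borders).

    tokens: list of feature vectors (lists of floats).
    Returns a new sequence where each position averages its own features
    with those shifted in from neighboring positions, injecting local
    context without any convolution.
    """
    n, dim = len(tokens), len(tokens[0])
    zero = [0.0] * dim  # padding vector for out-of-range neighbors
    shifted = []
    for i in range(n):
        neighbors = [tokens[i + o] if 0 <= i + o < n else zero for o in offsets]
        mixed = [(tokens[i][d] + sum(nb[d] for nb in neighbors)) / (1 + len(offsets))
                 for d in range(dim)]
        shifted.append(mixed)
    return shifted
```

Widening `offsets` corresponds loosely to the "wide receptive field" the abstract attributes to P-Shift.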
Funding: Open Access funding provided by the National Institutes of Health (NIH). The funding for this project was provided by the NCATS Intramural Fund.
Abstract: Semantic segmentation plays a foundational role in biomedical image analysis, providing precise information about cellular, tissue, and organ structures in both biological and medical imaging modalities. Traditional approaches often fail in the face of challenges such as low contrast, morphological variability, and densely packed structures. Recent advancements in deep learning have transformed segmentation capabilities through the integration of fine-scale detail preservation, coarse-scale contextual modeling, and multi-scale feature fusion. This work provides a comprehensive analysis of state-of-the-art deep learning models, including U-Net variants, attention-based frameworks, and Transformer-integrated networks, highlighting innovations that improve accuracy, generalizability, and computational efficiency. Key architectural components such as convolution operations, shallow and deep blocks, skip connections, and hybrid encoders are examined for their roles in enhancing spatial representation and semantic consistency. We further discuss the importance of hierarchical and instance-aware segmentation and annotation in interpreting complex biological scenes and multiplexed medical images. By bridging methodological developments with diverse application domains, this paper outlines current trends and future directions for semantic segmentation, emphasizing its critical role in facilitating annotation, diagnosis, and discovery in biomedical research.
Funding: Science Research Foundation of Yunnan Fundamental Research Foundation of Application (grant number: 2009ZC049M) and the Science Research Foundation for the Overseas Chinese Scholars, State Education Ministry (grant number: 2010-1561).
Abstract: This paper proposes an image segmentation method that combines wavelet multi-scale edge detection with entropy-based iterative threshold selection. The image to be segmented is decomposed into high- and low-frequency parts: wavelet multi-scale analysis is applied to the high-frequency part for edge detection, while the low-frequency part is segmented using the entropy-based iterative threshold selection method. By accounting for both image edges and regions, the method was tested on a CT image of the thorax for segmentation of the lungs. Experimental results show that the method segments the region of interest more efficiently than conventional methods.
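For the thresholding half of the method, one well-known entropy-based criterion is Kapur's, which picks the threshold maximizing the summed entropies of the two classes. The sketch below is a minimal illustration of that idea; the paper's iterative entropy method may differ in detail, and the function name and histogram representation are assumptions.

```python
import math

def kapur_threshold(hist):
    """Entropy-based threshold over a gray-level histogram (Kapur's criterion).

    hist: list of pixel counts per gray level.
    Returns the level t maximizing the summed entropies of the background
    class [0, t] and the foreground class (t, L-1].
    """
    total = sum(hist)
    p = [h / total for h in hist]  # normalize counts to probabilities
    best_t, best_score = 0, float("-inf")
    for t in range(len(hist) - 1):
        w0 = sum(p[: t + 1])  # background class probability mass
        w1 = 1.0 - w0         # foreground class probability mass
        if w0 <= 0 or w1 <= 0:
            continue
        # Entropy of each class under its renormalized distribution.
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[: t + 1] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t + 1:] if pi > 0)
        if h0 + h1 > best_score:
            best_t, best_score = t, h0 + h1
    return best_t
```

On a bimodal histogram the criterion lands between the two modes, which is the behavior the low-frequency segmentation step relies on.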
Funding: Funded by the National Natural Science Foundation of China (Grant No. 6240072655), the Hubei Provincial Key Research and Development Program (Grant No. 2023BCB151), the Wuhan Natural Science Foundation Exploration Program (Chenguang Program, Grant No. 2024040801020202), and the Natural Science Foundation of Hubei Province of China (Grant No. 2025AFB148).
Abstract: Image segmentation is attracting increasing attention in the field of medical image analysis. Given its widespread utilization across various medical applications, ensuring and improving segmentation accuracy has become a crucial topic of research. With advances in deep learning, researchers have developed numerous methods that combine Transformers and convolutional neural networks (CNNs) to create highly accurate models for medical image segmentation. However, efforts to further enhance accuracy by developing larger and more complex models, or by training with more extensive datasets, significantly increase computational resource consumption. To address this problem, we propose BiCLIP-nnFormer (the prefix "Bi" refers to the use of two distinct CLIP models), a virtual multimodal instrument that leverages CLIP models to enhance the segmentation performance of the medical segmentation model nnFormer. Since the two CLIP models (PMC-CLIP and CoCa-CLIP) are pre-trained on large datasets, they do not require additional training, thus conserving computational resources. These models are used offline to extract image and text embeddings from medical images. These embeddings are then processed by the proposed 3D CLIP adapter, which adapts the CLIP knowledge for segmentation tasks through fine-tuning. Finally, the adapted embeddings are fused with feature maps extracted from the nnFormer encoder to generate predicted masks. This process enriches the representation capabilities of the feature maps by integrating global multimodal information, leading to more precise segmentation predictions. We demonstrate the superiority of BiCLIP-nnFormer, and the effectiveness of using CLIP models to enhance nnFormer, through experiments on two public datasets, namely the Synapse multi-organ segmentation dataset (Synapse) and the Automatic Cardiac Diagnosis Challenge dataset (ACDC), as well as a self-annotated lung multi-category segmentation dataset (LMCS).
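The fusion of a global embedding with spatial encoder feature maps can be sketched in its simplest form: broadcast the embedding across all spatial positions and add it to the features. This is a deliberately minimal stand-in for the paper's 3D CLIP adapter, which is learned; the function name, the additive fusion, and the `alpha` mixing weight are assumptions for illustration.

```python
def inject_global_embedding(feature_map, embedding, alpha=0.5):
    """Fuse a global (e.g., CLIP-style) embedding into a spatial feature map.

    feature_map: H x W x C nested lists; embedding: length-C vector.
    Broadcast-adds the scaled embedding to every spatial location, so each
    position carries both local detail and global multimodal context.
    """
    return [[[f + alpha * e for f, e in zip(cell, embedding)]
             for cell in row] for row in feature_map]
```

A learned adapter would replace the fixed `alpha` with trainable projections, but the broadcast-and-combine structure is the same.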